Dataset fields:
paper_name — string (11 to 170 characters)
text — string (8.07k to 307k characters)
summary — string (152 to 6.16k characters)
paper_id — string (43 characters)
Fair Differential Privacy Can Mitigate the Disparate Impact on Model Accuracy
1 INTRODUCTION . Protecting data privacy is a significant concern in many data-driven decision-making applications ( Zhu et al. , 2017 ) , such as social networking services , recommender systems , and location-based services . For example , the United States Census Bureau will employ differential privacy for the first time on the 2020 census data ( Bureau , 2020 ) . Differential privacy ( DP ) guarantees that the released model cannot be exploited by attackers to derive whether one particular instance is present or absent in the training dataset ( Dwork et al. , 2006 ) . However , DP intentionally restricts instance influence and introduces noise into the learning procedure . When we enforce DP on a model , DP may amplify the discriminative effect towards underrepresented and relatively complicated classes ( Bagdasaryan et al. , 2019 ; Du et al. , 2020 ; Jaiswal & Provost , 2020 ) . That is , the reduction in accuracy from non-private learning to private learning may be uneven across classes . There are several empirical studies on this utility reduction : Bagdasaryan et al. ( 2019 ) and Du et al. ( 2020 ) show that model accuracy in private learning tends to decrease more on classes that already have lower accuracy in non-private learning , while Jaiswal & Provost ( 2020 ) reports different observations , namely that the inequality in accuracy is not consistent across classes over multiple setups and datasets . It bears emphasizing that although private learning improves individual participants ' security , the model 's performance should not harm one class more than others .

The machine learning model , specifically in supervised learning tasks , outputs a hypothesis f ( x ; θ ) parameterized by θ , which predicts the label y given the unprotected attributes x . Each instance 's label y belongs to a class k . The model aims to minimize the objective ( loss ) function L ( θ ; x , y ) , i.e. ,

\theta^* := \arg\min_{\theta} \mathbb{E}\big[\mathcal{L}(\theta; x, y)\big] . \quad (1)

Our work builds on a recent advance in training machine learning models with a differentially private mechanism , namely DPSGD ( Abadi et al. , 2016 ) , for releasing the model . The key idea can be extended to other DP mechanisms with a specialized noise form ( generally Laplacian or Gaussian ) . The iterative update scheme of DPSGD at the ( t+1 ) -th iteration is of the form

\tilde{\theta}^{t+1} = \tilde{\theta}^{t} - \mu_t \cdot \frac{1}{n}\Big( \sum_{i \in S_t} \frac{g^t(x_i)}{\max\big(1, \|g^t(x_i)\|_2 / C\big)} + \xi \mathbf{1} \Big) , \quad (2)

where n and μ_t denote the batch size and step size ( learning rate ) , respectively ; S_t denotes the randomly chosen instance set ; the vector \mathbf{1} is filled with the scalar value one ; and g^t(x_i) denotes the gradient of the loss function in (1) at iteration t , i.e. , ∇L( y_i ; θ^t , x_i ) . The two key operations of DPSGD are : i ) clipping each gradient g^t(x_i) in ℓ2-norm at the threshold parameter C ; and ii ) adding noise ξ drawn from the Gaussian distribution N( 0 , σ²C² ) with noise scale σ and clipping threshold parameter C . These operations enable training machine learning models with non-convex objectives at a manageable privacy cost . Building on the standard analysis of SGD , we theoretically analyze the sufficient-decrease scheme of DPSGD , i.e. ,

\mathbb{E}[f(\theta^{t+1})] \le f(\theta^{t}) + \mathbb{E}\big[\langle \nabla f(\theta^{t}), \theta^{t+1} - \theta^{t} \rangle\big] + \frac{L}{2}\,\mathbb{E}\big[\|\theta^{t+1} - \theta^{t}\|^2\big] + \tau(C, \sigma; \theta^{t}) , \quad (3)

where the last term τ( C , σ ; θ^t ) denotes the gap in expected loss relative to ideal SGD at the ( t+1 ) -th iteration and depends on the parameters C and σ .
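As a concrete illustration of the update in (2), here is a minimal NumPy sketch of one DPSGD step. The function name, batch handling, and parameter values are our own assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def dpsgd_step(theta, per_example_grads, lr, C, sigma, rng):
    """One DPSGD update as in Eq. (2): clip each per-example gradient to
    l2-norm at most C, sum, add Gaussian noise N(0, sigma^2 C^2 I), average."""
    n = len(per_example_grads)
    clipped = [g / max(1.0, np.linalg.norm(g) / C) for g in per_example_grads]
    noise = rng.normal(0.0, sigma * C, size=theta.shape)
    noisy_grad = (np.sum(clipped, axis=0) + noise) / n
    return theta - lr * noisy_grad

# Toy usage with random per-example gradients (illustrative only).
rng = np.random.default_rng(0)
theta = np.zeros(4)
grads = [rng.normal(size=4) for _ in range(32)]
theta = dpsgd_step(theta, grads, lr=0.1, C=1.0, sigma=1.1, rng=rng)
```

A full private-training pipeline would additionally track the accumulated (ε, δ) cost of each such step with a privacy accountant, which is omitted here.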
The term τ( C , σ ; θ ) , which we call the bias-variance term , can be written as

\tau(C, \sigma; \theta) = \underbrace{2\Big(1 + \frac{1}{\mu_t L}\Big)\,\|\nabla f(\theta)\| \cdot \eta + \eta^2}_{\text{Clipping bias}} \;+\; \underbrace{\frac{1}{n^2}\,\sigma^2 C^2 |\mathbf{1}|}_{\text{Noise variance}} , \quad (4)

where L denotes the Lipschitz constant of f ; |\mathbf{1}| denotes the vector dimension ; and

\eta := \frac{1}{n} \sum_{i \in I_{\|g^t(x_i)\| > C}} \big(\|g^t(x_i)\| - C\big) ,

where I_{\|g^t(x_i)\| > C} denotes the index set of instances satisfying \|g^t(x_i)\| > C . The detailed proofs of (3) and (4) can be found in Appendix A . τ( C , σ ) consists of a clipping-bias term and a noise-variance term : the former is the amount by which the private gradient differs from the non-private gradient due to influence truncation , and the latter depends on the scale of the added noise . As a result , we call τ( C , σ ) the bias-variance term .

As underrepresented-class instances or complicated instances manifest differently from common instances , a uniform threshold parameter C may incur significant accuracy disparity across classes . In Figure 1(a) , we apply DPSGD ( Abadi et al. , 2016 ) to the unbalanced MNIST dataset ( Bagdasaryan et al. , 2019 ) to numerically study the inequality of utility loss ( i.e. , the prediction-accuracy gap between the private model and the non-private model ) caused by differential privacy . For the unbalanced MNIST dataset , the underrepresented class ( Class 8 ) suffers significantly larger utility loss than the other classes ( e.g. , Class 2 ) in the private model . DPSGD results in a 6.74 % decrease in accuracy on the well-represented classes , but accuracy on the underrepresented class drops by 74.16 % . Training with more epochs does not reduce this gap while exhausting the privacy budget . DPSGD clearly introduces negative discrimination against the underrepresented class ( which already has lower accuracy in the non-private SGD model ) . Further , Figure 1(b) shows the classification accuracy of different sub-classes for τ( C , σ ; θ ) on the unbalanced MNIST dataset . A larger bias-variance term τ( C , σ ; θ ) ( determined by C and σ ) results in a more serious accuracy bias across classes ; similar results are reported in ( Bagdasaryan et al. , 2019 ; Du et al. , 2020 ; Jaiswal & Provost , 2020 ) . Both the theoretical analysis and the experimental discussion suggest that minimizing the clipping bias and the noise variance simultaneously could yield “ better ” DP parameters , which mitigates the accuracy bias between different classes . This motivates us to pursue fairness with a self-adaptive differential privacy scheme¹ .

This paper proposes a fair differential privacy algorithm ( FairDP ) to mitigate the disparate-impact problem . FairDP introduces a self-adaptive DP mechanism and automatically adjusts instance influence in each class . The main idea is to formulate the problem as a bilevel program , minimizing the bias-variance term as the upper-level objective with a lower-level differentially private machine learning model . The self-adaptive clipping threshold parameters are computed by balancing the fairness bias-variance and per-class accuracy terms simultaneously . Our contributions can be summarized as follows :

• FairDP uses a self-adaptive clipping threshold to adjust the instance influence in each class , so the model accuracy for each class is calibrated according to its privacy cost through fairness balancing . The utility reduction is then similar across classes in FairDP .
• To our knowledge , we are the first to introduce bilevel programming into private learning with the aim of mitigating the disparate impact on model accuracy . We further design an alternating scheme to learn the self-adaptive clipping and the private model simultaneously .

• Our experimental evaluation shows that FairDP strikes a balance among privacy , fairness , and accuracy by performing stratified clipping over the different subclasses .

The road map of this paper is as follows . Section 2 describes the proposed FairDP algorithm . In Section 3 , we provide a brief but complete introduction to related work in privacy-aware learning , fairness-aware learning , and the intersection of differential privacy and fairness . Extensive experiments are presented in Section 4 , and we conclude the paper and discuss future work in Section 5 .

2 FAIRDP : FAIR DIFFERENTIAL PRIVACY .

2.1 THE BILEVEL FAIRDP FORMULATION .

The intuition of our approach is to fairly balance the level of privacy ( based on the clipping threshold ) for each class according to its bias-variance term , which is introduced by the associated DP . The bias-variance terms arise from capping instance influences to reduce the sensitivity of a machine learning algorithm . In detail , a self-adaptive DP mechanism is designed to balance the bias-variance difference among all groups , while the obtained DP mechanism must simultaneously adapt to the original machine learning problem . Recalling the definition of the machine learning problem , we assume there are ℓ classes , and according to (4) the bias-variance term for class k ∈ { 1 , · · · , ℓ } can be written as

\tau_k(C_k, \sigma; \theta^*) := 2\Big(1 + \frac{1}{\mu_t L}\Big)\,\|\nabla f(\theta^*)\| \cdot \eta_k + \eta_k^2 + \frac{|G_k|^2}{n^2}\,\sigma^2 C_k^2 |\mathbf{1}| , \quad (5)

where C_k denotes the clipping parameter for class k and G_k denotes the data sample set of class k . As motivated in Section 1 , one could minimize the associated bias-variance term to obtain a single unified clipping parameter for the machine learning problem . However , to mitigate the disparate impact on model accuracy across classes , we minimize the summation of the per-class bias-variance terms . This objective leads to self-adaptive clipping thresholds for the different classes , while the class-specific DP schemes must still provide privacy protection for the machine learning model . The self-adaptive clipping threshold parameters are then used to learn the original machine learning model privately with the DP mechanism . A simple bilevel programming problem² ( Dempe et al. , 2019 ; Liu et al. , 2019 ) is introduced to model these two goals , which influence each other . The formulation is

\min_{\{C_k\}, \theta} \sum_{k=1}^{\ell} \tau_k(C_k, \sigma; \theta) , \quad (6a)
\text{s.t. } \theta \in \arg\min_{\theta} \mathcal{L}\big(\theta; \{G_k\}_{k=1}^{\ell}\big) , \quad (6b)

where the upper-level problem (6a) aims to fairly adjust the clipping threshold parameters for all classes and is related to the classification model θ , and the lower-level problem (6b) learns the classification model under the differential privacy scheme with the self-adaptive clipping thresholds { C_k } . These two objectives are coupled , although the model of the lower-level problem is determined only by θ .

¹ Note that we do not attempt to optimize the bias-variance bound in a differentially private way ; we are most interested in understanding the forces at play .

² “ Simple bilevel programming ” does not mean that the bilevel problem is easy ; it denotes a specific class of bilevel programming problems .
The effect of clipping is reflected through the DP calculation procedure . Guided by the bias-variance term in (6a) , the parameters of the DP learning can be updated simultaneously with the learning of the classifier in (6b) .

Algorithm 1 : The FairDP Method
Input : instances { ( x_1 , y_1 ) , · · · , ( x_N , y_N ) } , objective function L( θ ; x , y ) , learning rate μ_t
1:  Initialize θ^0
2:  for t ← 1 to T do
3:      Randomly sample a batch of instances S_t with probability |S_t| / N
        // Compute gradients
4:      for x_i ∈ S_t do
5:          g^t(x_i) ← ∇_{θ^t} L( y_i ; θ^t , x_i )
6:      end for
        // Minimize the bias-variance term
7:      C_k^{t+1} ← arg min_{C_k} τ_k( C_k , σ ; θ^t )
        // Clip gradients
8:      for x_i ∈ S_t with y_i = k do
9:          ḡ^t(x_i) ← g^t(x_i) / max( 1 , ‖g^t(x_i)‖_2 / C_k^{t+1} )
10:     end for
        // Add noise
11:     g̃^t ← (1/n) ( Σ_i ḡ^t(x_i) + ξ·1 )
        // Noisy gradient descent
12:     θ^{t+1} ← θ^t − μ_t g̃^t
13: end for
Output : θ^T , accumulated privacy cost ( ε , δ )
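To make Algorithm 1 concrete, the sketch below runs one FairDP-style update with per-class clipping thresholds chosen by a simple grid search over a bias-variance term in the spirit of (5). The gradient function, the grid, and the constants (Lipschitz estimate, gradient-norm estimate, noise calibration) are illustrative assumptions rather than the paper's released code.

```python
import numpy as np

def tau_k(C, grad_norms_k, sigma, n, dim, grad_f_norm, lr, lipschitz):
    """Per-class bias-variance term in the spirit of Eq. (5): clipping bias + noise variance."""
    eta = np.clip(grad_norms_k - C, 0.0, None).sum() / n
    bias = 2.0 * (1.0 + 1.0 / (lr * lipschitz)) * grad_f_norm * eta + eta ** 2
    var = (len(grad_norms_k) ** 2 / n ** 2) * sigma ** 2 * C ** 2 * dim
    return bias + var

def fairdp_step(theta, xs, ys, grad_fn, classes, lr=0.1, sigma=1.0,
                grid=np.linspace(0.1, 5.0, 50), rng=np.random.default_rng(0)):
    """One FairDP update: choose C_k per class by minimizing tau_k over a grid,
    clip each per-example gradient with its class threshold, add noise, and step."""
    grads = np.stack([grad_fn(theta, x, y) for x, y in zip(xs, ys)])
    norms = np.linalg.norm(grads, axis=1)
    n, dim = len(xs), theta.size
    grad_f_norm = float(np.linalg.norm(grads.mean(axis=0)))  # rough stand-in for ||grad f(theta)||
    clipped = np.zeros_like(grads)
    for k in classes:
        mask = np.asarray(ys) == k
        if not mask.any():
            continue
        C_k = min(grid, key=lambda C: tau_k(C, norms[mask], sigma, n, dim,
                                            grad_f_norm, lr, lipschitz=1.0))
        clipped[mask] = grads[mask] / np.maximum(1.0, norms[mask] / C_k)[:, None]
    noise = rng.normal(0.0, sigma, size=dim)  # noise calibration kept generic in this sketch
    return theta - lr * (clipped.sum(axis=0) + noise) / n

# Toy usage with a linear model and squared loss (illustrative only).
grad_fn = lambda theta, x, y: (x @ theta - y) * x
rng = np.random.default_rng(1)
xs, ys = rng.normal(size=(64, 5)), rng.integers(0, 2, size=64)
theta = fairdp_step(np.zeros(5), xs, ys, grad_fn, classes=[0, 1])
```

A complete implementation would also accumulate the (ε, δ) privacy cost across iterations, as listed in the algorithm's output.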
The paper introduces an algorithm for mitigating disparate impact of private learning (DP-SGD) on different groups of a given population. In each iteration of DP-SGD, instead of using a uniform gradient clipping threshold for all groups, the proposed Fair DP-SGD algorithm uses an optimal clipping threshold (one that minimizes the bias-variance tradeoff) for each group separately. The authors include experimental results to show how well their algorithm performs compared to state-of-the-art algorithms.
SP:53e0d7909b00c88201dc1d7a8da7bd1efa4eb48e
RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs
1 INTRODUCTION . Knowledge graphs are collections of real-world facts , which are useful in various applications . Each fact is typically specified as a triplet ( h , r , t ) or equivalently r ( h , t ) , meaning entity h has relation r with entity t. For example , Bill Gates is the Co-founder of Microsoft . As it is impossible to collect all facts , knowledge graphs are incomplete . Therefore , a fundamental problem on knowledge graphs is to predict missing facts by reasoning with existing ones , a.k.a . knowledge graph reasoning . This paper studies learning logic rules for reasoning on knowledge graphs . For example , one may extract a rule ∀X , Y , Z hobby ( X , Y ) ← friend ( X , Z ) ∧ hobby ( Z , Y ) , meaning that if Z is a friend of X and Z has hobby Y , then Y is also likely the hobby of X . Then the rule can be applied to infer new hobbies of people . Such logic rules are able to improve interpretability and precision of reasoning ( Qu & Tang , 2019 ; Zhang et al. , 2020 ) . Moreover , logic rules can also be reused and generalized to other domains and data ( Teru & Hamilton , 2020 ) . However , due to the large search space , inferring high-quality logic rules for reasoning on knowledge graphs is a challenging task . Indeed , a variety of methods have been proposed for learning logic rules from knowledge graphs . Most traditional methods such as path ranking ( Lao & Cohen , 2010 ) and Markov logic networks ( Richardson & Domingos , 2006 ) enumerate relational paths on graphs as candidate logic rules , and then learn a weight for each rule as an assessment of rule qualities . There are also some recent methods based on neural logic programming ( Yang et al. , 2017 ) and neural theorem provers ( Rocktäschel & Riedel , 2017 ) , which are able to learn logic rules and their weights simultaneously in a differentiable way . Though empirically effective for prediction , the search space of these methods is exponentially large , making it hard to identify high-quality logic rules . Besides , some recent efforts ( Xiong et al. , 2017 ) formulate the problem as a sequential decision making process , and use reinforcement learning to search for logic rules , which significantly reduces search complexity . However , due to the large action space and sparse reward in training , the performance of these methods is not yet satisfying . * Equal contribution . In this paper , we propose a principled probabilistic approach called RNNLogic which overcomes the above limitations . Our approach consists of a rule generator as well as a reasoning predictor with logic rules , which are simultaneously trained to enhance each other . The rule generator provides logic rules which are used by the reasoning predictor for reasoning , while the reasoning predictor provides effective reward to train the rule generator , which helps significantly reduce the search space . Specifically , for each query-answer pair , e.g. , q = ( h , r , ? ) and a = t , we model the probability of the answer conditioned on the query and existing knowledge graph G , i.e. , p ( a|G , q ) , where a set of logic rules z 1 is treated as a latent variable . The rule generator defines a prior distribution over logic rules for each query , i.e. , p ( z|q ) , which is parameterized by a recurrent neural network . The reasoning predictor computes the likelihood of the answer conditioned on the logic rules and the existing knowledge graph G , i.e. , p ( a|G , q , z ) . 
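For intuition on how a chain rule of the form r(X0, Xl) ← r1(X0, X1) ∧ · · · ∧ rl(Xl−1, Xl) produces candidate answers, here is a small self-contained example on a toy knowledge graph; the triples and the helper function are made up purely for illustration.

```python
# Toy knowledge graph as a set of (head, relation, tail) triples.
kg = {
    ("alice", "friend", "bob"),
    ("bob", "hobby", "tennis"),
    ("bob", "friend", "carol"),
    ("carol", "hobby", "painting"),
}

def apply_chain_rule(kg, head_rel, body):
    """Apply head_rel(X0, Xl) <- body[0](X0, X1) ^ ... ^ body[-1](Xl-1, Xl)
    and return the inferred head triples."""
    # Start with every entity bound to X0, then follow the body relations in order.
    paths = {(h, h) for (h, _, _) in kg} | {(t, t) for (_, _, t) in kg}
    for rel in body:
        paths = {(x0, t) for (x0, x) in paths for (h, r, t) in kg if r == rel and h == x}
    return {(x0, head_rel, xl) for (x0, xl) in paths}

# hobby(X, Y) <- friend(X, Z) ^ hobby(Z, Y)
print(apply_chain_rule(kg, "hobby", ["friend", "hobby"]))
# -> {('alice', 'hobby', 'tennis'), ('bob', 'hobby', 'painting')} (set order may vary)
```

The reasoning predictor in RNNLogic scores such derived candidates with learned rule weights rather than returning them unweighted as this toy helper does.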
At each training iteration , we first sample a few logic rules from the rule generator , and further update the reasoning predictor to try out these rules for prediction . Then an EM algorithm ( Neal & Hinton , 1998 ) is used to optimize the rule generator . In the E-step , a set of high-quality logic rules are selected from all the generated rules according to their posterior probabilities . In the M-step , the rule generator is updated to imitate the high-quality rules selected in the E-step . Extensive experimental results show that RNNLogic outperforms state-of-the-art methods for knowledge graph reasoning 2 . Besides , RNNLogic is able to generate high-quality logic rules . 2 RELATED WORK . Our work is related to existing efforts on learning logic rules for knowledge graph reasoning . Most traditional methods enumerate relational paths between query entities and answer entities as candidate logic rules , and further learn a scalar weight for each rule to assess the quality . Representative methods include Markov logic networks ( Kok & Domingos , 2005 ; Richardson & Domingos , 2006 ; Khot et al. , 2011 ) , relational dependency networks ( Neville & Jensen , 2007 ; Natarajan et al. , 2010 ) , rule mining algorithms ( Galárraga et al. , 2013 ; Meilicke et al. , 2019 ) , path ranking ( Lao & Cohen , 2010 ; Lao et al. , 2011 ) and probabilistic personalized page rank ( ProPPR ) algorithms ( Wang et al. , 2013 ; 2014a ; b ) . Some recent methods extend the idea by simultaneously learning logic rules and the weights in a differentiable way , and most of them are based on neural logic programming ( Rocktäschel & Riedel , 2017 ; Yang et al. , 2017 ; Cohen et al. , 2018 ; Sadeghian et al. , 2019 ; Yang & Song , 2020 ) or neural theorem provers ( Rocktäschel & Riedel , 2017 ; Minervini et al. , 2020 ) . These methods and our approach are similar in spirit , as they are all able to learn the weights of logic rules efficiently . However , these existing methods try to simultaneously learn logic rules and their weights , which is nontrivial in terms of optimization . The main innovation of our approach is to separate rule generation and rule weight learning by introducing a rule generator and a reasoning predictor respectively , which can mutually enhance each other . The rule generator generates a few high-quality logic rules , and the reasoning predictor only focuses on learning the weights of such high-quality rules , which significantly reduces the search space and leads to better reasoning results . Meanwhile , the reasoning predictor can in turn help identify some useful logic rules to improve the rule generator . The other kind of rule learning method is based on reinforcement learning . The general idea is to train pathfinding agents , which search for reasoning paths in knowledge graphs to answer questions , and then extract logic rules from reasoning paths ( Xiong et al. , 2017 ; Chen et al. , 2018 ; Das et al. , 2018 ; Lin et al. , 2018 ; Shen et al. , 2018 ) . However , training effective pathfinding agents is highly challenging , as the reward signal ( i.e. , whether a path ends at the correct answer ) can be extremely sparse . Although some studies ( Lin et al. , 2018 ) try to get better reward by using embedding-based methods for reward shaping , the performance is still worse than most embedding-based methods . In our approach , the rule generator has a similar role to those pathfinding agents . 
The major difference is that we simultaneously train the rule generator and a reasoning predictor with logic rules , which mutually enhance each other . The reasoning predictor provides effective reward for training the rule generator , and the rule generator offers high-quality rules to improve the reasoning predictor . Our work is also related to knowledge graph embedding , which solves knowledge graph reasoning by learning entity and relation embeddings in latent spaces ( Bordes et al. , 2013 ; Wang et al. , 2014c ; Yang et al. , 2015 ; Nickel et al. , 2016 ; Trouillon et al. , 2016 ; Cai & Wang , 2018 ; Dettmers et al. , 2018 ; Balazevic et al. , 2019 ; Sun et al. , 2019 ) . With proper architectures , these methods are able to learn 1More precisely , z is a multiset . In this paper , we use “ set ” to refer to “ multiset ” for conciseness . 2The codes of RNNLogic are available : https : //github.com/DeepGraphLearning/RNNLogic High-quality Logic Rules . some simple logic rules . For example , TransE ( Bordes et al. , 2013 ) can learn some composition rules . RotatE ( Sun et al. , 2019 ) can mine some composition rules , symmetric rules and inverse rules . However , these methods can only find some simple rules in an implicit way . In contrast , our approach explicitly trains a rule generator , which is able to generate more complicated logic rules . There are some works studying boosting rule-based models ( Goldberg & Eckstein , 2010 ; Eckstein et al. , 2017 ) , where they dynamically add new rules according to the rule weights learned so far . These methods have been proven effective in binary classification and regression . Compared with them , our approach shares similar ideas , as we dynamically update the rule generator with the feedback from the reasoning predictor , but we focus on a different task , i.e. , reasoning on knowledge graphs . 3 MODEL . In this section , we introduce the proposed approach RNNLogic which learns logic rules for knowledge graph reasoning . We first formally define knowledge graph reasoning and logic rules . Knowledge Graph Reasoning . Let pdata ( G , q , a ) be a training data distribution , where G is a background knowledge graph characterized by a set of ( h , r , t ) -triplets which we may also write as r ( h , t ) , q = ( h , r , ? ) is a query , and a = t is the answer . Given G and the query q , the goal is to predict the correct answer a . More formally , we aim to model the probabilistic distribution p ( a|G , q ) . Logic Rule . We perform knowledge graph reasoning by learning logic rules , where logic rules in this paper have the conjunctive form ∀ { Xi } li=0 r ( X0 , Xl ) ← r1 ( X0 , X1 ) ∧ · · · ∧ rl ( Xl−1 , Xl ) with l being the rule length . This syntactic structure naturally captures composition , and can easily express other common logic rules such as symmetric or inverse rules . For example , let r−1 denote the inverse relation of relation r , then each symmetric rule can be expressed as ∀ { X , Y } r ( X , Y ) ← r−1 ( X , Y ) . In RNNLogic , we treat a set of logic rules which could explain a query as a latent variable we have to infer . To do this , we introduce a rule generator and a reasoning predictor using logic rules . Given a query , the rule generator employs a recurrent neural network to generate a set of logic rules , which are given to the reasoning predictor for prediction . We optimize RNNLogic with an EM-based algorithm . 
In each iteration , we start with updating the reasoning predictor to try out some logic rules generated by the rule generator . Then in the E-step , we identify a set of high-quality rules from all generated rules via posterior inference , with the prior from the rule generator and likelihood from the reasoning predictor . Finally in the M-step , the rule generator is updated with the identified high-quality rules .
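The following is a deliberately tiny caricature of this alternating scheme, with a made-up rule vocabulary, knowledge graph, and training pair: the generator is a softmax over candidate rule bodies, the predictor's likelihood is simply whether a rule derives the correct answer, the E-step keeps the rule with the highest prior-times-likelihood, and the M-step nudges the generator toward it. None of these simplifications are the paper's actual parameterization.

```python
import numpy as np

# Candidate rule bodies for the query relation "hobby" (illustrative).
candidate_rules = [("friend", "hobby"), ("hobby",), ("friend", "friend", "hobby")]
kg = {("alice", "friend", "bob"), ("bob", "hobby", "tennis")}
train_queries = [(("alice", "hobby"), "tennis")]       # ((head entity, relation), answer)

def rule_hits(rule, query, answer):
    """Predictor stub: does following the rule body from the query head reach the answer?"""
    frontier = {query[0]}
    for rel in rule:
        frontier = {t for h in frontier for (hh, r, t) in kg if hh == h and r == rel}
    return answer in frontier

logits = np.zeros(len(candidate_rules))                # rule-generator parameters
rng = np.random.default_rng(0)
for it in range(20):
    prior = np.exp(logits) / np.exp(logits).sum()      # p(z | q), softmax over rules
    sampled = rng.choice(len(candidate_rules), size=2, p=prior)
    # Reasoning predictor: empirical likelihood of each sampled rule on the training pairs.
    lik = np.array([np.mean([rule_hits(candidate_rules[i], q, a) for q, a in train_queries])
                    for i in sampled])
    # E-step: keep the sampled rule with the highest prior x likelihood.
    best = sampled[np.argmax(prior[sampled] * (lik + 1e-6))]
    # M-step: move the generator toward the selected high-quality rule.
    logits[best] += 0.5

print(candidate_rules[int(np.argmax(logits))])          # rule judged most useful so far
```

In RNNLogic the prior is produced by a recurrent network conditioned on the query relation, and the likelihood comes from a trained reasoning predictor, so this loop only conveys the structure of the optimization.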
There is a lot of recent work on link-prediction in knowledge graphs. One approach is based on embedding entities and relations in a knowledge graph into vector spaces, and the other is based on finding rules that imply relations, and then using these rules to find new links or facts. This paper takes the latter approach. Within the area of rule-based methods, a number of recent papers have used neural network methods to simultaneously generate rules and to find rule-weights or other related parameters (indicating how important individual rules are). Simultaneously solving for rules and rule-weights is a difficult task. In this paper, the authors propose a method where they separate the rule generation process from the weight/parameter calculation process. More importantly, they add a feedback loop from the weight calculation routine ("reasoning predictor") to the rule generation routine ("rule generator"), which in my opinion is novel, even though there have been a few recent attempts (Xiong et. al. 2017) to use reinforcement learning to search for rules. The rule generator in this paper uses a recurrent neural network (RNN) and the parameters of this RNN are modified by the reasoning predictor. In other words, the iterative process has the feature that new rule generation is influenced by the calculated weights of previously generated rules. The authors perform numerical experiments on 4 standard knowledge graphs to demonstrate the performance of their method
SP:0bdb9aa34e57b33cc411fd2f5ae54623c9ac0159
In this paper, the author proposes RNNLogic for learning FOL rules from the knowledge graph. The proposed method assigns embeddings for each relation type and uses RNN module to generate chain-like rule candidates. Candidates are evaluated with a separate evaluation module that computes the scores. The rule scores are then taken to update the generator module using EM.
SP:0bdb9aa34e57b33cc411fd2f5ae54623c9ac0159
Semi-supervised counterfactual explanations
Counterfactual explanations for machine learning models are used to find minimal interventions to the feature values such that the model changes the prediction to a different output or a target output . A valid counterfactual explanation should have likely feature values . Here , we address the challenge of generating counterfactual explanations that lie in the same data distribution as that of the training data and more importantly , they belong to the target class distribution . This requirement has been addressed through the incorporation of auto-encoder reconstruction loss in the counterfactual search process . Connecting the output behavior of the classifier to the latent space of the auto-encoder has further improved the speed of the counterfactual search process and the interpretability of the resulting counterfactual explanations . Continuing this line of research , we show further improvement in the interpretability of counterfactual explanations when the auto-encoder is trained in a semi-supervised fashion with class tagged input data . We empirically evaluate our approach on several datasets and show considerable improvement in-terms of several metrics . 1 INTRODUCTION . Recently counterfactual explanations have gained popularity as tools of explainability for AI-enabled systems . A counterfactual explanation of a prediction describes the smallest change to the feature values that changes the prediction to a predefined output . A counterfactual explanation usually takes the form of a statement like , “ You were denied a loan because your annual income was 30 , 000 . If your income had been 45 , 000 , you would have been offered a loan ” . Counterfactual explanations are important in the context of AI-based decision-making systems because they provide the data subjects with meaningful explanations for a given decision and the necessary actions to receive a more favorable/desired decision in the future . Application of counterfactual explanations in the areas of financial risk mitigation , medical diagnosis , criminal profiling , and other sensitive socio-economic sectors is increasing and is highly desirable for bias reduction . Apart from the challenges of sparsity , feasibility , and actionability , the primary challenge for counterfactual explanations is their interpretability . Higher levels of interpretability lead to higher adoption of AI-enabled decision-making systems . Higher values of interpretability will improve the trust amongst data subjects on AI-enabled decisions . AI models used for decision making are typically black-box models , the reasons can either be the computational and mathematical complexities associated with the model or the proprietary nature of the technology . In this paper , we address the challenge of generating counterfactual explanations that are more likely and interpretable . A counterfactual explanation is interpretable if it lies within or close to the model ’ s training data distribution . This problem has been addressed by constraining the search for counterfactuals to lie in the training data distribution . This has been achieved by incorporating an auto-encoder reconstruction loss in the counterfactual search process . However , adhering to training data distribution is not sufficient for the counterfactual explanation to be likely . The counterfactual explanation should also belong to the feature distribution of its target class . 
To understand this , let us consider an example of predicting the risk of diabetes in individuals as high or low . A sparse counterfactual explanation to reduce the risk of diabetes might suggest a decrease in the body mass index ( BMI ) level for an individual while leaving other features unchanged . The model might predict a low risk based on this change and the features of this individual might still be in the data distribution of the model . However , they will not lie in the data distribution of individuals with low risk of diabetes because of other relevant features of low-risk individuals like glucose tolerance , serum insulin , diabetes pedigree , etc . To address this issue , authors in Van Looveren & Klaise ( 2019 ) proposed to connect the output behavior of the classifier to the latent space of the auto-encoder using prototypes . These prototypes guide the counterfactual search process in the latent space and improve the interpretability of the resulting counterfactual explanations . However , the auto-encoder latent space is still unaware of the class tag information . This is highly undesirable , especially when using a prototype guided search for counterfactual explanations on the latent space . In this paper , we propose to build a latent space that is aware of the class tag information through joint training of the auto-encoder and the classifier . Thus the counterfactual explanations generated will not only be faithful to the entire training data distribution but also faithful to the data distribution of the target class . We show that there are considerable improvements in interpretability , sparsity and proximity metrics can be achieved simultaneously , if the auto-encoder trained in a semi-supervised fashion with class tagged input data . Our approach does not rely on the availability of train data used for the black box classifier . It can be easily generalized to a post-hoc explanation method using the semi-supervised learning framework , which relies only on the predictions on the black box model . In the next section we present the related work . Then , in section 3 we present preliminary definitions and approaches necessary to introduce our approach . In section 4 , we present our approach and empirically evaluate it in section 5 . 2 RELATED WORK . Counterfactual analysis is a concept derived from from causal intervention analysis . Counterfactuals refer to model outputs corresponding to certain imaginary scenarios that we have not observed or can not observe . Recently Wachter et al . ( 2017 ) proposed the idea of model agnostic ( without opening the black box ) counterfactual explanations , through simultaneous minimization of the error between model prediction and the desired counterfactual and distance between original instance and their corresponding counterfactual . This idea has been extended for multiple scenarios by Mahajan et al . ( 2019 ) , Ustun et al . ( 2019 ) , Poyiadzi et al . ( 2020 ) based on the incorporation of feasibility constraints , actionability and diversity of counterfactuals . Authors in Mothilal et al . ( 2020 ) proposed a framework for generating diverse set of counterfactual explanations based on determinantal point processes . They argue that a wide range of suggested changes along with a proximity to the original input improves the chances those changes being adopted by data subjects . Causal constraints of our society do not allow the data subjects to reduce their age while increasing their educational qualifications . 
Such feasibility constraints were addressed by Mahajan et al . ( 2019 ) and Joshi et al . ( 2019 ) using a causal framework . Authors in Mahajan et al . ( 2019 ) addresses the feasibility of counterfactual explanations through causal relationship constraints amongst input features . They present a method that uses structural causal models to generate actionable counterfactuals . Authors in Joshi et al . ( 2019 ) propose to characterize data manifold and then provide an optimization framework to search for actionable counterfactual explanation on the data manifold via its latent representation . Authors in Poyiadzi et al . ( 2020 ) address the issues of feasibility and actionability through feasible paths , which are based on the shortest path distances defined via density-weighted metrics . An important aspect of counterfactual explanations is their interpretability . A counterfactual explanation is more interpretable if it lies within or close to the data distribution of the training data of the black box classifier . To address this issue Dhurandhar et al . ( 2018 ) proposed the use of auto-encoders to generate counterfactual explanations which are “ close ” to the data manifold . They proposed incorporation of an auto-encoder reconstruction loss in counterfactual search process to penalize counterfactual which are not true to the data manifold . This line of research was further extended by Van Looveren & Klaise ( 2019 ) , they proposed to connect the output behaviour of the classifier to the latent space of the auto-encoder using prototypes . These prototypes improved speed of counterfactual search process and the interpretability of the resulting counterfactual explanations . While Van Looveren & Klaise ( 2019 ) connects the output behaviour of the classifier to the latent space through prototypes , the latent space is still unaware of the class tag information . We propose to build a latent space which is aware of the class tag information through joint training of the auto-encoder and the classifier . Thus the counterfactual explanations generated will not only be faithful the entire training data distribution but also faithful the data distribution of the target class . In a post-hoc scenario where access to the training data is not guaranteed , we propose to use the input-output pair data of the black box classifier to jointly train the auto-encoder and classifier in the semi-supervised learning framework . Authors in Zhai & Zhang ( 2016 ) , Gogna et al . ( 2016 ) have explored the use semisupervised auto-encoders for sentiment analysis and analysis of biomedical signal analysis . Authors in Haiyan et al . ( 2015 ) propose a joint framework of representation and supervised learning which guarantees not only the semantics of the original data from representation learning but also fit the training data well via supervised learning . However , as far as our knowledge goes , semi-supervised learning has not been used to generate counterfactual explanations and we experimentally show that semi-supervised learning framework generates more interpretable counterfactual explanations . 3 PRELIMINARIES . Let D = { xi , yi } i=1 ... N be the supervised data set where xi 2 X is d-dimensional input feature space for a classifier and yi 2 Y = { 1 , 2 , . . . , ` } is the set of outputs for a classifier . Throughout this paper we assume the existence of a black box classifier h : X ! 
Y trained on D such that ŷ = h(x) = \arg\max_{c \in Y} p(y = c \mid x, D) , where p( y = c | x , D ) is the prediction score/probability for class c given an input x . Following Wachter et al . ( 2017 ) , counterfactual explanations can be generated by trading off prediction loss against sparsity . This is achieved by optimizing a linear combination of the prediction loss ( L_pred ) and the sparsity loss ( L_sparsity ) , L = c · L_pred + L_sparsity . The prediction loss typically measures the distance between the current prediction and the target class , whereas the sparsity loss measures the perturbation from the initial instance x_0 with class tag t_0 . This approach generates counterfactual explanations that reach their target class with a sparse perturbation of the initial instance . However , they need not respect the input data distribution of the classifier and can therefore take unreasonable values for x_cfe . Dhurandhar et al . ( 2018 ) addressed this issue by incorporating the L2 reconstruction error of x_cfe under an autoencoder ( AE ) trained on the input data X ,

L^X_{recon}(x) = \|x - AE_X(x)\|_2^2 ,

where AE_X is the auto-encoder trained on the entire training dataset X . The auto-encoder loss L^X_{recon} penalizes counterfactual explanations that do not lie within the data distribution . However , Van Looveren & Klaise ( 2019 ) illustrated that incorporating L^X_{recon} in L may still yield counterfactual explanations that lie inside the input data distribution but are not interpretable . To this end , Van Looveren & Klaise ( 2019 ) propose adding a prototype loss L^X_{proto} to L to make x_cfe more interpretable and to speed up the counterfactual search through prototypes in the latent space of the auto-encoder . L^X_{proto} is the L2 error between the latent encoding of x and the cluster centroid of the target class in the latent space of the encoder , denoted proto_t ( short for target prototype ) :

L^X_{proto}(x, proto_t) = \|ENC_X(x) - proto_t\|_2^2 ,

where ENC_X is the encoder part of the auto-encoder AE_X and ENC_X(x) is the projection of x onto the latent space of the auto-encoder . Given a target t , the corresponding proto_t is defined as

proto_t = \frac{1}{K} \sum_{k=1}^{K} ENC_X(x^t_k) , \quad (1)

where the x^t_k are input instances of class t such that { ENC_X(x^t_k) }_{k=1,...,K} are the K nearest neighbors of ENC_X(x_0) . For applications where the target class t is not pre-defined , a suitable replacement for proto_t is obtained by finding the prototype proto_j of a class j ≠ t_0 nearest to the encoding of x_0 , given by j = \arg\min_{i \ne t_0} \|ENC_X(x_0) - proto_i\|_2 . The prototype loss L_proto is then defined as L_{proto}(x, proto_j) = \|ENC_X(x) - proto_j\|_2^2 . According to Van Looveren & Klaise ( 2019 ) , the loss L_proto explicitly guides the encoding of the counterfactual explanation towards the target prototype ( or the nearest prototype proto_{i ≠ t_0} ) . Thus we have a loss function L given by

L = c \cdot L_{pred} + L_{sparsity} + \gamma \cdot L^X_{recon} + \theta \cdot L^X_{proto} , \quad (2)

where c , γ , and θ are hyper-parameters tuned globally for each data set . For detailed descriptions of these parameters and their impact on the counterfactual search , we refer the readers to the original implementation 's documentation . In this paper , we propose an alternate version of this loss function . The constituent loss functions L^X_{recon} and L^X_{proto} are based on an auto-encoder trained in an unsupervised fashion .
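As a sketch of how the prototype in (1) and the combined objective (2) could be evaluated in practice, the snippet below assumes an encoder `enc`, an autoencoder `ae`, and a classifier probability function `clf_proba` are supplied by the caller; these interfaces, the parameter names, and the default weights are illustrative assumptions, not the cited implementation.

```python
import numpy as np

def target_prototype(enc, X_target, x0, K=5):
    """Eq. (1): mean of the K target-class encodings nearest to enc(x0)."""
    Z = np.stack([enc(x) for x in X_target])
    d = np.linalg.norm(Z - enc(x0), axis=1)
    return Z[np.argsort(d)[:K]].mean(axis=0)

def cf_loss(x, x0, target, enc, ae, clf_proba, proto, c=1.0, gamma=0.5, theta=1.0):
    """Weighted objective of Eq. (2): prediction + sparsity + reconstruction + prototype."""
    l_pred = 1.0 - clf_proba(x)[target]            # push the prediction toward the target class
    l_sparse = np.abs(x - x0).sum()                # L1 perturbation from the original instance
    l_recon = np.sum((x - ae(x)) ** 2)             # stay close to the data manifold
    l_proto = np.sum((enc(x) - proto) ** 2)        # stay near the target-class prototype
    return c * l_pred + l_sparse + gamma * l_recon + theta * l_proto
```

A counterfactual is then obtained by minimizing `cf_loss` over x, for example by gradient descent when `enc`, `ae`, and `clf_proba` are differentiable.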
In the next section we motivate the use of an auto-encoder trained on class-tagged data in a semi-supervised fashion .

4 SEMI-SUPERVISED COUNTERFACTUAL EXPLANATIONS

[Figure 1: (a) Classification model h ; (b) Autoencoder ; (c) Jointly trained model]

For the supervised classification data set D = ⟨X , Y⟩ , machine learning methods learn a classifier model h : X → Y ( see Figure 1a ) . This learning process involves minimizing a loss function of the form

E_{entropy} = -\sum_{j=1}^{\ell} \sum_{i=1}^{N} I(y_i = j) \cdot \log p(y = j \mid x_i, D) .

An auto-encoder is a neural network framework that learns a latent-space representation z ∈ Z for input data x ∈ X , along with an invertible mapping φ_X ( with decoder φ_X^{-1} ; see Figure 1b ) , in an unsupervised fashion . The subscript X indicates that φ_X and φ_X^{-1} are trained only on the dataset X . The unsupervised learning framework learns a data compression via the continuous map φ_X while minimizing the reconstruction loss

E_{autoenc} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} |x_i - x'_i|^2} .

In this paper we consider only undercomplete autoencoders , which produce a lower-dimensional representation ( Z ) of a high-dimensional space ( X ) , while the decoder network provides the reconstruction guarantee x ≈ φ_X^{-1}(φ_X(x)) . A traditional undercomplete auto-encoder captures the correlation between the input features for dimension reduction .
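To illustrate the joint training idea sketched in Figure 1c (a shared latent space trained with both a reconstruction loss and a classification loss), here is a compact PyTorch sketch; the architecture sizes, loss weighting, and training snippet are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SemiSupervisedAE(nn.Module):
    """Autoencoder whose latent code also feeds a classifier head, so the
    latent space is shaped by reconstruction and by class information."""
    def __init__(self, d_in, d_latent, n_classes):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_latent))
        self.dec = nn.Sequential(nn.Linear(d_latent, 64), nn.ReLU(), nn.Linear(64, d_in))
        self.head = nn.Linear(d_latent, n_classes)

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), self.head(z)

def joint_loss(model, x, y, alpha=1.0):
    """Reconstruction loss plus alpha times classification loss."""
    x_hat, logits = model(x)
    return nn.functional.mse_loss(x_hat, x) + alpha * nn.functional.cross_entropy(logits, y)

# Toy training step (illustrative only).
model = SemiSupervisedAE(d_in=10, d_latent=3, n_classes=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
loss = joint_loss(model, x, y)
opt.zero_grad(); loss.backward(); opt.step()
```

In the post-hoc setting described above, the labels y would be the black-box classifier's predictions rather than ground-truth tags.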
This paper presents a new approach for generating counterfactual explanations. Specifically, the presented method optimizes for a counterfactual explanation using a weighted loss function of L_pred, L_sparsity, L_recon, and L_proto, and differs from previous works in the manner in which the latter two losses are computed. In more detail, whereas prior work computes L_recon and L_proto using the reconstruction and latent-space distance of *unsupervised* models, the presented method computes these losses using a semi-supervised setup whereby a model is jointly trained to minimize reconstruction and class-conditional loss. The authors conduct experiments on a number of real- and mixed-valued datasets, which is welcome in a field where broad experimentation is historically lacking.
SP:ec3d792f859916d782bce86107d178f6965fc9b1
Semi-supervised counterfactual explanations
Counterfactual explanations for machine learning models are used to find minimal interventions to the feature values such that the model changes the prediction to a different output or a target output . A valid counterfactual explanation should have likely feature values . Here , we address the challenge of generating counterfactual explanations that lie in the same data distribution as that of the training data and more importantly , they belong to the target class distribution . This requirement has been addressed through the incorporation of auto-encoder reconstruction loss in the counterfactual search process . Connecting the output behavior of the classifier to the latent space of the auto-encoder has further improved the speed of the counterfactual search process and the interpretability of the resulting counterfactual explanations . Continuing this line of research , we show further improvement in the interpretability of counterfactual explanations when the auto-encoder is trained in a semi-supervised fashion with class tagged input data . We empirically evaluate our approach on several datasets and show considerable improvement in-terms of several metrics . 1 INTRODUCTION . Recently counterfactual explanations have gained popularity as tools of explainability for AI-enabled systems . A counterfactual explanation of a prediction describes the smallest change to the feature values that changes the prediction to a predefined output . A counterfactual explanation usually takes the form of a statement like , “ You were denied a loan because your annual income was 30 , 000 . If your income had been 45 , 000 , you would have been offered a loan ” . Counterfactual explanations are important in the context of AI-based decision-making systems because they provide the data subjects with meaningful explanations for a given decision and the necessary actions to receive a more favorable/desired decision in the future . Application of counterfactual explanations in the areas of financial risk mitigation , medical diagnosis , criminal profiling , and other sensitive socio-economic sectors is increasing and is highly desirable for bias reduction . Apart from the challenges of sparsity , feasibility , and actionability , the primary challenge for counterfactual explanations is their interpretability . Higher levels of interpretability lead to higher adoption of AI-enabled decision-making systems . Higher values of interpretability will improve the trust amongst data subjects on AI-enabled decisions . AI models used for decision making are typically black-box models , the reasons can either be the computational and mathematical complexities associated with the model or the proprietary nature of the technology . In this paper , we address the challenge of generating counterfactual explanations that are more likely and interpretable . A counterfactual explanation is interpretable if it lies within or close to the model ’ s training data distribution . This problem has been addressed by constraining the search for counterfactuals to lie in the training data distribution . This has been achieved by incorporating an auto-encoder reconstruction loss in the counterfactual search process . However , adhering to training data distribution is not sufficient for the counterfactual explanation to be likely . The counterfactual explanation should also belong to the feature distribution of its target class . 
To understand this , let us consider an example of predicting the risk of diabetes in individuals as high or low . A sparse counterfactual explanation to reduce the risk of diabetes might suggest a decrease in the body mass index ( BMI ) level for an individual while leaving other features unchanged . The model might predict a low risk based on this change and the features of this individual might still be in the data distribution of the model . However , they will not lie in the data distribution of individuals with low risk of diabetes because of other relevant features of low-risk individuals like glucose tolerance , serum insulin , diabetes pedigree , etc . To address this issue , authors in Van Looveren & Klaise ( 2019 ) proposed to connect the output behavior of the classifier to the latent space of the auto-encoder using prototypes . These prototypes guide the counterfactual search process in the latent space and improve the interpretability of the resulting counterfactual explanations . However , the auto-encoder latent space is still unaware of the class tag information . This is highly undesirable , especially when using a prototype guided search for counterfactual explanations on the latent space . In this paper , we propose to build a latent space that is aware of the class tag information through joint training of the auto-encoder and the classifier . Thus the counterfactual explanations generated will not only be faithful to the entire training data distribution but also faithful to the data distribution of the target class . We show that there are considerable improvements in interpretability , sparsity and proximity metrics can be achieved simultaneously , if the auto-encoder trained in a semi-supervised fashion with class tagged input data . Our approach does not rely on the availability of train data used for the black box classifier . It can be easily generalized to a post-hoc explanation method using the semi-supervised learning framework , which relies only on the predictions on the black box model . In the next section we present the related work . Then , in section 3 we present preliminary definitions and approaches necessary to introduce our approach . In section 4 , we present our approach and empirically evaluate it in section 5 . 2 RELATED WORK . Counterfactual analysis is a concept derived from from causal intervention analysis . Counterfactuals refer to model outputs corresponding to certain imaginary scenarios that we have not observed or can not observe . Recently Wachter et al . ( 2017 ) proposed the idea of model agnostic ( without opening the black box ) counterfactual explanations , through simultaneous minimization of the error between model prediction and the desired counterfactual and distance between original instance and their corresponding counterfactual . This idea has been extended for multiple scenarios by Mahajan et al . ( 2019 ) , Ustun et al . ( 2019 ) , Poyiadzi et al . ( 2020 ) based on the incorporation of feasibility constraints , actionability and diversity of counterfactuals . Authors in Mothilal et al . ( 2020 ) proposed a framework for generating diverse set of counterfactual explanations based on determinantal point processes . They argue that a wide range of suggested changes along with a proximity to the original input improves the chances those changes being adopted by data subjects . Causal constraints of our society do not allow the data subjects to reduce their age while increasing their educational qualifications . 
Such feasibility constraints were addressed by Mahajan et al. (2019) and Joshi et al. (2019) using a causal framework. The authors in Mahajan et al. (2019) address the feasibility of counterfactual explanations through causal relationship constraints amongst input features. They present a method that uses structural causal models to generate actionable counterfactuals. The authors in Joshi et al. (2019) propose to characterize the data manifold and then provide an optimization framework to search for actionable counterfactual explanations on the data manifold via its latent representation. The authors in Poyiadzi et al. (2020) address the issues of feasibility and actionability through feasible paths, which are based on shortest path distances defined via density-weighted metrics. An important aspect of counterfactual explanations is their interpretability. A counterfactual explanation is more interpretable if it lies within or close to the data distribution of the training data of the black-box classifier. To address this issue, Dhurandhar et al. (2018) proposed the use of auto-encoders to generate counterfactual explanations which are "close" to the data manifold. They proposed the incorporation of an auto-encoder reconstruction loss in the counterfactual search process to penalize counterfactuals which are not true to the data manifold. This line of research was further extended by Van Looveren & Klaise (2019), who proposed to connect the output behaviour of the classifier to the latent space of the auto-encoder using prototypes. These prototypes improved the speed of the counterfactual search process and the interpretability of the resulting counterfactual explanations. While Van Looveren & Klaise (2019) connect the output behaviour of the classifier to the latent space through prototypes, the latent space is still unaware of the class tag information. We propose to build a latent space which is aware of the class tag information through joint training of the auto-encoder and the classifier. Thus the counterfactual explanations generated will not only be faithful to the entire training data distribution but also faithful to the data distribution of the target class. In a post-hoc scenario where access to the training data is not guaranteed, we propose to use the input-output pair data of the black-box classifier to jointly train the auto-encoder and classifier in the semi-supervised learning framework. The authors in Zhai & Zhang (2016) and Gogna et al. (2016) have explored the use of semi-supervised auto-encoders for sentiment analysis and for the analysis of biomedical signals. The authors in Haiyan et al. (2015) propose a joint framework of representation and supervised learning which guarantees not only the semantics of the original data from representation learning but also fits the training data well via supervised learning. However, as far as our knowledge goes, semi-supervised learning has not been used to generate counterfactual explanations, and we experimentally show that the semi-supervised learning framework generates more interpretable counterfactual explanations.

3 PRELIMINARIES

Let $D = \{x_i, y_i\}_{i=1,\ldots,N}$ be the supervised data set, where $x_i \in \mathcal{X}$ lies in the $d$-dimensional input feature space of the classifier and $y_i \in \mathcal{Y} = \{1, 2, \ldots, \ell\}$ belongs to the set of classifier outputs. Throughout this paper we assume the existence of a black-box classifier $h : \mathcal{X} \to \mathcal{Y}$ trained on $D$ such that $\hat{y} = h(x) = \arg\max_{c \in \mathcal{Y}} p(y = c \mid x, D)$, where $p(y = c \mid x, D)$ is the prediction score/probability for class $c$ with an input $x$.

Based on Wachter et al. (2017), counterfactual explanations can be generated by trading off between prediction loss and sparsity. This is achieved by optimizing a linear combination of the prediction loss ($L_{pred}$) and the sparsity loss ($L_{sparsity}$) as $L = c \cdot L_{pred} + L_{sparsity}$. The prediction loss typically measures the distance between the current prediction and the target class, whereas the sparsity loss measures the perturbation from the initial instance $x_0$ with class tag $t_0$. This approach generates counterfactual explanations which can reach their target class with a sparse perturbation to the initial instance. However, they need not necessarily respect the input data distribution of the classifier, hence resulting in unreasonable values for $x_{cfe}$. The authors in Dhurandhar et al. (2018) addressed this issue through the incorporation of an $L_2$ reconstruction error for $x_{cfe}$, evaluated through an auto-encoder (AE) trained on the input data $\mathcal{X}$, as $L^{\mathcal{X}}_{recon}(x) = \lVert x - AE_{\mathcal{X}}(x) \rVert_2^2$, where $AE_{\mathcal{X}}$ represents the auto-encoder trained on the entire training dataset $\mathcal{X}$. The auto-encoder loss function $L^{\mathcal{X}}_{recon}$ penalizes counterfactual explanations which do not lie within the data distribution. However, Van Looveren & Klaise (2019) illustrated that incorporating $L^{\mathcal{X}}_{recon}$ in $L$ may result in counterfactual explanations which lie inside the input data distribution but may not be interpretable. To this end, Van Looveren & Klaise (2019) propose the addition of a prototype loss function $L^{\mathcal{X}}_{proto}$ to $L$ to make $x_{cfe}$ more interpretable and to improve the counterfactual search process through prototypes in the latent space of the auto-encoder. $L^{\mathcal{X}}_{proto}$ is the $L_2$ error between the latent encoding of $x$ and the cluster centroid of the target class in the latent space of the encoder, denoted $proto_t$ (short for target prototype): $L^{\mathcal{X}}_{proto}(x, proto_t) = \lVert ENC_{\mathcal{X}}(x) - proto_t \rVert_2^2$, where $ENC_{\mathcal{X}}$ represents the encoder part of the auto-encoder $AE_{\mathcal{X}}$ and $ENC_{\mathcal{X}}(x)$ represents the projection of $x$ onto the latent space of the auto-encoder. Given a target class $t$, the corresponding $proto_t$ can be defined as

$proto_t = \frac{1}{K} \sum_{k=1}^{K} ENC_{\mathcal{X}}(x^t_k)$   (1)

where the $x^t_k$ are input instances of class $t$ such that $\{ENC_{\mathcal{X}}(x^t_k)\}_{k=1,\ldots,K}$ are the $K$ nearest neighbors of $ENC_{\mathcal{X}}(x_0)$. For applications where the target class $t$ is not pre-defined, a suitable replacement for $proto_t$ is obtained by finding the nearest prototype $proto_j$ of a class $j \neq t_0$ to the encoding of $x_0$, given by $j = \arg\min_{i \neq t_0} \lVert ENC_{\mathcal{X}}(x_0) - proto_i \rVert_2$. The prototype loss $L_{proto}$ can then be defined as $L_{proto}(x, proto_j) = \lVert ENC_{\mathcal{X}}(x_0) - proto_j \rVert_2^2$. According to Van Looveren & Klaise (2019), the loss function $L_{proto}$ explicitly guides the encoding of the counterfactual explanation to the target prototype (or the nearest prototype $proto_{i \neq t_0}$). Thus we have a loss function $L$ given by

$L = c \cdot L_{pred} + L_{sparsity} + \beta \cdot L^{\mathcal{X}}_{recon} + \theta \cdot L^{\mathcal{X}}_{proto}$   (2)

where $c$, $\beta$ and $\theta$ are hyper-parameters tuned globally for each data set. For detailed descriptions of these parameters and their impact on the counterfactual search, we refer the readers to Ltd. In this paper, we propose an alternate version of this loss function. The constituent loss functions $L^{\mathcal{X}}_{recon}$ and $L^{\mathcal{X}}_{proto}$ are based on an auto-encoder trained in an unsupervised fashion.
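As a concrete illustration of how the combined objective in equation (2) can be assembled for a gradient-based counterfactual search, here is a minimal sketch in PyTorch. It is not the authors' implementation: the classifier `clf`, auto-encoder `ae`, encoder `enc`, the elastic-net form of the sparsity term, and the weights `c`, `beta`, `theta` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def counterfactual_loss(x_cfe, x0, target, clf, ae, enc, proto_t, c=1.0, beta=0.1, theta=0.1):
    """Sketch of L = c*L_pred + L_sparsity + beta*L_recon + theta*L_proto from equation (2).

    x_cfe, x0: tensors of shape (1, d); target: LongTensor of shape (1,) holding the target class.
    """
    l_pred = F.cross_entropy(clf(x_cfe), target)          # push the prediction toward the target class
    delta = x_cfe - x0
    l_sparsity = delta.abs().sum() + delta.pow(2).sum()   # elastic-net style perturbation penalty
    l_recon = (x_cfe - ae(x_cfe)).pow(2).sum()            # stay close to the auto-encoder's data manifold
    l_proto = (enc(x_cfe) - proto_t).pow(2).sum()         # pull the latent encoding toward the target prototype
    return c * l_pred + l_sparsity + beta * l_recon + theta * l_proto
```

A counterfactual can then be obtained by initializing x_cfe at x0 and running gradient descent on this loss with respect to x_cfe, with proto_t computed as in equation (1) from the K nearest encoded neighbours of the target class.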
In the next section we motivate the use of an auto-encoder trained using class-tagged data in a semi-supervised fashion.

4 SEMI-SUPERVISED COUNTERFACTUAL EXPLANATIONS

(Figure 1: (a) classification model, (b) auto-encoder, (c) jointly trained model.)

For the supervised classification data set $D = \langle X, Y \rangle$, machine learning methods learn a classifier model $h : \mathcal{X} \to \mathcal{Y}$ (see figure 1a). This process of learning involves minimizing a loss function of the form $E_{entropy} = -\sum_{j}^{\ell} \sum_{i}^{N} I(y_i = j) \cdot \log(p(y = j \mid x_i, D))$. An auto-encoder is a neural network framework which learns a latent space representation $z \in \mathcal{Z}$ for input data $x \in \mathcal{X}$ through an encoding map $\phi_{\mathcal{X}}$, together with an (approximately) invertible decoding map $\phi_{\mathcal{X}}^{-1}$ (see figure 1b), in an unsupervised fashion. The subscript $\mathcal{X}$ indicates that $\phi_{\mathcal{X}}$ and $\phi_{\mathcal{X}}^{-1}$ are trained only on the dataset $\mathcal{X}$, without labels. The unsupervised learning framework tries to learn data compression using the continuous map $\phi_{\mathcal{X}}$ while minimizing the reconstruction loss $E_{autoenc} = \sqrt{\frac{1}{N}\sum_{i}^{N} |x_i - x'_i|^2}$. In this paper we consider only undercomplete auto-encoders that produce a lower-dimensional representation ($\mathcal{Z}$) of a high-dimensional space ($\mathcal{X}$), while the decoder network ensures the reconstruction guarantee $x \approx \phi_{\mathcal{X}}^{-1}(\phi_{\mathcal{X}}(x))$. A traditional undercomplete auto-encoder captures the correlation between the input features for the dimension reduction.
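To make the joint training idea of Figure 1c concrete, here is a minimal sketch of an auto-encoder whose latent code also feeds a classification head, so that labelled (or black-box-labelled) examples shape the latent space while unlabelled examples contribute only the reconstruction term. The layer sizes, the weight `alpha`, and the module names are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedAE(nn.Module):
    """Undercomplete auto-encoder with a classifier head on the latent code."""
    def __init__(self, d_in, d_latent, n_classes):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_latent))
        self.dec = nn.Sequential(nn.Linear(d_latent, 64), nn.ReLU(), nn.Linear(64, d_in))
        self.head = nn.Linear(d_latent, n_classes)

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), self.head(z)

def semi_supervised_loss(model, x, y=None, alpha=1.0):
    """Reconstruction loss for every input; cross-entropy only when a class tag y is available."""
    x_hat, logits = model(x)
    loss = F.mse_loss(x_hat, x)
    if y is not None:
        loss = loss + alpha * F.cross_entropy(logits, y)
    return loss
```

In the post-hoc setting discussed above, y can simply be the black-box prediction h(x), so the latent space becomes class-aware without access to the original training labels.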
This paper continues an emerging line of research to find interpretable (post-hoc) counterfactual explanations of classifier predictions. While prior work has made advances in ensuring that resulting counterfactuals lie in the same data distribution as the original dataset by using auto-encoders, this paper provides a semi-supervised approach in an attempt to also ensure that they lie in the data distribution corresponding to the counterfactual label. The authors show the benefits of their approach on six datasets: German Credit data, Adult Census data, MNIST, COMPAS, PIMA, and Breast Cancer Wisconsin data.
SP:ec3d792f859916d782bce86107d178f6965fc9b1
Empirical or Invariant Risk Minimization? A Sample Complexity Perspective
1 INTRODUCTION

A recent study shows that models trained to detect COVID-19 from chest radiographs rely on spurious factors such as the source of the data rather than the lung pathology (DeGrave et al., 2020). This is just one of many alarming examples of spurious correlations failing to hold outside a specific training distribution. In one commonly cited example, Beery et al. (2018) trained a convolutional neural network (CNN) to classify camels from cows. In the training data, most pictures of the cows had green pastures, while most pictures of camels were in the desert. The CNN picked up the spurious correlation and associated green pastures with cows, thus failing to classify cows on beaches. Recently, Arjovsky et al. (2019) proposed a framework called invariant risk minimization (IRM) to address the problem of models inheriting spurious correlations. They showed that when data is gathered from multiple environments, one can learn to exploit invariant causal relationships, rather than relying on varying spurious relationships, thus learning robust predictors. More recent work suggests that empirical risk minimization (ERM) is still state-of-the-art on many problems requiring OOD generalization (Gulrajani & Lopez-Paz, 2020). This gives rise to a fundamental question: when is IRM better than ERM (and vice versa)? In this work, we seek to answer this question through a systematic comparison of the sample complexity of the two approaches under different types of train and test distributional mismatches. The distribution shifts (P_train(X, Y) ≠ P_test(X, Y)) that we consider satisfy, informally stated, an invariance condition: there exists a representation Φ∗ of the covariates such that P_train(Y | Φ∗(X)) = P_test(Y | Φ∗(X)) = P(Y | Φ∗(X)). A special case of this occurs when Φ∗ is the identity: P_train(X) ≠ P_test(X) but P_train(Y | X) = P_test(Y | X); such a shift is known as a covariate shift (Gretton et al., 2009). In many other settings Φ∗ may not be the identity (denoted as I); examples include settings with confounders or anti-causal variables (Pearl, 2009), where covariates appear spuriously correlated with the label and P_train(Y | X) ≠ P_test(Y | X). We use causal Bayesian networks to illustrate these shifts in Figure 1. Suppose X^e = [X^e_1, X^e_2] represents the image, where X^e_1 is the shape of the animal, X^e_2 is the background color, Y^e is the label of the animal, and e is the index of the environment/domain. In Figure 1a) X^e_2 is independent of (Y^e, X^e_1); this represents the covariate shift case (Φ∗ = I). In Figure 1b) X^e_2 is spuriously correlated with Y^e through the confounder ε^e. In Figure 1c) X^e_2 is spuriously correlated with Y^e as it is anti-causally related to Y^e. In both Figure 1b) and c), Φ∗ ≠ I; Φ∗ is a block diagonal matrix that selects X^e_1. Our setup assumes we are given data from multiple training environments satisfying the invariance condition, i.e., P(Y | Φ∗(X)) is the same across all of them. Ideally, we want to learn and predict using E[Y | Φ∗(X)]; this predictor has a desirable OOD behavior, as we show later when we prove min-max optimality with respect to (w.r.t.) unseen test distributions satisfying the invariance condition. Our goal is to analyze and compare ERM and IRM's ability to learn E[Y | Φ∗(X)] from finite training data acquired from a fixed number of training environments. Our analysis has two parts.
1) Covariate shift case (Φ∗ = I): ERM and IRM achieve the same asymptotic solution E[Y | X]. We prove (Proposition 4) that the sample complexity for both methods is similar, thus there is no clear winner between the two in the finite sample regime. For the setup in Figure 1a), both ERM and IRM learn a model that only uses X^e_1. 2) Confounder/anti-causal variable case (Φ∗ ≠ I): We consider a family of structural equation models (linear and polynomial) that may contain confounders and/or anti-causal variables. For the class of models we consider, the asymptotic solution of ERM is biased and not equal to the desired E[Y | Φ∗(X)]. We prove that IRM can learn a solution that is within O(√ε) distance of E[Y | Φ∗(X)] with a sample complexity that increases as O(1/ε²) and increases polynomially in the complexity of the model class (Propositions 5, 6); ε (defined later) is the slack in the IRM constraints. For the setups in Figure 1b) and c), IRM gets close to only using X^e_1, while ERM even with infinite data (Proposition 17 in the supplement) continues to use X^e_2. We summarize the results in Table 1. Arjovsky et al. (2019) proposed the colored MNIST (CMNIST) dataset; comparisons on it showed how ERM-based models exploit spurious factors (background color). The CMNIST dataset relied on anti-causal variables. Many supervised learning datasets may not contain anti-causal variables (e.g., human-labeled images). Therefore, we propose and analyze three new variants of CMNIST in addition to the original one that map to different real-world settings: i) covariate shift based CMNIST (CS-CMNIST): relies on selection bias to induce spurious correlations, ii) confounded CMNIST (CF-CMNIST): relies on confounders to induce spurious correlations, iii) anti-causal CMNIST (AC-CMNIST): this is the original CMNIST proposed by Arjovsky et al. (2019), and iv) anti-causal and confounded (hybrid) CMNIST (HB-CMNIST): relies on confounders and anti-causal variables to induce spurious correlations. On the latter three datasets, which belong to the Φ∗ ≠ I class described above, IRM has a much better OOD behavior than ERM, which performs poorly regardless of the data size. However, IRM and ERM have a similar performance on CS-CMNIST, with no clear winner. These results are consistent with our theory and are also validated in regression experiments.

2 RELATED WORKS

IRM based works. Following the original IRM work of Arjovsky et al. (2019), there have been several interesting works that build new methods inspired by IRM to address the OOD generalization problem; an incomplete representative list includes (Teney et al., 2020; Krueger et al., 2020; Ahuja et al., 2020; Chang et al., 2020; Mahajan et al., 2020). Arjovsky et al. (2019) prove OOD guarantees for linear models with access to infinite data from finite environments. We generalize these results in several ways. We provide a first finite sample analysis of IRM. We characterize the impact of the hypothesis class complexity, the number of environments, and the weight of the IRM penalty on the sample complexity and on the distance from the OOD solution for linear and polynomial models.

Theory of domain generalization and domain adaptation. Following the seminal works (Ben-David et al., 2007; 2010), there have been many interesting works that build the theory of domain adaptation and generalization and construct new methods based on it; an incomplete representative list includes (Muandet et al., 2013; Ajakan et al., 2014; Zhao et al., 2019; Albuquerque et al., 2019; Li et al., 2017; Piratla et al.
, 2020 ; Matsuura & Harada , 2020 ; Deng et al. , 2020 ; David et al. , 2010 ; Pagnoni et al. , 2018 ) is an incomplete representative list ( see Redko et al . ( 2019 ) for further references ) — that build the theory of domain adaptation and generalization and construct new methods based on it . While many of these works develop bounds on loss over the target domain using train data and unlabeled target data , some ( Ben-David & Urner , 2012 ; David et al. , 2010 ; Pagnoni et al. , 2018 ) analyze the finite sample ( PAC ) guarantees for domain adaptation under covariate shifts . These works ( Ben-David & Urner , 2012 ; David et al. , 2010 ; Pagnoni et al. , 2018 ) access unlabeled data from a target domain , which we do not . Instead , we have data from multiple training domains ( as in domain generalization ) . In these works , the guarantees are w.r.t . a specific target domain , while we provide ( for linear and polynomial models ) worst-case guarantees w.r.t . all the unseen domains satisfying the invariance condition . Also , we consider a larger family of distribution shifts including covariate shifts . The above two categories are not exhaustive – e.g. , there are some recent works that characterize how some inductive biases favor extrapolation Xu et al . ( 2021 ) and can be better for OOD generalization . 3 SAMPLE COMPLEXITY OF INVARIANT RISK MINIMIZATION . 3.1 INVARIANT RISK MINIMIZATION . We start with some background on IRM ( Arjovsky et al. , 2019 ) . Consider a datasetD = { De } e∈Etr , which is a collection of datasets De = { ( xei , yei , e ) } ne i=1 } obtained from a set of training environments Etr , where e is the index of the environment , i is the index of the data point in the environment , ne is the number of points from environment , xei ∈ X ⊆ Rn is the feature value and yei ∈ Y ⊆ R is the corresponding label . Define a probability distribution { πe } e∈Etr , πe is the probability that a training data point is from environment e. Define a probability distribution of points conditional on environment e as Pe , ( Xe , Y e ) ∼ Pe . Define the joint distribution P̄ , ( Xe , Y e , e ) ∼ P̄ , dP̄ ( Xe , Y e , e ) = πedPe ( Xe , Y e ) . D is a collection of i.i.d . samples from P̄ . Define a predictor f : X → R and the space F of all the possible maps from X → R. Define the risk achieved by f in environment e as Re ( f ) = Ee [ ` ( f ( Xe ) , Y e ) ] , where ` is the loss , f ( Xe ) is the predicted value , Y e is the corresponding label and Ee is the expectation conditional on environment e. The overall expected risk across the training environments is R ( f ) = ∑ e∈Etr π eRe ( f ) . We are interested in two settings : regression ( square loss ) and binary-classification ( cross-entropy loss ) . In the main body , our focus is regression ( square loss ) and we mention wherever the results extend to binary-classification ( cross-entropy ) . We discuss these extensions in the supplement . OOD generalization problem . We want to construct a predictor f that performs well across many unseen environments Eall , where Eall ⊇ Etr . For o ∈ Eall\Etr , the distribution Po can be very different from the train environments . Next we state the OOD problem . min f∈F max e∈Eall Re ( f ) ( 1 ) The above problem is very challenging to solve since we only have access to data from training environments Etr but are required to find the robust solution over all environments Eall . Next , we make assumptions on Eall and characterize the optimal solution to equation 1 . Assumption 1 . 
Assumption 1 (Invariance condition). There exists a representation Φ∗ that transforms X^e to Z^e = Φ∗(X^e) and, ∀e, o ∈ E_all and ∀z ∈ Φ∗(X), satisfies E_e[Y^e | Z^e = z] = E_o[Y^o | Z^o = z]. Also, ∀e ∈ E_all, ∀z ∈ Φ∗(X), Var_e[Y^e | Z^e = z] = ξ², where Var_e is the conditional variance.

The above assumption is inspired from causality (Pearl, 2009). Φ∗ acts as the causal feature extractor, and from the definition of causal features it follows that E_e[Y^e | Z^e = z] does not vary across environments. When a human labels a cow, she uses Φ∗ to extract causal features from the pixels to identify the cow while ignoring the background. The first part of the above assumption encompasses a large class of distribution shifts, including standard covariate shifts (Gretton et al., 2009). Covariate shift assumes that ∀e, o ∈ E_all and ∀x ∈ X, P(Y^e | X^e = x) and P(Y^o | X^o = x) are equal, thus implying E_e[Y^e | X^e = x] = E_o[Y^o | X^o = x]. Therefore, for covariate shifts, Φ∗ is the identity in Assumption 1. A simple instance illustrating Assumption 1 with Φ∗ = I is when Y^e ← g(X^e) + ε^e, where E_e[ε^e] = 0, E_e[(ε^e)²] = σ², and ε^e ⊥ X^e. Using Assumption 1, we define the invariant map m : Φ∗(X) → R as follows:

∀z ∈ Φ∗(X), m(z) = E_e[Y^e | Z^e = z], where Z^e = Φ∗(X^e)   (2)

Assumption 2 (Existence of an environment where the invariant representation is sufficient). ∃ an environment e ∈ E_all such that Y^e ⊥ X^e | Z^e.

Assumption 2 states that there exists an environment where the information that X^e has about Y^e is also contained in Z^e. Define the composition m ◦ Φ∗ by ∀x ∈ X, m ◦ Φ∗(x) = E_e[Y^e | Z^e = Φ∗(x)].

Proposition 1. If ℓ is the square loss, and Assumptions 1 and 2 hold, then m ◦ Φ∗ solves the OOD problem (equation 1).

The proofs of all the propositions are in the supplement. A similar result holds for the cross-entropy loss (discussion in the supplement). For the rest of the paper, we focus on learning m ◦ Φ∗, as it solves the OOD problem. For covariate shifts, Φ∗ = I and m(x) = E_e[Y^e | X^e = x] is the OOD solution. In Arjovsky et al. (2019), a proof connecting m ◦ Φ∗ and OOD was not stated. Recently, in Koyama & Yamaguchi (2020), a result similar to Proposition 1 was shown but with a few differences. The authors assume conditional probabilities are invariant, unlike our assumption that only requires conditional expectations and variances to be invariant. However, their result applies to more losses. m ◦ Φ∗ is the target we want to learn. Arjovsky et al. (2019) proposed IRM since standard min-max optimization over the training environments E_tr and ERM fail to learn m ◦ Φ∗ in many cases. The authors in Arjovsky et al. (2019) identify a crucial property of m ◦ Φ∗ and use it to define an object called an invariant predictor, which we define next.

Invariant predictor and IRM optimization. Define a representation map Φ : X → Z from the feature space to a representation space Z ⊆ R^q. Define a classifier map w : Z → R from the representation space to real values. Define H_Φ and H_w as the spaces of representations and classifiers, respectively. A data representation Φ elicits an invariant predictor w ◦ Φ across environments e ∈ E_tr if there is a classifier w that achieves the minimum risk simultaneously for all the environments, i.e., ∀e ∈ E_tr, w ∈ arg min_{w̄∈H_w} R^e(w̄ ◦ Φ). Observe that if we transform the data with the representation Φ∗, then m will achieve the minimum risk simultaneously in all the environments.
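As a toy illustration of the anti-causal setting in Figure 1c and of an invariant mechanism in the sense of Assumption 1, the following hypothetical generator (the coefficients, noise scales, and sample sizes are arbitrary choices, not the structural equation models analyzed in the propositions) builds environments where X1 causes Y through a fixed mechanism while X2 is generated from Y with an environment-dependent coefficient. Pooling the environments and running least squares (ERM) then loads on the spurious X2.

```python
import numpy as np

def sample_environment(n, env_coef, rng):
    """Toy anti-causal SEM: invariant mechanism Y <- X1; spurious feature X2 <- env_coef * Y."""
    x1 = rng.normal(size=n)                        # causal / invariant feature
    y = x1 + 0.1 * rng.normal(size=n)              # invariant mechanism, identical in every environment
    x2 = env_coef * y + 0.1 * rng.normal(size=n)   # spurious feature, coefficient varies across environments
    return np.column_stack([x1, x2]), y

rng = np.random.default_rng(0)
envs = [sample_environment(10_000, coef, rng) for coef in (1.0, 0.9)]   # two training environments
X = np.vstack([x for x, _ in envs])
Y = np.concatenate([y for _, y in envs])
w_erm, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(w_erm)   # pooled least squares (ERM) places substantial weight on the spurious x2
```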
If Φ∗ ∈ H_Φ and m ∈ H_w, then m ◦ Φ∗ is an invariant predictor. IRM selects the invariant predictor with the least sum of risks across environments (the results presented later can be adapted if the invariant predictor is instead selected based on the worst-case risk over the environments):

min_{Φ∈H_Φ, w∈H_w} R(w ◦ Φ) = ∑_{e∈E_tr} π^e R^e(w ◦ Φ)  s.t.  w ∈ arg min_{w̄∈H_w} R^e(w̄ ◦ Φ), ∀e ∈ E_tr   (3)

From the above discussion we know that m ◦ Φ∗ is a feasible solution to equation 3. It is also the ideal solution we want IRM to find, since it solves equation 1. Later, in Propositions 4, 5, and 6, we show that IRM actually solves equation 1. For the setups in Propositions 5 and 6, conventional ERM-based approaches fail, thus justifying the need for the above formulation.
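Equation (3) is a bi-level program and is rarely solved directly. A common practical relaxation, the IRMv1 objective of Arjovsky et al. (2019), replaces the inner arg min constraint with the squared gradient of each environment's risk with respect to a fixed scalar "dummy" classifier. The sketch below follows that relaxation for the square loss; the representation `phi`, the per-environment batches, and the penalty weight `lam` are placeholders, and this is not the exact estimator analyzed in the propositions.

```python
import torch

def irm_penalty(risk_fn):
    """Squared gradient of a per-environment risk w.r.t. a dummy classifier scale w = 1.0."""
    w = torch.tensor(1.0, requires_grad=True)
    grad = torch.autograd.grad(risk_fn(w), [w], create_graph=True)[0]
    return grad.pow(2)

def irmv1_objective(phi, envs, lam=100.0):
    """Sum over environments of (risk + lam * invariance penalty), square-loss version."""
    total = torch.tensor(0.0)
    for x_e, y_e in envs:                      # one (features, labels) batch per environment
        def risk_fn(w, x=x_e, y=y_e):
            return ((w * phi(x).squeeze(-1) - y) ** 2).mean()
        total = total + risk_fn(torch.tensor(1.0)) + lam * irm_penalty(risk_fn)
    return total
```

Minimizing this objective over the parameters of phi trades off the pooled risk against the invariance penalty, with large lam pushing the solution toward predictors whose per-environment risks are simultaneously stationary.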
The paper investigates the choice of learning paradigms to reach out-of-distribution generalization, namely IRM vs ERM under different scenarios of domain generalization. Technically, generalization bounds and rates are calculated to be able to compare theoretically how each paradigm fares in the different scenarios. The proposed analysis shows that IRM generalizes better than ERM in a number of important cases: presence of confounders and/or anti-causal variables. Numerical experiments on variants of the Colored MNIST benchmark (for each scenario) verify the theoretical findings.
SP:3d8e456a346e4f54909fb689197cbf80c51e601a
This paper considers a learning scenario where training data $(X,Y)$ comes from a mixture, where membership in each mixture component ("environment") is clearly labeled. Generalization is required not just under the same mixture, but potentially under changing mixing distributions. This is captured via several alternative formulations. The key underlying assumption that makes the extrapolation possible is the existence of a representation $\Phi(X)$ that leads to environment-invariant statistics (by focusing on quadratic loss and linear predictors, the paper asks for invariant $E[Y|\Phi(X)]$ and constant $var[Y|\Phi(X)]$). A range of other assumptions are made to enable the analysis, such as sufficiency of the representation in at least one environment and boundedness of the loss and its gradient.
SP:3d8e456a346e4f54909fb689197cbf80c51e601a
Quantifying Differences in Reward Functions
For many tasks , the reward function is inaccessible to introspection or too complex to be specified procedurally , and must instead be learned from user data . Prior work has evaluated learned reward functions by evaluating policies optimized for the learned reward . However , this method can not distinguish between the learned reward function failing to reflect user preferences and the policy optimization process failing to optimize the learned reward . Moreover , this method can only tell us about behavior in the evaluation environment , but the reward may incentivize very different behavior in even a slightly different deployment environment . To address these problems , we introduce the Equivalent-Policy Invariant Comparison ( EPIC ) distance to quantify the difference between two reward functions directly , without a policy optimization step . We prove EPIC is invariant on an equivalence class of reward functions that always induce the same optimal policy . Furthermore , we find EPIC can be efficiently approximated and is more robust than baselines to the choice of coverage distribution . Finally , we show that EPIC distance bounds the regret of optimal policies even under different transition dynamics , and we confirm empirically that it predicts policy training success . Our source code is available at https : //github.com/HumanCompatibleAI/evaluating-rewards . 1 INTRODUCTION . Reinforcement learning ( RL ) has reached or surpassed human performance in many domains with clearly defined reward functions , such as games [ 20 ; 15 ; 23 ] and narrowly scoped robotic manipulation tasks [ 16 ] . Unfortunately , the reward functions for most real-world tasks are difficult or impossible to specify procedurally . Even a task as simple as peg insertion from pixels has a non-trivial reward function that must usually be learned [ 22 , IV.A ] . Tasks involving human interaction can have far more complex reward functions that users may not even be able to introspect on . These challenges have inspired work on learning a reward function , whether from demonstrations [ 13 ; 17 ; 26 ; 8 ; 3 ] , preferences [ 1 ; 25 ; 6 ; 18 ; 27 ] or both [ 10 ; 4 ] . Prior work has usually evaluated the learned reward function R̂ using the “ rollout method ” : training a policy πR̂ to optimize R̂ and then examining rollouts from πR̂ . Unfortunately , using RL to compute πR̂ is often computationally expensive . Furthermore , the method produces false negatives when the reward R̂ matches user preferences but the RL algorithm fails to optimize with respect to R̂ . The rollout method also produces false positives . Of the many reward functions that induce the desired rollout in a given environment , only a small subset align with the user ’ s preferences . For example , suppose the agent can reach states { A , B , C } . If the user prefers A > B > C , but the agent instead learns A > C > B , the agent will still go to the correct state A . However , if the initial state distribution or transition dynamics change , misaligned rewards may induce undesirable policies . For example , if A is no longer reachable at deployment , the previously reliable agent would misbehave by going to the least-favoured state C. We propose instead to evaluate learned rewards via their distance from other reward functions , and summarize our desiderata for reward function distances in Table 1 . For benchmarks , it is usually possible to directly compare a learned reward R̂ to the true reward function R. 
Alternatively, benchmark creators can train a "proxy" reward function from a large human data set. This proxy can then be used as a stand-in for the true reward R when evaluating algorithms trained on a different or smaller data set.

(Table 1, desiderata for reward function distances; columns: Distance, Pseudometric, Invariant, Efficient, Robust, Predictive.)

Comparison with a ground-truth reward function is rarely possible outside of benchmarks. However, even in this challenging case, comparisons can at least be used to cluster reward models trained using different techniques or data. Larger clusters are more likely to be correct, since multiple methods arrived at a similar result. Moreover, our regret bound (Theorem 4.9) suggests we could use interpretability methods [12] on one model and get some guarantees for models in the same cluster. We introduce the Equivalent-Policy Invariant Comparison (EPIC) distance that meets all the criteria in Table 1. We believe EPIC is the first method to quantitatively evaluate reward functions without training a policy. EPIC (section 4) canonicalizes the reward functions' potential-based shaping [14], then takes the correlation between the canonical rewards over a coverage distribution D of transitions. We also introduce baselines NPEC and ERC (section 5), which partially satisfy the criteria. EPIC works best when D has support on all realistic transitions. We achieve this in our experiments by using uninformative priors, such as rollouts of a policy taking random actions. Moreover, we find EPIC is robust to the exact choice of distribution D, producing similar results across a range of distributions, whereas ERC and especially NPEC are highly sensitive to the choice of D (section 6.2). Furthermore, a low EPIC distance between a learned reward R̂ and the true reward R predicts low regret. That is, the policies π_R̂ and π_R optimized for R̂ and R obtain similar returns under R. Theorem 4.9 bounds the regret even in unseen environments; by contrast, the rollout method can only determine regret in the evaluation environment. We also confirm this result empirically (section 6.3).

2 RELATED WORK

There exist a variety of methods to learn reward functions. Inverse reinforcement learning (IRL) [13] is a common approach that works by inferring a reward function from demonstrations. The IRL problem is inherently underconstrained: many different reward functions lead to the same demonstrations. Bayesian IRL [17] handles this ambiguity by inferring a posterior over reward functions. By contrast, Maximum Entropy IRL [26] selects the highest entropy reward function consistent with the demonstrations; this method has scaled to high-dimensional environments [7; 8]. An alternative approach is to learn from preference comparisons between two trajectories [1; 25; 6; 18]. T-REX [4] is a hybrid approach, learning from a ranked set of demonstrations. More directly, Cabi et al. [5] learn from "sketches" of cumulative reward over an episode. To the best of our knowledge, there is no prior work that focuses on evaluating reward functions directly. The most closely related work is Ng et al. [14], identifying reward transformations guaranteed not to change the optimal policy. However, a variety of ad-hoc methods have been developed to evaluate reward functions. The rollout method, i.e., evaluating rollouts of a policy trained on the learned reward, is evident in the earliest work on IRL [13]. Fu et al.
[8] refined the rollout method by testing on a transfer environment, inspiring our experiment in section 6.3. Recent work has compared reward functions by scatterplotting returns [10; 4], inspiring our ERC baseline (section 5.1).

3 BACKGROUND

This section introduces material needed for the distances defined in subsequent sections. We start by introducing the Markov Decision Process (MDP) formalism, then describe when reward functions induce the same optimal policies in an MDP, and finally define the notion of a distance metric.

Definition 3.1. A Markov Decision Process (MDP) M = (S, A, γ, d0, T, R) consists of a set of states S and a set of actions A; a discount factor γ ∈ [0, 1]; an initial state distribution d0(s); a transition distribution T(s′ | s, a) specifying the probability of transitioning to s′ from s after taking action a; and a reward function R(s, a, s′) specifying the reward upon taking action a in state s and transitioning to state s′.

A trajectory τ = (s0, a0, s1, a1, · · ·) consists of a sequence of states s_i ∈ S and actions a_i ∈ A. The return on a trajectory is defined as the sum of discounted rewards, g(τ; R) = ∑_{t=0}^{|τ|} γ^t R(s_t, a_t, s_{t+1}), where the length of the trajectory |τ| may be infinite. In the following, we assume a discounted (γ < 1) infinite-horizon MDP. The results can be generalized to undiscounted (γ = 1) MDPs subject to regularity conditions needed for convergence. A stochastic policy π(a | s) assigns probabilities to taking action a ∈ A in state s ∈ S. The objective of an MDP is to find a policy π that maximizes the expected return G(π) = E_{τ(π)}[g(τ; R)], where τ(π) is a trajectory generated by sampling the initial state s0 from d0, each action a_t from the policy π(a_t | s_t), and successor states s_{t+1} from the transition distribution T(s_{t+1} | s_t, a_t). An MDP M has a set of optimal policies π∗(M) that maximize the expected return, π∗(M) = arg max_π G(π). In this paper, we consider the case where we only have access to an MDP\R, M− = (S, A, γ, d0, T). The unknown reward function R must be learned from human data. Typically, only the state space S, action space A, and discount factor γ are known exactly, with the initial state distribution d0 and transition dynamics T only observable from interacting with the environment M−. Below, we describe an equivalence class whose members are guaranteed to have the same optimal policy set in any MDP\R M− with fixed S, A, and γ (allowing the unknown T and d0 to take arbitrary values).

Definition 3.2. Let γ ∈ [0, 1] be the discount factor, and Φ : S → R a real-valued function. Then R(s, a, s′) = γΦ(s′) − Φ(s) is a potential shaping reward, with potential Φ [14].

Definition 3.3 (Reward Equivalence). We define two bounded reward functions RA and RB to be equivalent, RA ≡ RB, for a fixed (S, A, γ) if and only if there exists a constant λ > 0 and a bounded potential function Φ : S → R such that for all s, s′ ∈ S and a ∈ A:

RB(s, a, s′) = λ RA(s, a, s′) + γΦ(s′) − Φ(s).   (1)

Proposition 3.4. The binary relation ≡ is an equivalence relation. Let RA, RB, RC : S × A × S → R be bounded reward functions. Then ≡ is reflexive, RA ≡ RA; symmetric, RA ≡ RB implies RB ≡ RA; and transitive, (RA ≡ RB) ∧ (RB ≡ RC) implies RA ≡ RC. Proof. See section A.3.1 in the supplementary material.
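As a quick numerical check of Definition 3.3 and the claim that equivalent rewards induce the same optimal policies, the snippet below builds a scaled and potential-shaped copy of a random tabular reward and compares the greedy policies obtained by value iteration. The MDP sizes, λ, and Φ are arbitrary choices for this hypothetical example.

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA, gamma = 5, 3, 0.9
T = rng.dirichlet(np.ones(nS), size=(nS, nA))       # T[s, a, :] = P(s' | s, a)
R = rng.normal(size=(nS, nA, nS))                   # reward R(s, a, s')

lam, phi = 2.5, rng.normal(size=nS)                 # positive scale and bounded potential
R_equiv = lam * R + gamma * phi[None, None, :] - phi[:, None, None]   # equation (1)

def greedy_policy(R, T, gamma, iters=500):
    """Greedy (optimal) policy from value iteration on the expected reward."""
    v = np.zeros(nS)
    for _ in range(iters):
        q = (T * (R + gamma * v[None, None, :])).sum(axis=2)   # Q[s, a]
        v = q.max(axis=1)
    return q.argmax(axis=1)

print(greedy_policy(R, T, gamma))        # greedy policy under the original reward
print(greedy_policy(R_equiv, T, gamma))  # identical policy under the equivalent reward
```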
The expected return of the potential shaping γΦ(s′) − Φ(s) on a trajectory segment (s0, · · ·, s_T) is γ^T Φ(s_T) − Φ(s0). The first term γ^T Φ(s_T) → 0 as T → ∞, while the second term Φ(s0) only depends on the initial state, and so potential shaping does not change the set of optimal policies. Moreover, any additive transformation that is not potential shaping will, for some reward R and transition distribution T, produce a set of optimal policies that is disjoint from the original [14]. The set of optimal policies is invariant to constant shifts c ∈ R in the reward; however, this can already be obtained by shifting Φ by c(γ − 1)⁻¹. (Note that constant shifts in the reward of an undiscounted MDP would cause the value function to diverge. Fortunately, the shaping γΦ(s′) − Φ(s) is unchanged by constant shifts to Φ when γ = 1.) Scaling a reward function by a positive factor λ > 0 scales the expected return of all trajectories by λ, also leaving the set of optimal policies unchanged. If RA ≡ RB for some fixed (S, A, γ), then for any MDP\R M− = (S, A, γ, d0, T) we have π∗((M−, RA)) = π∗((M−, RB)), where (M−, R) denotes the MDP specified by M− with reward function R. In other words, RA and RB induce the same optimal policies for all initial state distributions d0 and transition dynamics T.

Definition 3.5. Let X be a set and d : X × X → [0, ∞) a function. d is a premetric if d(x, x) = 0 for all x ∈ X. d is a pseudometric if, furthermore, it is symmetric, d(x, y) = d(y, x) for all x, y ∈ X, and satisfies the triangle inequality, d(x, z) ≤ d(x, y) + d(y, z) for all x, y, z ∈ X. d is a metric if, furthermore, for all x, y ∈ X, d(x, y) = 0 ⟹ x = y.

We wish for d(RA, RB) = 0 whenever the rewards are equivalent, RA ≡ RB, even if they are not identical, RA ≠ RB. This is forbidden in a metric but permitted in a pseudometric, while retaining other guarantees, such as symmetry and the triangle inequality, that a metric provides. Accordingly, a pseudometric is usually the best choice for a distance d over reward functions.
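Putting the pieces together, here is a small tabular sketch of the canonicalize-then-correlate recipe described in the introduction above: it averages out potential shaping against a coverage distribution (assumed uniform here) and then takes a Pearson distance, which is also invariant to positive rescaling. The exact canonicalization and sampling scheme used by EPIC are specified in the paper's section 4, so this should be read as an illustration of the idea under simplifying assumptions rather than the reference implementation.

```python
import numpy as np

def canonicalize(R, gamma):
    """Remove potential shaping from a tabular reward R[s, a, s'].

    Assumes a uniform coverage distribution over states and actions; M[s] is the mean
    reward out of state s, and the correction cancels any gamma*phi(s') - phi(s) term.
    """
    M = R.mean(axis=(1, 2))              # M[s] = E_{A,S'}[R(s, A, S')]
    return R + gamma * M[None, None, :] - M[:, None, None] - gamma * R.mean()

def pearson_distance(Ra, Rb, gamma):
    """Correlation-based pseudodistance between canonicalized rewards over all transitions."""
    ca = canonicalize(Ra, gamma).ravel()
    cb = canonicalize(Rb, gamma).ravel()
    rho = np.corrcoef(ca, cb)[0, 1]
    return np.sqrt((1.0 - rho) / 2.0)
```

Applied to the R and R_equiv constructed in the earlier snippet, pearson_distance returns a value that is numerically zero, since shaping is cancelled by the canonicalization and rescaling does not affect the correlation, while a reward encoding genuinely different preferences yields a strictly positive distance.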
The paper introduces a pseudometric on reward functions, EPIC (Equivalent-Policy Invariant Comparison), based on potential-based reward shaping (Ng et al., 1999). It formally analyzes the EPIC distance in detail and demonstrates its usefulness in comparing learned reward functions without the necessity of optimizing reward-specific policies. The empirical results show that EPIC is more predictive of the policy returns than some baseline variants and robust to visitation distributions, even in unseen test environments.
SP:7f95a11596f1b1321a691b1b45cff3de69027aaf
Quantifying Differences in Reward Functions
For many tasks , the reward function is inaccessible to introspection or too complex to be specified procedurally , and must instead be learned from user data . Prior work has evaluated learned reward functions by evaluating policies optimized for the learned reward . However , this method can not distinguish between the learned reward function failing to reflect user preferences and the policy optimization process failing to optimize the learned reward . Moreover , this method can only tell us about behavior in the evaluation environment , but the reward may incentivize very different behavior in even a slightly different deployment environment . To address these problems , we introduce the Equivalent-Policy Invariant Comparison ( EPIC ) distance to quantify the difference between two reward functions directly , without a policy optimization step . We prove EPIC is invariant on an equivalence class of reward functions that always induce the same optimal policy . Furthermore , we find EPIC can be efficiently approximated and is more robust than baselines to the choice of coverage distribution . Finally , we show that EPIC distance bounds the regret of optimal policies even under different transition dynamics , and we confirm empirically that it predicts policy training success . Our source code is available at https : //github.com/HumanCompatibleAI/evaluating-rewards . 1 INTRODUCTION . Reinforcement learning ( RL ) has reached or surpassed human performance in many domains with clearly defined reward functions , such as games [ 20 ; 15 ; 23 ] and narrowly scoped robotic manipulation tasks [ 16 ] . Unfortunately , the reward functions for most real-world tasks are difficult or impossible to specify procedurally . Even a task as simple as peg insertion from pixels has a non-trivial reward function that must usually be learned [ 22 , IV.A ] . Tasks involving human interaction can have far more complex reward functions that users may not even be able to introspect on . These challenges have inspired work on learning a reward function , whether from demonstrations [ 13 ; 17 ; 26 ; 8 ; 3 ] , preferences [ 1 ; 25 ; 6 ; 18 ; 27 ] or both [ 10 ; 4 ] . Prior work has usually evaluated the learned reward function R̂ using the “ rollout method ” : training a policy πR̂ to optimize R̂ and then examining rollouts from πR̂ . Unfortunately , using RL to compute πR̂ is often computationally expensive . Furthermore , the method produces false negatives when the reward R̂ matches user preferences but the RL algorithm fails to optimize with respect to R̂ . The rollout method also produces false positives . Of the many reward functions that induce the desired rollout in a given environment , only a small subset align with the user ’ s preferences . For example , suppose the agent can reach states { A , B , C } . If the user prefers A > B > C , but the agent instead learns A > C > B , the agent will still go to the correct state A . However , if the initial state distribution or transition dynamics change , misaligned rewards may induce undesirable policies . For example , if A is no longer reachable at deployment , the previously reliable agent would misbehave by going to the least-favoured state C. We propose instead to evaluate learned rewards via their distance from other reward functions , and summarize our desiderata for reward function distances in Table 1 . For benchmarks , it is usually possible to directly compare a learned reward R̂ to the true reward function R. 
Alternatively, benchmark creators can train a “proxy” reward function from a large human data set. This proxy can then be used as a stand-in for the true reward R when evaluating algorithms trained on a different or smaller data set. (Table 1 compares distances on five desiderata: pseudometric, invariant, efficient, robust, and predictive.) Comparison with a ground-truth reward function is rarely possible outside of benchmarks. However, even in this challenging case, comparisons can at least be used to cluster reward models trained using different techniques or data. Larger clusters are more likely to be correct, since multiple methods arrived at a similar result. Moreover, our regret bound (Theorem 4.9) suggests we could use interpretability methods [12] on one model and get some guarantees for models in the same cluster. We introduce the Equivalent-Policy Invariant Comparison (EPIC) distance that meets all the criteria in Table 1. We believe EPIC is the first method to quantitatively evaluate reward functions without training a policy. EPIC (section 4) canonicalizes the reward functions’ potential-based shaping [14], then takes the correlation between the canonical rewards over a coverage distribution D of transitions. We also introduce baselines NPEC and ERC (section 5) which partially satisfy the criteria. EPIC works best when D has support on all realistic transitions. We achieve this in our experiments by using uninformative priors, such as rollouts of a policy taking random actions. Moreover, we find EPIC is robust to the exact choice of distribution D, producing similar results across a range of distributions, whereas ERC and especially NPEC are highly sensitive to the choice of D (section 6.2). Furthermore, low EPIC distance between a learned reward R̂ and the true reward R predicts low regret. That is, the policies πR̂ and πR optimized for R̂ and R obtain similar returns under R. Theorem 4.9 bounds the regret even in unseen environments; by contrast, the rollout method can only determine regret in the evaluation environment. We also confirm this result empirically (section 6.3). 2 RELATED WORK. There exist a variety of methods to learn reward functions. Inverse reinforcement learning (IRL) [13] is a common approach that works by inferring a reward function from demonstrations. The IRL problem is inherently underconstrained: many different reward functions lead to the same demonstrations. Bayesian IRL [17] handles this ambiguity by inferring a posterior over reward functions. By contrast, Maximum Entropy IRL [26] selects the highest-entropy reward function consistent with the demonstrations; this method has scaled to high-dimensional environments [7; 8]. An alternative approach is to learn from preference comparisons between two trajectories [1; 25; 6; 18]. T-REX [4] is a hybrid approach, learning from a ranked set of demonstrations. More directly, Cabi et al. [5] learn from “sketches” of cumulative reward over an episode. To the best of our knowledge, there is no prior work that focuses on evaluating reward functions directly. The most closely related work is Ng et al. [14], identifying reward transformations guaranteed not to change the optimal policy. However, a variety of ad-hoc methods have been developed to evaluate reward functions. The rollout method – evaluating rollouts of a policy trained on the learned reward – is evident in the earliest work on IRL [13]. Fu et al.
[ 8 ] refined the rollout method by testing on a transfer environment , inspiring our experiment in section 6.3 . Recent work has compared reward functions by scatterplotting returns [ 10 ; 4 ] , inspiring our ERC baseline ( section 5.1 ) . 3 BACKGROUND . This section introduces material needed for the distances defined in subsequent sections . We start by introducing the Markov Decision Process ( MDP ) formalism , then describe when reward functions induce the same optimal policies in an MDP , and finally define the notion of a distance metric . Definition 3.1 . A Markov Decision Process ( MDP ) M = ( S , A , γ , d0 , T , R ) consists of a set of states S and a set of actions A ; a discount factor γ ∈ [ 0 , 1 ] ; an initial state distribution d0 ( s ) ; a transition distribution T ( s′ | s , a ) specifying the probability of transitioning to s′ from s after taking action a ; and a reward function R ( s , a , s′ ) specifying the reward upon taking action a in state s and transitioning to state s′ . A trajectory τ = ( s0 , a0 , s1 , a1 , · · · ) consists of a sequence of states si ∈ S and actions ai ∈ A . The return on a trajectory is defined as the sum of discounted rewards , g ( τ ; R ) = ∑|τ | t=0 γ tR ( st , at , st+1 ) , where the length of the trajectory |τ | may be infinite . In the following , we assume a discounted ( γ < 1 ) infinite-horizon MDP . The results can be generalized to undiscounted ( γ = 1 ) MDPs subject to regularity conditions needed for convergence . A stochastic policy π ( a | s ) assigns probabilities to taking action a ∈ A in state s ∈ S . The objective of an MDP is to find a policy π that maximizes the expected return G ( π ) = Eτ ( π ) [ g ( τ ; R ) ] , where τ ( π ) is a trajectory generated by sampling the initial state s0 from d0 , each action at from the policy π ( at | st ) and successor states st+1 from the transition distribution T ( st+1 | st , at ) . An MDP M has a set of optimal policies π∗ ( M ) that maximize the expected return , π∗ ( M ) = arg maxπ G ( π ) . In this paper , we consider the case where we only have access to an MDP\R , M− = ( S , A , γ , d0 , T ) . The unknown reward function R must be learned from human data . Typically , only the state space S , action space A and discount factor γ are known exactly , with the initial state distribution d0 and transition dynamics T only observable from interacting with the environment M− . Below , we describe an equivalence class whose members are guaranteed to have the same optimal policy set in any MDP\R M− with fixed S , A and γ ( allowing the unknown T and d0 to take arbitrary values ) . Definition 3.2 . Let γ ∈ [ 0 , 1 ] be the discount factor , and Φ : S → R a real-valued function . Then R ( s , a , s′ ) = γΦ ( s′ ) − Φ ( s ) is a potential shaping reward , with potential Φ [ 14 ] . Definition 3.3 ( Reward Equivalence ) . We define two bounded reward functions RA and RB to be equivalent , RA ≡ RB , for a fixed ( S , A , γ ) if and only if there exists a constant λ > 0 and a bounded potential function Φ : S → R such that for all s , s′ ∈ S and a ∈ A : RB ( s , a , s ′ ) = λRA ( s , a , s ′ ) + γΦ ( s′ ) − Φ ( s ) . ( 1 ) Proposition 3.4 . The binary relation≡ is an equivalence relation . LetRA , RB , RC : S×A×S → R be bounded reward functions . Then ≡ is reflexive , RA ≡ RA ; symmetric , RA ≡ RB implies RB ≡ RA ; and transitive , ( RA ≡ RB ) ∧ ( RB ≡ RC ) implies RA ≡ RC . Proof . See section A.3.1 in supplementary material . 
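To make Definition 3.2, Definition 3.3 and the telescoping-return argument in the next paragraph concrete, here is a small self-contained numerical check. It is not taken from the paper: the table sizes, the random values, and the helper names shape_reward and discounted_return are our own illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9

# R_A(s, a, s') as a random table, a potential Phi(s), and a positive scale lam.
R_A = rng.normal(size=(n_states, n_actions, n_states))
Phi = rng.normal(size=n_states)
lam = 2.5

def shape_reward(R, Phi, lam, gamma):
    # Build R_B(s, a, s') = lam * R(s, a, s') + gamma * Phi(s') - Phi(s)  (Definition 3.3).
    return lam * R + gamma * Phi[None, None, :] - Phi[:, None, None]

R_B = shape_reward(R_A, Phi, lam, gamma)

def discounted_return(R, states, actions, gamma):
    # g(tau; R) = sum_t gamma^t R(s_t, a_t, s_{t+1}) on a finite trajectory.
    return sum(gamma ** t * R[states[t], actions[t], states[t + 1]] for t in range(len(actions)))

# A random trajectory of length T.
T = 8
states = rng.integers(n_states, size=T + 1)
actions = rng.integers(n_actions, size=T)

g_A = discounted_return(R_A, states, actions, gamma)
g_B = discounted_return(R_B, states, actions, gamma)
# The shaping terms telescope: g_B = lam * g_A + gamma^T * Phi(s_T) - Phi(s_0).
assert np.isclose(g_B, lam * g_A + gamma ** T * Phi[states[-1]] - Phi[states[0]])

Because the shaping terms telescope and λ > 0 only rescales returns, equivalent rewards rank (sufficiently long) trajectories identically, which is why they induce the same optimal policies.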
The expected return of potential shaping γΦ(s′) − Φ(s) on a trajectory segment (s_0, · · · , s_T) is γ^T Φ(s_T) − Φ(s_0). The first term γ^T Φ(s_T) → 0 as T → ∞, while the second term Φ(s_0) only depends on the initial state, and so potential shaping does not change the set of optimal policies. Moreover, any additive transformation that is not potential shaping will, for some reward R and transition distribution T, produce a set of optimal policies that is disjoint from the original [14]. The set of optimal policies is invariant to constant shifts c ∈ R in the reward; however, this can already be obtained by shifting Φ by c(γ − 1)^{−1}. (Note that constant shifts in the reward of an undiscounted MDP would cause the value function to diverge. Fortunately, the shaping γΦ(s′) − Φ(s) is unchanged by constant shifts to Φ when γ = 1.) Scaling a reward function by a positive factor λ > 0 scales the expected return of all trajectories by λ, also leaving the set of optimal policies unchanged. If RA ≡ RB for some fixed (S, A, γ), then for any MDP\R M− = (S, A, γ, d0, T) we have π∗((M−, RA)) = π∗((M−, RB)), where (M−, R) denotes the MDP specified by M− with reward function R. In other words, RA and RB induce the same optimal policies for all initial state distributions d0 and transition dynamics T. Definition 3.5. Let X be a set and d : X × X → [0, ∞) a function. d is a premetric if d(x, x) = 0 for all x ∈ X. d is a pseudometric if, furthermore, it is symmetric, d(x, y) = d(y, x) for all x, y ∈ X, and satisfies the triangle inequality, d(x, z) ≤ d(x, y) + d(y, z) for all x, y, z ∈ X. d is a metric if, furthermore, for all x, y ∈ X, d(x, y) = 0 =⇒ x = y. We wish for d(RA, RB) = 0 whenever the rewards are equivalent, RA ≡ RB, even if they are not identical, RA ≠ RB. This is forbidden in a metric but permitted in a pseudometric, while retaining other guarantees such as symmetry and the triangle inequality that a metric provides. Accordingly, a pseudometric is usually the best choice for a distance d over reward functions.
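The formal definition of EPIC lives in section 4, which is not reproduced in this excerpt, so the following is only a rough tabular sketch of the recipe summarized in the introduction: average away potential shaping over a coverage distribution, then compare the canonicalized rewards by Pearson correlation. The uniform coverage distribution, the helper names canonicalize and epic_distance, and the sqrt((1 − ρ)/2) mapping from correlation ρ to a distance reflect our reading and may differ in detail from the paper's construction.

import numpy as np

rng = np.random.default_rng(1)
n_s, n_a, gamma = 6, 3, 0.95

def canonicalize(R, n_samples=512):
    # Remove potential shaping by averaging R over states/actions drawn from a
    # coverage distribution (here: uniform over the tabular state and action sets).
    A = rng.integers(n_a, size=n_samples)
    S = rng.integers(n_s, size=n_samples)
    Sp = rng.integers(n_s, size=n_samples)
    exp_x = R[:, A, Sp].mean(axis=1)       # E[R(x, A, S')] for every state x
    mean_R = R[S, A, Sp].mean()            # E[R(S, A, S')]
    # C(R)(s, a, s') = R(s, a, s') + gamma*E[R(s', A, S')] - E[R(s, A, S')] - gamma*E[R(S, A, S')]
    return R + gamma * exp_x[None, None, :] - exp_x[:, None, None] - gamma * mean_R

def epic_distance(R1, R2, n_transitions=2048):
    C1, C2 = canonicalize(R1), canonicalize(R2)
    s = rng.integers(n_s, size=n_transitions)
    a = rng.integers(n_a, size=n_transitions)
    sp = rng.integers(n_s, size=n_transitions)
    rho = np.corrcoef(C1[s, a, sp], C2[s, a, sp])[0, 1]   # Pearson correlation over coverage D
    return np.sqrt(max(0.0, (1.0 - rho) / 2.0))

# Sanity check: a shaped and positively scaled copy of R should be ~0 away (up to Monte Carlo error),
# while an unrelated reward should not.
R = rng.normal(size=(n_s, n_a, n_s))
Phi = rng.normal(size=n_s)
R_equiv = 3.0 * R + gamma * Phi[None, None, :] - Phi[:, None, None]
R_other = rng.normal(size=(n_s, n_a, n_s))
print(epic_distance(R, R_equiv))   # close to 0
print(epic_distance(R, R_other))   # substantially larger

Because Pearson correlation is invariant to positive scaling and the canonicalization cancels potential shaping, the sketch assigns near-zero distance to rewards that are equivalent in the sense of Definition 3.3 while still separating unrelated rewards.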
The paper introduced Equivalent-Policy Invariant Comparison (EPIC) pseudometric to compare different reward functions directly without training a policy function. The authors provide an interesting direction for inverse reinforcement learning. The EPIC distance gives a bound on the regret between policies optimizing for one of the two reward functions relative to the other. The authors also conduct a didactic example to demonstrate efficacy.
SP:7f95a11596f1b1321a691b1b45cff3de69027aaf
Reinforcement Learning for Control with Probabilistic Stability Guarantee
1 INTRODUCTION . Reinforcement learning ( RL ) has achieved superior performance on some complicated control tasks ( Kumar et al. , 2016 ; Xie et al. , 2019 ; Hwangbo et al. , 2019 ) for which the traditional control engineering methods can be hardly applicable ( Åström and Wittenmark , 1973 ; Morari and Zafiriou , 1989 ; Slotine et al. , 1991 ) . The dynamical system to be controlled is often highly stochastic and nonlinear which is typically modeled by Markov decision process ( MDP ) , i.e. , st+1 ∼ P ( st+1|st , at ) , ∀t ∈ Z+ ( 1 ) where s ∈ S ⊂ Rn denotes the state , a ∈ A ⊂ Rm denotes the action and P ( st+1|st , at ) is the transition probability function . An optimal controller can be learned from samples through “ trial and error ” by memorizing what has been experienced ( Kaelbling et al. , 1996 ; Bertsekas , 2019 ) . However , there is a major caveat that prevents the real-world application of learning methods for control engineering applications . Without using a mathematical model , the current sample-based RL methods can not guarantee the stability of the closed-loop system , which is the most important property of any control system as in control theory . The most useful and general approach for studying the stability of a dynamical system is Lyapunov ’ s method Lyapunov ( 1892 ) , which is dominant in control engineering Jiang and Jiang ( 2012 ) ; Lewis et al . ( 2012 ) ; Boukas and Liu ( 2000 ) . In Lyapunov ’ s method , a suitable “ energy-like ” Lyapunov function L ( s ) is selected and its derivative along the system trajectories is ensured to be negative semi-definite , i.e. , L ( st+1 ) − L ( st ) < 0 for all time instants and states , so that the state goes in the direction of decreasing the value of Lyapunov function and eventually converges to the origin or a sub-level set of the Lyapunov function . In the traditional control engineering methods , a mathematical model must be given , i.e. , the transition probability function in ( 1 ) is known . Thus the stability can be analyzed without the need to assess all possible trajectories . However , in learning methods , as the dynamic model is unknown , the “ energy decreasing ” condition has to be verified by trying out all possible consecutive data pairs in the state space , i.e. , to verify infinite inequalities L ( st+1 ) − L ( st ) < 0 . Obviously , the “ infinity ” requirement makes it impractical to directly exploit Lyapunov ’ s method in a model-free framework . In this paper , we show that the mean square stability of the system can be analyzed based on a finite number of samples without knowing the model of the system . The contributions of this paper are summarized as follows : 1 . Instead of verifying an infinite number of inequalities over the state space , it is possible to analyze the stability through a sampling-based method where only one inequality is needed . 2 . Instead of using infinite sample pairs { st+1 , st } , a finite-sample stability theorem is proposed to provide a probabilistic stability guarantee for the system , and the probability is an increasing function of the number M and length T of sampled trajectories and converging to 1 as M and T grow . 3 . As an independent interest , we also derive the policy gradient theorem for learning stabilizing policy with sample pairs and the corresponding algorithm . We further reveal that the classic REINFORCE algorithm ( Williams , 1992 ) is a special case of the proposed algorithm for the stabilization problem . 
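The finite-sample theorem and the single inequality it checks are developed later in the paper (Sections 4-5, not shown in this excerpt), so the snippet below is only an illustrative sketch of the sampling idea behind contributions 1 and 2: estimate the expected Lyapunov decrease E[L(s_{t+1}) − L(s_t)] from M trajectories of length T instead of verifying the inequality at every state. The toy linear dynamics, the fixed feedback controller, and the helper names are our own choices, not the paper's.

import numpy as np

rng = np.random.default_rng(0)

def step(s, a):
    # Toy stochastic dynamics: a stable linear system with small additive noise (illustrative only).
    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    B = np.array([0.0, 0.5])
    return A @ s + B * float(a) + 0.01 * rng.normal(size=2)

def policy(s):
    return -0.4 * s[1]              # a fixed linear feedback controller

def lyapunov(s):
    return float(s @ s)             # candidate Lyapunov function L(s) = ||s||^2

def sampled_energy_decrease(M=200, T=50):
    # Estimate E[L(s_{t+1}) - L(s_t)] over M trajectories of length T.
    # The paper's finite-sample theorem turns this kind of estimate into a probabilistic
    # mean-square-stability guarantee; here we only compute the empirical estimate.
    diffs = []
    for _ in range(M):
        s = rng.normal(size=2)      # random initial state
        for _ in range(T):
            s_next = step(s, policy(s))
            diffs.append(lyapunov(s_next) - lyapunov(s))
            s = s_next
    return float(np.mean(diffs))

print(sampled_energy_decrease())    # negative value: empirical evidence of energy decrease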
We also draw the following takeaways:
• Samples from a finite number M of trajectories of length T can be used for stability analysis with a certain probability. The probability monotonically converges to 1 as M and T grow.
• There is a lower bound on T, and the probability guarantee is much more demanding on M than on T.
• The REINFORCE-like algorithm can learn the controller and the Lyapunov function simultaneously.
The paper is organized as follows: In Section 2, related works are introduced. In Section 3, the definition of mean-square stability (MSS) and the problem statement are given. In Section 4, the sample-based MSS theorem is proposed. In Section 5, we propose the probabilistic stability guarantee when only a finite number of samples are accessible, and the probabilistic bound in relation to the number and length of sampled trajectories is derived. In Section 6, based on the stability theorems, the policy gradient is derived and a model-free RL algorithm (L-REINFORCE) is given. In Section 7, the vanilla version of L-REINFORCE is tested on a simulated Cartpole stabilization task to demonstrate its effectiveness; it is further incorporated with the maximum entropy framework to control more high-dimensional and stochastic systems, including a legged robot, HalfCheetah, and a molecular synthetic biological gene regulatory network (GRN) corrupted by additive and multiplicative uniform noises. 2 RELATED WORKS. Lyapunov’s Method: As a basic tool in control theory, the construction/learning of the Lyapunov function is not trivial and many works are devoted to this problem (Noroozi et al., 2008; Prokhorov, 1994; Serpen, 2005; Prokhorov and Feldkamp, 1999). In Perkins and Barto (2002), the RL agent controls the switch between designed controllers using Lyapunov domain knowledge so that any policy is safe and reliable. Petridis and Petridis (2006) propose a straightforward approach to construct Lyapunov functions for nonlinear systems using neural networks. Richards et al. (2018) propose a learning-based approach for constructing Lyapunov neural networks with a maximized region of attraction. However, these approaches require the model of the system dynamics explicitly. Stability analysis in a model-free manner has not been addressed. In Berkenkamp et al. (2017), local stability is analyzed by validating the “energy decreasing” condition on discretized points in a subset of the state space with the help of a learned model, meaning that only a finite number of inequalities need to be checked. This approach is further extended by using a Noise Contrastive Prior Bayesian RNN in Gallieri et al. (2019). Nevertheless, the discretization technique may become infeasible as the dimension and the space of interest increase, limiting its application to rather simple and low-dimensional systems. Reinforcement Learning: In model-free reinforcement learning (RL), stability is rarely addressed due to the formidable challenge of analyzing and designing the closed-loop system dynamics by solely using samples (Buşoniu et al., 2018), and the associated stability theory in model-free RL remains an open problem (Buşoniu et al., 2018; Gorges, 2017). Recently, Lyapunov analysis has been used in model-free RL to solve control problems with safety constraints (Chow et al., 2018; 2019). In Chow et al.
( 2018 ) , a Lyapunov-based approach for solving constrained Markov decision processes is proposed with a novel way of constructing the Lyapunov function through linear programming . In Chow et al . ( 2019 ) , the above results were further generalized to continuous control tasks . It should be noted that even though Lyapunov-based methods were adopted in these results , neither of them addressed the stability of the system . In Postoyan et al . ( 2017 ) , an initial result is proposed for the stability analysis of deterministic nonlinear systems with optimal controller for infinite-horizon discounted cost , based on the assumption that discount is sufficiently close to 1 . However , in practice , it is rather difficult to guarantee the optimality of the learned policy unless certain assumptions on the system dynamics are made Murray et al . ( 2003 ) ; Abu-Khalaf and Lewis ( 2005 ) ; Jiang and Jiang ( 2015 ) . Furthermore , the exploitation of multi-layer neural networks as function approximations Mnih et al . ( 2015 ) ; Lillicrap et al . ( 2015 ) only adds to the impracticality of this requirement . Given certain information on the model , Adaptive dynamic programming ( ADP ) can guarantee convergence to the optimal solution , and thus stability is naturally ensured Balakrishnan et al . ( 2008 ) . For nonlinear systems with input-affine structure , model-free ADP algorithms can guarantee the stability of the closed-loop system Murray et al . ( 2003 ) ; Abu-Khalaf and Lewis ( 2005 ) ; Shih et al . ( 2007 ) ; Jiang and Jiang ( 2015 ) ; Deptula et al . ( 2018 ) . This paper steps beyond the scope of controlaffine systems and are devoted to learning a controller with a stability guarantee for the general stochastic nonlinear system . To the best of the author ’ s knowledge , the finite sample-based approach for the stability analysis of stochastic nonlinear systems considered in this paper is still missing . For the model-based approaches , promising results on stability analysis are reported but generally based on certain model assumptions . Model predictive control ( MPC ) has long been studying the issue of optimal control of various dynamical systems without violating state and action constraints , and Lyapunov stability is naturally guaranteed ( Mayne and Michalska , 1990 ; Michalska and Mayne , 1993 ; Mayne et al. , 2000 ) . Favorable as it may seem , the nice properties above are built upon the accurate and concise modeling of the dynamics , which narrows its scope to certain fields . In Ostafew et al . ( 2014 ) , a learning-based nonlinear MPC algorithm is proposed to learn the disturbance model online and improve the tracking performance of field robots , but first , a priori model is required . Aswani et al . ( 2013 ) proposed a new learning-based MPC scheme that can provide deterministic guarantees on robustness while performance is improved by identifying a richer model . However , it is limited to the case that a linear model with known uncertainty bound is available . Other results concerning learning-based MPC are referred to Aswani et al . ( 2011 ) ; Bouffard et al . ( 2012 ) ; Di Cairano et al . ( 2013 ) . In Bobiti ( 2017 ) ; Bobiti and Lazar ( 2018 ) , a sampling-based approach for stability analysis and domain of attraction estimation is proposed for deterministic nonlinear systems . The reliability of the estimation is addressed with a probabilistic bound on the number of samples , however , based on the assumption that all the samples are independently distributed . 
This infers that given multiple state trajectories , only the first-step data are applicable for the stability analysis , which is inefficient in a model-free framework and will be improved in this paper . Nevertheless , the aforementioned approach can be favorable in a model-based setup ( Gallieri et al. , 2019 ) , given that 1-step predictions can be performed in parallel . It should also be noted that this paper is to address the stability analysis and control of stochastic systems , while the results above are focused on the deterministic nonlinear systems .
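The L-REINFORCE algorithm itself is derived in Section 6, which is not included in this excerpt, so the sketch below only illustrates the general idea stated in the contributions: a REINFORCE-style policy-gradient update in which the per-step cost along sampled trajectories is a Lyapunov value, so that lowering the expected discounted cost pushes the closed loop toward energy decrease. The toy dynamics, the Gaussian linear policy, the quadratic Lyapunov candidate, and all hyper-parameters are our illustrative assumptions and will differ from the paper's actual objective.

import numpy as np

rng = np.random.default_rng(1)

def step(s, a):
    # Toy stochastic dynamics, illustration only.
    A = np.array([[0.9, 0.1], [0.0, 0.95]])
    B = np.array([0.0, 0.5])
    return A @ s + B * float(a) + 0.01 * rng.normal(size=2)

def lyapunov(s):
    return float(s @ s)                    # candidate Lyapunov (energy) function L(s) = ||s||^2

theta = np.zeros(2)                        # linear Gaussian policy: a ~ N(theta . s, sigma^2)
sigma, lr, gamma, T = 0.3, 0.01, 0.99, 30

for it in range(300):
    s = rng.normal(size=2)
    grad_logp, costs = [], []
    for t in range(T):
        mean = float(theta @ s)
        a = mean + sigma * rng.normal()
        grad_logp.append((a - mean) / sigma ** 2 * s)   # d/dtheta log N(a; theta.s, sigma^2)
        s = step(s, a)
        costs.append(lyapunov(s))                       # per-step cost: Lyapunov value after the step
    # REINFORCE-style update on the discounted Lyapunov cost, with a crude mean baseline.
    G, returns = 0.0, []
    for c in reversed(costs):
        G = c + gamma * G
        returns.append(G)
    returns = np.array(returns[::-1])
    returns -= returns.mean()
    grad = sum(g * R for g, R in zip(grad_logp, returns)) / T
    theta -= lr * grad                                   # descend: lower expected Lyapunov cost

print(theta)    # the gain on s[1] typically drifts negative, i.e. toward damping feedback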
This paper studies the probabilistic stability guarantee of control systems. In general, hard stability guarantee is difficult with only finite samples. The authors instead focus on developing probabilistic stability conditions. High probability bound is derived in terms of the number of trajectories and the length of them. This also leads to a practical policy gradient style algorithm, which is applied to the Cartpole task with desired performance.
SP:fd270313e05eb0d810f7e0fc8f807ae3bcdb9dd0
Reinforcement Learning for Control with Probabilistic Stability Guarantee
**General overview:** The paper studies guaranteeing the closed-loop stability of a Markov decision process (MDP) using a given a policy, based on a finite number of trajectories each containing finite number of steps. Both the state and the action spaces are assumed to be subsets of finite dimensional Euclidean spaces. Particularly, the concept of mean square stability (MSS) is used which is guaranteed based on the properties of certain Lyapunov functions. Theoretical results on probabilistic MSS guarantees are provided based on finite samples. A variant of the standard REINFORCE policy gradient method is also presented which searches for a policy having MSS guarantees. Finally, numerical experiments on a simulated cart-pole problem comparing the suggested L-REINFORCE method with the soft actor-critic (SAC) off-policy RL algorithm are shown.
SP:fd270313e05eb0d810f7e0fc8f807ae3bcdb9dd0
Momentum Contrastive Autoencoder
1 INTRODUCTION . The main goal of generative modeling is to learn a given data distribution while facilitating an efficient way to draw samples from them . Popular algorithms such as variational autoencoders ( VAE , Kingma & Welling ( 2013 ) ) and generative adversarial networks ( GAN , Goodfellow et al . ( 2014 ) ) are theoretically-grounded models designed to meet this goal . However , they come with some challenges . For instance , VAEs suffer from the posterior collapse problem ( Chen et al. , 2016 ; Zhao et al. , 2017 ; Van Den Oord et al. , 2017 ) , and a mismatch between the posterior and prior distribution ( Kingma et al. , 2016 ; Tomczak & Welling , 2018 ; Dai & Wipf , 2019 ; Bauer & Mnih , 2019 ) . GANs are known to have the mode collapse problem ( Che et al. , 2016 ; Dumoulin et al. , 2016 ; Donahue et al. , 2016 ) and optimization instability ( Arjovsky & Bottou , 2017 ) due to their saddle point problem formulation . With the Wasserstein autoencoder ( WAE ) , Tolstikhin et al . ( 2017 ) propose a general theoretical framework that can potentially avoid these challenges . They show that the divergence between two distributions is equivalent to the minimum reconstruction error , under the constraint that the marginal distribution of the latent space is identical to a prior distribution . The core challenge of this framework is to match the latent space distribution to a prior distribution that is easy to sample from . If this challenge is addressed appropriately , WAE can avoid many of the aforementioned challenges of VAE and GANs . Tolstikhin et al . ( 2017 ) investigate GANs and maximum mean discrepancy ( MMD , Gretton et al . ( 2012 ) ) for this task and empirically find that the GAN-based approach yields better performance despite its instability . Others have proposed solutions to overcome this challenge ( Kolouri et al. , 2018 ; Knop et al. , 2018 ) , but they come with their own pitfalls ( see Section 2 ) . This paper aims to design a generative model that avoids the aforementioned challenges of existing approaches . To do so , we build on the WAE framework . In order to tackle the latent space distribution matching problem , we make a simple observation that allows us to use the contrastive learning framework to solve this problem . Contrastive learning achieves state-of-the-art results in selfsupervised representation learning tasks ( He et al. , 2020 ; Chen et al. , 2020 ) by forcing the latent representations to be 1 ) augmentation invariant ; 2 ) distinct for different data samples . It has been shown that the contrastive learning objective corresponding to the latter goal pushes the learned representations to achieve maximum entropy over the unit hyper-sphere ( Wang & Isola , 2020 ) . We observe that applying this contrastive loss term to the latent representation of an AE therefore matches it to the uniform distribution over the unit hyper-sphere . This approach avoids the aforementioned optimization challenges of existing methods , thus resulting in a simple and scalable algorithm for generative modeling that we call Momentum Contrastive Autoencoder ( MoCA ) . 2 RELATED WORK . There are many autoencoder based generative models in existing literature . One of the earliest model in this category is the de-noising autoencoder ( Vincent et al. , 2008 ) . Bengio et al . ( 2013b ) show that training an autoencoder to de-noise a corrupted input leads to the learning of a Markov chain whose stationary distribution is the original data distribution it is trained on . 
However , this results in inefficient sampling and mode mixing problems ( Bengio et al. , 2013b ; Alain & Bengio , 2014 ) . Variational autoencoders ( VAE ) ( Kingma & Welling , 2013 ) overcome these challenges by maximizing a variational lower bound of the data likelihood , which involves a KL term minimizing the divergence between the latent ’ s posterior distribution and a prior distribution . This allows for efficient approximate likelihood estimation as well as posterior inference through ancestral sampling once the model is trained . Despite these advantages , followup works have identified a few important drawbacks of VAEs . The poor sample qualities of VAE has been attributed to a mismatch between the prior ( which is used for drawing samples ) and the posterior ( Kingma et al. , 2016 ; Tomczak & Welling , 2018 ; Dai & Wipf , 2019 ; Bauer & Mnih , 2019 ) . The VAE objective is also at the risk of posterior collapse – learning a latent space distribution which is independent of the input distribution if the KL term dominates the reconstruction term ( Chen et al. , 2016 ; Zhao et al. , 2017 ; Van Den Oord et al. , 2017 ) . Dai & Wipf ( 2019 ) claim that the reason behind poor sample quality of VAEs is a mismatch between the prior and posterior , arising from the latent space dimension of the autoencoder being different from the intrinsic dimensionality of the data manifold ( which is typically unknown ) . To overcome this mismatch , they propose to learn a two stage VAE in which the second stage learns a VAE on the latent space samples of the first . They show that this two stage training and sampling significantly improves the quality of generated samples . However , training a second VAE is computationally expensive and introduces some of the same challenges mentioned above . Ghosh et al . ( 2019 ) observe that VAEs can be interpreted as deterministic autoencoders with noise injected in the latent space as a form of regularization . Based on this observation , they introduce deterministic autoencoders and empirically investigate various other regularizations . The further introduce a post-hoc density estimation for the latent space since the autoencoding step does not match it to a prior . In this context , one can view our proposed algorithm as a way to regularize deterministic autoencoders while simultaneously learning a latent space distribution which can be easily sampled from . Tolstikhin et al . ( 2017 ) make the observation that the optimal transport problem can be equivalently framed as an autoencoder objective ( WAE ) under the constraint that the latent space distribution matches a prior distribution . They experiment with two alternatives to satisfy this constraint in the form of a penalty – MMD ( Gretton et al. , 2012 ) and GAN ( Goodfellow et al. , 2014 ) ) loss , and they find that the latter works better in practice . Training an autoencoder with an adversarial loss was also proposed earlier in adversarial autoencoders ( Makhzani et al. , 2015 ) . Our algorithm builds on the aforementioned WAE theoretical framework due to its theoretical advantages . There has been research that aims at avoiding the latent space distribution matching problem all together by making use of sliced distances . For instance , Kolouri et al . ( 2018 ) observe that Wasserstein distance for one dimensional distributions have a closed form solution . 
Motivated by this, they propose to use the sliced-Wasserstein distance, which involves a large number of projections of the high-dimensional distribution onto one-dimensional spaces, allowing the original Wasserstein distance to be approximated by the average of one-dimensional Wasserstein distances. A similar idea using the sliced-Cramer distance is introduced in Knop et al. (2018). However, the number of required random projections becomes prohibitively high when the data lives on a low-dimensional manifold in a high-dimensional space, making this approach computationally inefficient or otherwise inaccurate (Liutkus et al., 2019). 3 MOMENTUM CONTRASTIVE AUTOENCODER. We present the proposed algorithm in this section. We begin by restating the WAE theorem that connects the autoencoder loss with the Wasserstein distance between two distributions. Let X ∼ P_X be a random variable sampled from the real data distribution on X, Z ∼ Q(Z|X) be its latent representation in Z ⊆ R^d, and X̂ = g(Z) be its reconstruction by a deterministic decoder/generator g : Z → X. Note that the encoder Q(Z|X) can also be deterministic in the WAE framework, and we let f(X) be distributed as Q(Z|X) for some deterministic f : X → Z. Theorem 1. (Bousquet et al., 2017; Tolstikhin et al., 2017) Let P_Z be a prior distribution on Z, let P_g = g#P_Z be the push-forward of P_Z under g (i.e., the distribution of X̂ = g(Z) when Z ∼ P_Z), and let Q_Z = f#P_X be the push-forward of P_X under f. Then,

W_c(P_X, P_g) = inf_{Q : Q_Z = P_Z} E_{X∼P_X, Z∼Q(Z|X)} [c(X, g(Z))] = inf_{f : f#P_X = P_Z} E_{X∼P_X} [c(X, g(f(X)))]   (1)

where W_c denotes the Wasserstein distance for some measurable cost function c. The above theorem states that the Wasserstein distance between the true (P_X) and generated (P_g) data distributions can be equivalently computed by finding the minimum (w.r.t. f) reconstruction loss, under the constraint that the marginal distribution of the latent variable Q_Z matches the prior distribution P_Z. Thus the Wasserstein distance itself can be minimized by jointly minimizing the reconstruction loss w.r.t. both f (encoder) and g (decoder/generator) as long as the constraint is met. In this work, we parameterize the encoder network f : X → R^d such that the latent variable Z = f(X) has unit ℓ2 norm. Our goal is then to match the distribution of this Z to the uniform distribution over the unit hyper-sphere S^d = {z ∈ R^d : ‖z‖_2 = 1}. To do so, we study the so-called “negative sampling” component of the contrastive loss used in self-supervised learning,

L_neg(f; τ, K) = E_{x∼P_X, {x⁻_i}_{i=1}^{K} ∼ P_X} [ log (1/K) ∑_{j=1}^{K} exp( f(x)^T f(x⁻_j) / τ ) ]   (2)

Here, f : X → S^d is a neural network whose output has unit ℓ2 norm, τ is the temperature hyper-parameter, and K is the number of samples (another hyper-parameter). Theorem 1 of Wang & Isola (2020) shows that for any fixed τ, when K → ∞,

lim_{K→∞} ( L_neg(f; τ, K) − log K ) = E_{x∼P_X} [ log E_{x⁻∼P_X} [ exp( f(x)^T f(x⁻) / τ ) ] ]   (3)

Crucially, this limit is minimized exactly when the push-forward f#P_X (i.e., the distribution of the latent random variable Z = f(X) when X ∼ P_X) is uniform on S^d. Moreover, even the Monte Carlo approximation of Eq. 2 (with mini-batch size B and some K such that B ≤ K < ∞),

L^MC_neg(f; τ, K, B) = (1/B) ∑_{i=1}^{B} log (1/K) ∑_{j=1}^{K} exp( f(x_i)^T f(x_j) / τ )   (4)

is a consistent estimator (up to a constant) of the entropy of f#P_X called the redistribution estimate (Ahmad & Lin, 1976).
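As a quick numerical illustration of Eq. (4) and of the claim that minimizing L_neg maximizes the entropy of f#P_X, the NumPy sketch below (function names and constants are ours, not the paper's) evaluates the Monte Carlo negative-sampling loss on unit-norm embeddings that are spread over the hyper-sphere versus embeddings that have collapsed onto a single direction; the spread-out configuration attains the lower loss.

import numpy as np

rng = np.random.default_rng(0)
d, B, K, tau = 8, 128, 1024, 0.5

def l_neg_mc(z_batch, z_pool, tau):
    # Monte Carlo negative-sampling loss of Eq. (4): (1/B) sum_i log (1/K) sum_j exp(z_i . z_j / tau).
    logits = z_batch @ z_pool.T / tau                  # shape (B, K)
    return float(np.mean(np.log(np.mean(np.exp(logits), axis=1))))

def unit(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

spread = unit(rng.normal(size=(K, d)))                 # roughly uniform directions on the sphere
collapsed = unit(rng.normal(size=(1, d)) + 0.05 * rng.normal(size=(K, d)))   # nearly identical directions

print(l_neg_mc(spread[:B], spread, tau))       # lower loss: high-entropy (spread-out) latents
print(l_neg_mc(collapsed[:B], collapsed, tau)) # higher loss: collapsed latents are penalized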
The identification follows if we notice that k(x_i; τ, K) := ∑_{j=1}^{K} exp( f(x_i)^T f(x_j) / τ ) is the un-normalized kernel density estimate of f(x_i) using the i.i.d. samples {x_j}_{j=1}^{K}, so −L^MC_neg(f; τ, K, B) = −(1/B) ∑_{i=1}^{B} log k(x_i; τ, K) (Wang & Isola, 2020). So minimizing L_neg (and importantly L^MC_neg) maximizes the entropy of f#P_X. Tolstikhin et al. (2017) attempted to enforce the constraint that f#P_X and P_Z were matching distributions by regularizing the reconstruction loss with the MMD or a GAN-based estimate of the divergence between f#P_X and P_Z. By letting P_Z be the uniform distribution over the unit hyper-sphere S^d, the insights above allow us to instead minimize the much simpler regularized loss

L(f, g; λ, τ, B, K) = (1/B) ∑_{i=1}^{B} ‖x_i − g(f(x_i))‖_2^2 + λ L^MC_neg(f; τ, K, B)   (5)

Training: For simplicity, we will now use the notation Enc(·) and Dec(·) to respectively denote the encoder and decoder networks of the autoencoder. Further, the d-dimensional output of Enc(·) is ℓ2-normalized, i.e., ‖Enc(x)‖_2 = 1 for all x. Based on the theory above, we aim to minimize the loss L(Enc, Dec; λ, τ, B, K), where λ is the regularization coefficient, τ is the temperature hyper-parameter, B is the mini-batch size, and K ≥ B is the number of samples used to estimate L_neg. In practice, we propose to use the momentum contrast (MoCo, He et al. (2020)) framework to implement L_neg. Let Enc_t be parameterized by θ_t at step t of training. Then, we let Enc′_t be the same encoder parameterized by the exponential moving average θ̃_t = (1 − m) ∑_{i=1}^{t} m^{t−i} θ_i. Letting x_1, ..., x_K be the K most recent training examples, and letting t(j) = t − ⌊j/B⌋ be the time at which x_j appeared in a training mini-batch, we replace L^MC_neg at time step t with

L_MoCo = (1/B) ∑_{i=1}^{B} log (1/K) ∑_{j=1}^{K} exp( Enc_t(x_i)^T Enc′_{t(j)}(x_j) / τ ) − (1/B) ∑_{i=1}^{B} Enc_t(x_i)^T Enc′_t(x_i) / τ   (6)

This approach allows us to use the latent vectors of inputs outside the current mini-batch without re-computing them, offering substantial computational advantages over other contrastive learning frameworks such as SimCLR (Chen et al., 2020). Forcing the parameters of Enc′ to evolve according to an exponential moving average is necessary for training stability, as is the second term encouraging the similarity of Enc_t(x_i) and Enc′_t(x_i) (so-called “positive samples” in the terminology of contrastive learning). Note that we do not use any data augmentations in our algorithm, but this similarity term is still non-trivial since the networks Enc_t and Enc′_t are not identical. Pseudo-code of our final algorithm, which we call Momentum Contrastive Autoencoder (MoCA), is shown in Algorithm 1 (pseudo-code style adapted from He et al. (2020)).

Algorithm 1 PyTorch-like pseudocode of the Momentum Contrastive Autoencoder algorithm
# Enc_q, Enc_k: encoder networks for query and key. Their outputs are L2 normalized
# Dec: decoder network
# Q: dictionary as a queue of K randomly initialized keys (d x K)
# m: momentum
# lambda: regularization coefficient for entropy maximization
# tau: logit temperature
for x in data_loader:                     # load a minibatch x with B samples
    z_q = Enc_q(x)                        # queries: B x d
    z_k = Enc_k(x).detach()               # keys: B x d, no gradient through keys
    x_rec = Dec(z_q)                      # reconstructed input
    l_pos = bmm(z_q.view(B, 1, d), z_k.view(B, d, 1))   # positive logits: B x 1
    l_neg = mm(z_q.view(B, d), Q.view(d, K))            # negative logits: B x K
    logits = cat([l_pos, l_neg], dim=1)                 # logits: B x (1+K)
    # compute loss
    labels = zeros(B)                     # positive elements are in the 0-th index
    L_con = CrossEntropyLoss(logits / tau, labels)      # contrastive loss maximizing entropy of z_q
    L_rec = ((x_rec - x) ** 2).sum() / B                # reconstruction loss
    L = L_rec + lambda * L_con            # momentum contrastive autoencoder loss
    # update Enc_q and Dec networks
    L.backward()
    update(Enc_q.params)
    update(Dec.params)
    # update Enc_k
    Enc_k.params = m * Enc_k.params + (1 - m) * Enc_q.params
    # update dictionary
    enqueue(Q, z_k)                       # enqueue the current minibatch
    dequeue(Q)                            # dequeue the earliest minibatch
(bmm: batch matrix multiplication; mm: matrix multiplication; cat: concatenation. enqueue appends Q with the keys z_k ∈ R^{B×d} from the current batch; dequeue removes the oldest B keys from Q.)

Finally, in all our experiments, inspired by Grill et al. (2020) we set the exponential moving average parameter m for updating the Enc′ network at the t-th iteration as m = 1 − (1 − m_0) · (cos(πt/T) + 1)/2, where T is the total number of training iterations and m_0 is the base momentum hyper-parameter. Inference: Once the model is trained, the marginal distribution of the latent space (i.e., the push-forward Enc#P_X) should be close to a uniform distribution over the unit hyper-sphere. We can therefore draw samples from the learned distribution as follows: we first sample z ∼ N(0, I) from the standard multivariate normal distribution in R^d and then generate a sample x_g := Dec(z/‖z‖_2).
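Two pieces of this procedure have simple closed forms, so here is a minimal hedged sketch of both the cosine momentum schedule and the inference-time sampling step; the function names and the toy identity decoder are ours, not the paper's.

import numpy as np

def momentum_schedule(t, T, m0=0.99):
    # m = 1 - (1 - m0) * (cos(pi * t / T) + 1) / 2: starts at m0 and ramps to 1 by the end of training.
    return 1.0 - (1.0 - m0) * (np.cos(np.pi * t / T) + 1.0) / 2.0

def sample(decoder, d, n, rng=np.random.default_rng(0)):
    # Inference: draw z ~ N(0, I), project to the unit hyper-sphere, and decode.
    z = rng.normal(size=(n, d))
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # z / ||z||_2 is uniform on the sphere
    return decoder(z)

# Toy usage with an identity "decoder" just to show the call pattern.
print(momentum_schedule(0, 1000), momentum_schedule(1000, 1000))   # 0.99 ... 1.0
print(sample(lambda z: z, d=4, n=2))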
Description: The authors propose momentum contrastive Wasserstein autoencoders (MoCA), which is an extension of the Wasserstein autoencoder (WAE) that aims to match the prior p(z) and aggregate variational posterior q(z) through the use of contrastive learning, as opposed to earlier proposed techniques (MMD, GAN). The use of contrastive learning here is theoretically motivated by the fact that -- as shown in Wang & Isola (2020) -- in the limit of infinitely many negative samples, the distribution induced by the contrastive encoder is uniform on the hypersphere. Therefore, if p(z) is the unit + uniform hypersphere, then we can leverage contrastive learning to drive the aggregate variational posterior q(z) to be as close to it as possible. This provides a principled way to train WAEs since its corresponding optimisation assumes that P_Z == Q_Z.
SP:42784c9a7dce81ab38635d4619569766934abff7
Momentum Contrastive Autoencoder
1 INTRODUCTION . The main goal of generative modeling is to learn a given data distribution while facilitating an efficient way to draw samples from them . Popular algorithms such as variational autoencoders ( VAE , Kingma & Welling ( 2013 ) ) and generative adversarial networks ( GAN , Goodfellow et al . ( 2014 ) ) are theoretically-grounded models designed to meet this goal . However , they come with some challenges . For instance , VAEs suffer from the posterior collapse problem ( Chen et al. , 2016 ; Zhao et al. , 2017 ; Van Den Oord et al. , 2017 ) , and a mismatch between the posterior and prior distribution ( Kingma et al. , 2016 ; Tomczak & Welling , 2018 ; Dai & Wipf , 2019 ; Bauer & Mnih , 2019 ) . GANs are known to have the mode collapse problem ( Che et al. , 2016 ; Dumoulin et al. , 2016 ; Donahue et al. , 2016 ) and optimization instability ( Arjovsky & Bottou , 2017 ) due to their saddle point problem formulation . With the Wasserstein autoencoder ( WAE ) , Tolstikhin et al . ( 2017 ) propose a general theoretical framework that can potentially avoid these challenges . They show that the divergence between two distributions is equivalent to the minimum reconstruction error , under the constraint that the marginal distribution of the latent space is identical to a prior distribution . The core challenge of this framework is to match the latent space distribution to a prior distribution that is easy to sample from . If this challenge is addressed appropriately , WAE can avoid many of the aforementioned challenges of VAE and GANs . Tolstikhin et al . ( 2017 ) investigate GANs and maximum mean discrepancy ( MMD , Gretton et al . ( 2012 ) ) for this task and empirically find that the GAN-based approach yields better performance despite its instability . Others have proposed solutions to overcome this challenge ( Kolouri et al. , 2018 ; Knop et al. , 2018 ) , but they come with their own pitfalls ( see Section 2 ) . This paper aims to design a generative model that avoids the aforementioned challenges of existing approaches . To do so , we build on the WAE framework . In order to tackle the latent space distribution matching problem , we make a simple observation that allows us to use the contrastive learning framework to solve this problem . Contrastive learning achieves state-of-the-art results in selfsupervised representation learning tasks ( He et al. , 2020 ; Chen et al. , 2020 ) by forcing the latent representations to be 1 ) augmentation invariant ; 2 ) distinct for different data samples . It has been shown that the contrastive learning objective corresponding to the latter goal pushes the learned representations to achieve maximum entropy over the unit hyper-sphere ( Wang & Isola , 2020 ) . We observe that applying this contrastive loss term to the latent representation of an AE therefore matches it to the uniform distribution over the unit hyper-sphere . This approach avoids the aforementioned optimization challenges of existing methods , thus resulting in a simple and scalable algorithm for generative modeling that we call Momentum Contrastive Autoencoder ( MoCA ) . 2 RELATED WORK . There are many autoencoder based generative models in existing literature . One of the earliest model in this category is the de-noising autoencoder ( Vincent et al. , 2008 ) . Bengio et al . ( 2013b ) show that training an autoencoder to de-noise a corrupted input leads to the learning of a Markov chain whose stationary distribution is the original data distribution it is trained on . 
However , this results in inefficient sampling and mode mixing problems ( Bengio et al. , 2013b ; Alain & Bengio , 2014 ) . Variational autoencoders ( VAE ) ( Kingma & Welling , 2013 ) overcome these challenges by maximizing a variational lower bound of the data likelihood , which involves a KL term minimizing the divergence between the latent ’ s posterior distribution and a prior distribution . This allows for efficient approximate likelihood estimation as well as posterior inference through ancestral sampling once the model is trained . Despite these advantages , followup works have identified a few important drawbacks of VAEs . The poor sample qualities of VAE has been attributed to a mismatch between the prior ( which is used for drawing samples ) and the posterior ( Kingma et al. , 2016 ; Tomczak & Welling , 2018 ; Dai & Wipf , 2019 ; Bauer & Mnih , 2019 ) . The VAE objective is also at the risk of posterior collapse – learning a latent space distribution which is independent of the input distribution if the KL term dominates the reconstruction term ( Chen et al. , 2016 ; Zhao et al. , 2017 ; Van Den Oord et al. , 2017 ) . Dai & Wipf ( 2019 ) claim that the reason behind poor sample quality of VAEs is a mismatch between the prior and posterior , arising from the latent space dimension of the autoencoder being different from the intrinsic dimensionality of the data manifold ( which is typically unknown ) . To overcome this mismatch , they propose to learn a two stage VAE in which the second stage learns a VAE on the latent space samples of the first . They show that this two stage training and sampling significantly improves the quality of generated samples . However , training a second VAE is computationally expensive and introduces some of the same challenges mentioned above . Ghosh et al . ( 2019 ) observe that VAEs can be interpreted as deterministic autoencoders with noise injected in the latent space as a form of regularization . Based on this observation , they introduce deterministic autoencoders and empirically investigate various other regularizations . The further introduce a post-hoc density estimation for the latent space since the autoencoding step does not match it to a prior . In this context , one can view our proposed algorithm as a way to regularize deterministic autoencoders while simultaneously learning a latent space distribution which can be easily sampled from . Tolstikhin et al . ( 2017 ) make the observation that the optimal transport problem can be equivalently framed as an autoencoder objective ( WAE ) under the constraint that the latent space distribution matches a prior distribution . They experiment with two alternatives to satisfy this constraint in the form of a penalty – MMD ( Gretton et al. , 2012 ) and GAN ( Goodfellow et al. , 2014 ) ) loss , and they find that the latter works better in practice . Training an autoencoder with an adversarial loss was also proposed earlier in adversarial autoencoders ( Makhzani et al. , 2015 ) . Our algorithm builds on the aforementioned WAE theoretical framework due to its theoretical advantages . There has been research that aims at avoiding the latent space distribution matching problem all together by making use of sliced distances . For instance , Kolouri et al . ( 2018 ) observe that Wasserstein distance for one dimensional distributions have a closed form solution . 
Motivated by this, they propose to use the sliced-Wasserstein distance, which involves a large number of projections of the high-dimensional distribution onto one-dimensional spaces, allowing the original Wasserstein distance to be approximated by the average of one-dimensional Wasserstein distances. A similar idea using the sliced-Cramer distance is introduced in Knop et al. (2018). However, the number of required random projections becomes prohibitively high when the data lives on a low-dimensional manifold in a high-dimensional space, making this approach computationally inefficient or otherwise inaccurate (Liutkus et al., 2019). 3 MOMENTUM CONTRASTIVE AUTOENCODER . We present the proposed algorithm in this section. We begin by restating the WAE theorem that connects the autoencoder loss with the Wasserstein distance between two distributions. Let X ∼ PX be a random variable sampled from the real data distribution on X, Z ∼ Q(Z|X) be its latent representation in Z ⊆ Rd, and X̂ = g(Z) be its reconstruction by a deterministic decoder/generator g : Z → X. Note that the encoder Q(Z|X) can also be deterministic in the WAE framework, and we let f(X) be equal in distribution to Q(Z|X) for some deterministic f : X → Z. Theorem 1. (Bousquet et al., 2017; Tolstikhin et al., 2017) Let PZ be a prior distribution on Z, let Pg = g#PZ be the push-forward of PZ under g (i.e., the distribution of X̂ = g(Z) when Z ∼ PZ), and let QZ = f#PX be the push-forward of PX under f. Then, Wc(PX, Pg) = inf_{Q : QZ = PZ} E_{X∼PX, Z∼Q(Z|X)} [ c(X, g(Z)) ] = inf_{f : f#PX = PZ} E_{X∼PX} [ c(X, g(f(X))) ]   (1) where Wc denotes the Wasserstein distance for some measurable cost function c. The above theorem states that the Wasserstein distance between the true (PX) and generated (Pg) data distributions can be equivalently computed by finding the minimum (w.r.t. f) reconstruction loss, under the constraint that the marginal distribution of the latent variable QZ matches the prior distribution PZ. Thus the Wasserstein distance itself can be minimized by jointly minimizing the reconstruction loss w.r.t. both f (encoder) and g (decoder/generator) as long as the constraint is met. In this work, we parameterize the encoder network f : X → Rd such that the latent variable Z = f(X) has unit ℓ2 norm. Our goal is then to match the distribution of this Z to the uniform distribution over the unit hyper-sphere Sd = { z ∈ Rd : ‖z‖2 = 1 }. To do so, we study the so-called "negative sampling" component of the contrastive loss used in self-supervised learning, Lneg(f; τ, K) = E_{x∼PX, {x−_i}_{i=1}^{K}∼PX} [ log (1/K) Σ_{j=1}^{K} e^{f(x)^T f(x−_j)/τ} ]   (2) Here, f : X → Sd is a neural network whose output has unit ℓ2 norm, τ is the temperature hyper-parameter, and K is the number of samples (another hyper-parameter). Theorem 1 of Wang & Isola (2020) shows that for any fixed τ, when K → ∞, lim_{K→∞} ( Lneg(f; τ, K) − log K ) = E_{x∼PX} [ log E_{x−∼PX} [ e^{f(x)^T f(x−)/τ} ] ]   (3) Crucially, this limit is minimized exactly when the push-forward f#PX (i.e., the distribution of the latent random variable Z = f(X) when X ∼ PX) is uniform on Sd. Moreover, even the Monte Carlo approximation of Eq. 2 (with mini-batch size B and some K such that B ≤ K < ∞) L^MC_neg(f; τ, K, B) = (1/B) Σ_{i=1}^{B} log (1/K) Σ_{j=1}^{K} e^{f(x_i)^T f(x_j)/τ}   (4) is a consistent estimator (up to a constant) of the entropy of f#PX called the redistribution estimate (Ahmad & Lin, 1976).
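To make Eq. 4 concrete, the following is a minimal PyTorch-style sketch of L^MC_neg (our illustration, not the authors' implementation), written for the special case K = B, i.e., the negatives are simply the other samples of the current mini-batch.

import math
import torch
import torch.nn.functional as F

def mc_negative_sampling_loss(z, tau=0.1):
    # z: a (B, d) tensor of encoder outputs; re-normalized so that each row has unit L2 norm.
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau  # (B, B) matrix of f(x_i)^T f(x_j) / tau
    # (1/B) sum_i log( (1/K) sum_j exp(sim_ij) ), with K = B here
    return (torch.logsumexp(sim, dim=1) - math.log(z.size(0))).mean()

Minimizing this term pushes the normalized latent codes towards the maximum-entropy (uniform) distribution on the unit hyper-sphere, which is exactly the condition f#PX = PZ that Theorem 1 requires when PZ is chosen to be uniform on the sphere.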
This consistency follows if we notice that k(x_i; τ, K) := Σ_{j=1}^{K} e^{f(x_i)^T f(x_j)/τ} is the un-normalized kernel density estimate of f(x_i) using the i.i.d. samples {x_j}_{j=1}^{K}, so −L^MC_neg(f; τ, K, B) = −(1/B) Σ_{i=1}^{B} log k(x_i; τ, K) up to the additive constant log K (Wang & Isola, 2020). So minimizing Lneg (and importantly L^MC_neg) maximizes the entropy of f#PX. Tolstikhin et al. (2017) attempted to enforce the constraint that f#PX and PZ were matching distributions by regularizing the reconstruction loss with the MMD or a GAN-based estimate of the divergence between f#PX and PZ. By letting PZ be the uniform distribution over the unit hyper-sphere Sd, the insights above allow us to instead minimize the much simpler regularized loss L(f, g; λ, τ, B, K) = (1/B) Σ_{i=1}^{B} ‖ x_i − g(f(x_i)) ‖_2^2 + λ L^MC_neg(f; τ, K, B)   (5) Training: For simplicity, we will now use the notation Enc(·) and Dec(·) to respectively denote the encoder and decoder network of the autoencoder. Further, the d-dimensional output of Enc(·) is ℓ2-normalized, i.e., ‖Enc(x)‖_2 = 1 for all x. Based on the theory above, we aim to minimize the loss L(Enc, Dec; λ, τ, B, K), where λ is the regularization coefficient, τ is the temperature hyper-parameter, B is the mini-batch size, and K ≥ B is the number of samples used to estimate Lneg; pseudo-code is given in Algorithm 1 below. In practice, we propose to use the momentum contrast (MoCo, He et al. (2020)) framework to implement Lneg. Let Enc_t be parameterized by θ_t at step t of training. Then, we let Enc′_t be the same encoder parameterized by the exponential moving average θ̃_t = (1 − m) Σ_{i=1}^{t} m^{t−i} θ_i.

Algorithm 1 PyTorch-like pseudocode of the Momentum Contrastive Autoencoder algorithm

# Enc_q, Enc_k: encoder networks for query and key. Their outputs are L2 normalized
# Dec: decoder network
# Q: dictionary as a queue of K randomly initialized keys (d x K)
# m: momentum
# lambda: regularization coefficient for entropy maximization
# tau: logit temperature
for x in data_loader:  # load a minibatch x with B samples
    z_q = Enc_q(x)             # queries: B x d
    z_k = Enc_k(x).detach()    # keys: B x d, no gradient through keys
    x_rec = Dec(z_q)           # reconstructed input
    # positive logits: B x 1
    l_pos = bmm(z_q.view(B, 1, d), z_k.view(B, d, 1))
    # negative logits: B x K
    l_neg = mm(z_q.view(B, d), Q.view(d, K))
    # logits: B x (1 + K)
    logits = cat([l_pos, l_neg], dim=1)
    # compute loss
    labels = zeros(B)  # positive elements are in the 0-th index
    L_con = CrossEntropyLoss(logits / tau, labels)  # contrastive loss maximizing entropy of z_q
    L_rec = ((x_rec - x) ** 2).sum() / B            # reconstruction loss
    L = L_rec + lambda * L_con                      # momentum contrastive autoencoder loss
    # update Enc_q and Dec networks
    L.backward()
    update(Enc_q.params)
    update(Dec.params)
    # update Enc_k
    Enc_k.params = m * Enc_k.params + (1 - m) * Enc_q.params
    # update dictionary
    enqueue(Q, z_k)  # enqueue the current minibatch
    dequeue(Q)       # dequeue the earliest minibatch

bmm: batch matrix multiplication; mm: matrix multiplication; cat: concatenation. enqueue appends Q with the keys z_k ∈ R^{B×d} from the current batch; dequeue removes the oldest B keys from Q.
Letting x_1, ..., x_K be the K most recent training examples, and letting t(j) = t − ⌊j/B⌋ be the time at which x_j appeared in a training mini-batch, we replace L^MC_neg at time step t with L_MoCo = (1/B) Σ_{i=1}^{B} log (1/K) Σ_{j=1}^{K} exp( Enc_t(x_i)^T Enc′_{t(j)}(x_j) / τ ) − (1/B) Σ_{i=1}^{B} Enc_t(x_i)^T Enc′_t(x_i) / τ   (6) This approach allows us to use the latent vectors of inputs outside the current mini-batch without re-computing them, offering substantial computational advantages over other contrastive learning frameworks such as SimCLR (Chen et al., 2020). Forcing the parameters of Enc′ to evolve according to an exponential moving average is necessary for training stability, as is the second term encouraging the similarity of Enc_t(x_i) and Enc′_t(x_i) (so-called "positive samples" in the terminology of contrastive learning). Note that we do not use any data augmentations in our algorithm, but this similarity term is still non-trivial since the networks Enc_t and Enc′_t are not identical. Pseudo-code of our final algorithm, which we call Momentum Contrastive Autoencoder (MoCA), is shown in Algorithm 1 (pseudo-code style adapted from He et al. (2020)). Finally, in all our experiments, inspired by Grill et al. (2020) we set the exponential moving average parameter m for updating the Enc′ network at the t-th iteration as m = 1 − (1 − m0) · (cos(πt/T) + 1)/2, where T is the total number of training iterations, and m0 is the base momentum hyper-parameter. Inference: Once the model is trained, the marginal distribution of the latent space (i.e., the push-forward Enc#PX) should be close to a uniform distribution over the unit hyper-sphere. We can therefore draw samples from the learned distribution as follows: we first sample z ∼ N(0, I) from the standard multivariate normal distribution in Rd and then generate a sample x_g := Dec(z/‖z‖_2).
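Both the cosine momentum schedule and the inference procedure described above fit in a few lines. The sketch below is an illustration under the stated formulas (the decoder and its latent dimension are placeholders, not taken from the released code).

import math
import torch

def momentum_at_step(t, T, m0):
    # m = 1 - (1 - m0) * (cos(pi * t / T) + 1) / 2: equals m0 at t = 0 and ramps up to 1 at t = T.
    return 1.0 - (1.0 - m0) * (math.cos(math.pi * t / T) + 1.0) / 2.0

@torch.no_grad()
def sample_from_moca(decoder, num_samples, latent_dim, device="cpu"):
    # Sample z ~ N(0, I); z / ||z||_2 is then uniform on the unit hyper-sphere,
    # which matches the prior P_Z the encoder was trained to hit.
    z = torch.randn(num_samples, latent_dim, device=device)
    z = z / z.norm(dim=1, keepdim=True)
    return decoder(z)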
This paper considers training autoencoders with a hyperspherical latent space distribution. The encoder maps inputs to latent variables with unit norm, and a contrastive loss with momentum is used to encourage the latent variables to be distributed uniformly over the surface of the unit hyper-sphere. Generations of this autoencoder model are compared against Wasserstein autoencoders and several VAEs and evaluated with FID scores on CIFAR10 and CelebA. On CelebA the model is outperformed by both WAE-GAN (Wasserstein autoencoders trained with a GAN objective to match the latent distributions) and a 2-stage VAE. On CIFAR10 the model outperforms related work, but only when using a different encoder/decoder architecture.
SP:42784c9a7dce81ab38635d4619569766934abff7
FOCAL: Efficient Fully-Offline Meta-Reinforcement Learning via Distance Metric Learning and Behavior Regularization
1 INTRODUCTION . Applications of reinforcement learning (RL) in real-world problems have been proven successful in many domains such as games (Silver et al., 2017; Vinyals et al., 2019; Ye et al., 2020) and robot control (Johannink et al., 2019). However, the implementations so far usually rely on interactions with either real or simulated environments. In other areas like healthcare (Gottesman et al., 2019), autonomous driving (Shalev-Shwartz et al., 2016) and controlled-environment agriculture (Binas et al., 2019), where RL shows promise conceptually or in theory, exploration in real environments is evidently risky, and building a high-fidelity simulator can be costly. Therefore a key step towards more practical RL algorithms is the ability to learn from static data. Such a paradigm, termed "offline RL" or "batch RL", would enable better generalization by incorporating diverse prior experience. Moreover, by leveraging and reusing previously collected data, off-policy algorithms such as SAC (Haarnoja et al., 2018) have been shown to achieve far better sample efficiency than on-policy methods. The same applies to offline RL algorithms since they are by nature off-policy. The aforementioned design principles motivated a surge of recent works on offline/batch RL (Fujimoto et al., 2019; Kumar et al., 2019; Wu et al., 2019; Siegel et al., 2020). These papers propose remedies by regularizing the learner to stay close to the logged transitions of the training datasets, namely the behavior policy, in order to mitigate the effect of bootstrapping error (Kumar et al., 2019), where evaluation errors of out-of-distribution state-action pairs are never corrected and hence easily diverge due to the inability to collect new data samples for feedback. There exist claims that offline RL can be implemented successfully without explicit correction for distribution mismatch given sufficiently large and diverse training data (Agarwal et al., 2020). (∗Correspondence to: Lanqing Li <lanqingli1993@gmail.com>, Dijun Luo <dijunluo@tencent.com>. †Work done while an intern at Tencent AI Lab. 1Source code: https://github.com/FOCAL-ICLR/FOCAL-ICLR/) However, we find such an assumption unrealistic in many practical settings, including our experiments. In this paper, to tackle the out-of-distribution problem in offline RL in general, we adopt the proposal of behavior regularization by Wu et al. (2019). For practical RL, besides the ability to learn without exploration, it's also ideal to have an algorithm that can generalize to various scenarios. To solve real-world challenges in the multi-task setting, such as treating different diseases, driving under various road conditions or growing diverse crops in autonomous greenhouses, a robust agent is expected to quickly transfer and adapt to unseen tasks, especially when the tasks share common structures. Meta-learning methods (Vilalta & Drissi, 2002; Thrun & Pratt, 2012) address this problem by learning an inductive bias from experience collected across a distribution of tasks, which can be naturally extended to the context of reinforcement learning. Under the umbrella of this so-called meta-RL, almost all current methods require on-policy data either during both meta-training and testing phases (Wang et al., 2016; Duan et al., 2016; Finn et al., 2017) or at least during the testing stage (Rakelly et al., 2019) for adaptation.
An efficient and robust method which incorporates both fully-offline learning and meta-learning in RL, despite a few attempts (Li et al., 2019b; Dorfman & Tamar, 2020), has not been fully developed and validated. In this paper, under the first principle of maximizing the practicality of RL algorithms, we propose an efficient method that integrates task inference with RL algorithms in a fully-offline fashion. Our fully-offline context-based actor-critic meta-RL algorithm, or FOCAL, achieves excellent sample efficiency and fast adaptation with limited logged experience, on a range of deterministic continuous control meta-environments. The primary contribution of this work is designing the first end-to-end and model-free offline meta-RL algorithm which is computationally efficient and effective without any prior knowledge of task identity or reward/dynamics. To achieve efficient task inference, we propose an inverse-power loss for effective learning and clustering of task latent variables, in analogy to the Coulomb potential in electromagnetism, which has not appeared in previous work. We also shed light on the specific design choices customized for the OMRL problem by theoretical and empirical analyses. 2 RELATED WORK . Meta-RL Our work FOCAL builds upon the meta-learning framework in the context of reinforcement learning. Among all paradigms of meta-RL, this paper is most related to the context-based and metric-based approaches. Context-based meta-RL employs models with memory such as recurrent (Duan et al., 2016; Wang et al., 2016; Fakoor et al., 2019), recursive (Mishra et al., 2017) or probabilistic (Rakelly et al., 2019) structures to achieve fast adaptation by aggregating experience into a latent representation on which the policy is conditioned. The design of the context usually leverages the temporal or Markov properties of RL problems. Metric-based meta-RL focuses on learning effective task representations to facilitate task inference and conditioned control policies, by employing techniques such as distance metric learning (Yang & Jin, 2006). Koch et al. (2015) proposed the first metric-based meta-algorithm for few-shot learning, in which a Siamese network (Chopra et al., 2005) is trained with a triplet loss to compare the similarity between a query and supports in the embedding space. Many metric-based meta-RL algorithms extend these works (Snell et al., 2017; Sung et al., 2018; Li et al., 2019a). Among all the aforementioned meta-learning approaches, this paper is most related to the context-based PEARL algorithm (Rakelly et al., 2019) and metric-based prototypical networks (Snell et al., 2017). PEARL achieves SOTA performance for off-policy meta-RL by introducing a probabilistic permutation-invariant context encoder, along with a design which disentangles task inference and control by different sampling strategies. However, it requires exploration during meta-testing. The prototypical networks employ a similar design of context encoder as well as a Euclidean distance metric on a deterministic embedding space, but tackle meta-learning of classification tasks with a squared-distance loss, as opposed to the inverse-power loss in FOCAL for the more complex OMRL problem. Offline/Batch RL To address the bootstrapping error (Kumar et al., 2019) problem of offline RL, this paper adopts behavior regularization directly from Wu et al.
(2019), which provides a relatively unified framework of several recent offline or off-policy RL methods (Haarnoja et al., 2018; Fujimoto et al., 2019; Kumar et al., 2019). It incorporates a divergence function between distributions over state-actions in the actor-critic objectives. As with SAC (Haarnoja et al., 2018), one limitation of the algorithm is its sensitivity to reward scale and regularization strength. In our experiments, we indeed observed a wide spread of optimal hyper-parameters across different meta-RL environments, shown in Table 4. Offline Meta-RL To the best of our knowledge, despite attracting more and more attention, the offline meta-RL problem is still understudied. We are aware of a few papers that tackle the same problem from different angles (Li et al., 2019b; Dorfman & Tamar, 2020). Li et al. (2019b) focuses on a specific scenario where biased datasets make the task inference module prone to overfit the state-action distributions, ignoring the reward/dynamics information. This so-called MDP ambiguity problem occurs when datasets of different tasks do not have significant overlap in their state-action visitation frequencies, and is exacerbated by sparse rewards. Their method MBML requires training of offline BCQ (Fujimoto et al., 2019) and reward/dynamics models for each task, which are computationally demanding, whereas our method is end-to-end and model-free. Dorfman & Tamar (2020), on the other hand, formulate the OMRL as a Bayesian RL (Ghavamzadeh et al., 2016) problem and employ a probabilistic approach for Bayes-optimal exploration. Therefore we consider their methodology tangential to ours. 3 PRELIMINARIES . 3.1 NOTATIONS AND PROBLEM STATEMENT . We consider fully-observed Markov Decision Processes (MDPs) (Puterman, 2014) in deterministic environments such as MuJoCo (Todorov et al., 2012). An MDP can be modeled as M = (S, A, P, R, ρ0, γ) with state space S, action space A, transition function P(s′|s, a), bounded reward function R(s, a), initial state distribution ρ0(s) and discount factor γ ∈ (0, 1). The goal is to find a policy π(a|s) to maximize the cumulative discounted reward starting from any state. We introduce the notion of the multi-step state marginal of policy π as µ_π^t(s), which denotes the distribution over the state space after rolling out π for t steps starting from state s. The notation R_π(s) denotes the expected reward at state s when following policy π: R_π(s) = E_{a∼π}[R(s, a)]. The state-value function (a.k.a. value function) and action-value function (a.k.a. Q-function) are therefore V_π(s) = Σ_{t=0}^{∞} γ^t E_{s_t∼µ_π^t(s)}[R(s_t)]   (1) and Q_π(s, a) = R(s, a) + γ E_{s′∼P(s′|s, a)}[V_π(s′)]   (2) Q-learning algorithms are implemented by iterating the Bellman optimality operator B, defined as: (BQ̂)(s, a) := R(s, a) + γ E_{P(s′|s, a)}[max_{a′} Q̂(s′, a′)]   (3) When the state space is large/continuous, Q̂ is used as a hypothesis from the set of function approximators (e.g., neural networks). In the offline context of this work, given a distribution of tasks p(T) where every task is an MDP, we study off-policy meta-learning from collections of static datasets of transitions D_i = {(s_{i,t}, a_{i,t}, s′_{i,t}, r_{i,t}) | t = 1, ..., N} generated by a set of behavior policies {β_i(a|s)} associated with each task index i.
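As a small illustration of Eq. 3 (ours, not part of FOCAL), the Bellman optimality operator in the tabular case is a one-line backup over arrays; iterating it until convergence is value iteration.

import numpy as np

def bellman_optimality_backup(Q, R, P, gamma):
    # Q, R: arrays of shape (|S|, |A|); P: array of shape (|S|, |A|, |S|) with entries P[s, a, s'].
    V = Q.max(axis=1)  # max_{a'} Q(s', a') for every successor state s'
    return R + gamma * np.einsum("sat,t->sa", P, V)

# Value iteration: repeatedly set Q = bellman_optimality_backup(Q, R, P, gamma) until Q stops changing.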
A key underlying assumption of meta-learning is that the tasks share some common structures. By the definition of an MDP, in this paper we restrict our attention to tasks that share the same state and action space but differ in their transition and reward functions. We define the meta-optimization objective as L(θ) = E_{T_i∼p(T)}[L_{T_i}(θ)]   (4) where L_{T_i}(θ) is the objective evaluated on transition samples drawn from task T_i. A common choice of p(T) is the uniform distribution on the set of given tasks {T_i | i = 1, ..., n}. In this case, the meta-training procedure turns into minimizing the average losses across all training tasks: θ̂_meta = arg min_θ (1/n) Σ_{k=1}^{n} E[L_k(θ)]   (5)
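With p(T) uniform over the n training tasks, Eq. 5 is simply an average of per-task losses computed on the static datasets. A hypothetical sketch follows; task_loss is a placeholder standing in for L_{T_i}(θ), not FOCAL's actual actor-critic objective.

import torch

def meta_training_loss(theta, task_batches, task_loss):
    # task_batches: one mini-batch of logged transitions from D_i for each training task T_i.
    # task_loss(theta, batch) plays the role of L_{T_i}(theta) in Eq. 4.
    losses = [task_loss(theta, batch) for batch in task_batches]
    return torch.stack(losses).mean()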
This paper tackles the problem of offline meta reinforcement learning, where an agent aims to learn a policy which can adapt to an unseen task (dynamics/reward), but from entirely offline data. As a result of being fully offline, the agent can no longer explore in the new task at test time, but instead receives randomly sampled transitions from the new task, from which it must infer the task. They then propose a method for learning task inference from fully offline data, as well as a policy conditioned on this task encoding, learned from offline data and built on the behavior regularized actor critic (BRAC). Results indicate that this approach outperforms PEARL as well as multi-task offline RL with BCQ.
SP:4bafcd10c07834bb7e4008f5d089edda2c7a01e5
FOCAL: Efficient Fully-Offline Meta-Reinforcement Learning via Distance Metric Learning and Behavior Regularization
1 INTRODUCTION . Applications of reinforcement learning (RL) in real-world problems have been proven successful in many domains such as games (Silver et al., 2017; Vinyals et al., 2019; Ye et al., 2020) and robot control (Johannink et al., 2019). However, the implementations so far usually rely on interactions with either real or simulated environments. In other areas like healthcare (Gottesman et al., 2019), autonomous driving (Shalev-Shwartz et al., 2016) and controlled-environment agriculture (Binas et al., 2019), where RL shows promise conceptually or in theory, exploration in real environments is evidently risky, and building a high-fidelity simulator can be costly. Therefore a key step towards more practical RL algorithms is the ability to learn from static data. Such a paradigm, termed "offline RL" or "batch RL", would enable better generalization by incorporating diverse prior experience. Moreover, by leveraging and reusing previously collected data, off-policy algorithms such as SAC (Haarnoja et al., 2018) have been shown to achieve far better sample efficiency than on-policy methods. The same applies to offline RL algorithms since they are by nature off-policy. The aforementioned design principles motivated a surge of recent works on offline/batch RL (Fujimoto et al., 2019; Kumar et al., 2019; Wu et al., 2019; Siegel et al., 2020). These papers propose remedies by regularizing the learner to stay close to the logged transitions of the training datasets, namely the behavior policy, in order to mitigate the effect of bootstrapping error (Kumar et al., 2019), where evaluation errors of out-of-distribution state-action pairs are never corrected and hence easily diverge due to the inability to collect new data samples for feedback. There exist claims that offline RL can be implemented successfully without explicit correction for distribution mismatch given sufficiently large and diverse training data (Agarwal et al., 2020). (∗Correspondence to: Lanqing Li <lanqingli1993@gmail.com>, Dijun Luo <dijunluo@tencent.com>. †Work done while an intern at Tencent AI Lab. 1Source code: https://github.com/FOCAL-ICLR/FOCAL-ICLR/) However, we find such an assumption unrealistic in many practical settings, including our experiments. In this paper, to tackle the out-of-distribution problem in offline RL in general, we adopt the proposal of behavior regularization by Wu et al. (2019). For practical RL, besides the ability to learn without exploration, it's also ideal to have an algorithm that can generalize to various scenarios. To solve real-world challenges in the multi-task setting, such as treating different diseases, driving under various road conditions or growing diverse crops in autonomous greenhouses, a robust agent is expected to quickly transfer and adapt to unseen tasks, especially when the tasks share common structures. Meta-learning methods (Vilalta & Drissi, 2002; Thrun & Pratt, 2012) address this problem by learning an inductive bias from experience collected across a distribution of tasks, which can be naturally extended to the context of reinforcement learning. Under the umbrella of this so-called meta-RL, almost all current methods require on-policy data either during both meta-training and testing phases (Wang et al., 2016; Duan et al., 2016; Finn et al., 2017) or at least during the testing stage (Rakelly et al., 2019) for adaptation.
An efficient and robust method which incorporates both fully-offline learning and meta-learning in RL , despite few attempts ( Li et al. , 2019b ; Dorfman & Tamar , 2020 ) , has not been fully developed and validated . In this paper , under the first principle of maximizing practicality of RL algorithm , we propose an efficient method that integrates task inference with RL algorithms in a fully-offline fashion . Our fully-offline context-based actor-critic meta-RL algorithm , or FOCAL , achieves excellent sample efficiency and fast adaptation with limited logged experience , on a range of deterministic continuous control meta-environments . The primary contribution of this work is designing the first end-to-end and model-free offline meta-RL algorithm which is computationally efficient and effective without any prior knowledge of task identity or reward/dynamics . To achieve efficient task inference , we propose an inverse-power loss for effective learning and clustering of task latent variables , in analogy to coulomb potential in electromagnetism , which is also unseen in previous work . We also shed light on the specific design choices customized for OMRL problem by theoretical and empirical analyses . 2 RELATED WORK . Meta-RL Our work FOCAL builds upon the meta-learning framework in the context of reinforcement learning . Among all paradigms of meta-RL , this paper is most related to the context-based and metric-based approaches . Context-based meta-RL employs models with memory such as recurrent ( Duan et al. , 2016 ; Wang et al. , 2016 ; Fakoor et al. , 2019 ) , recursive ( Mishra et al. , 2017 ) or probabilistic ( Rakelly et al. , 2019 ) structures to achieve fast adaptation by aggregating experience into a latent representation on which the policy is conditioned . The design of the context usually leverages the temporal or Markov properties of RL problems . Metric-based meta-RL focuses on learning effective task representations to facilitate task inference and conditioned control policies , by employing techniques such as distance metric learning ( Yang & Jin , 2006 ) . Koch et al . ( 2015 ) proposed the first metric-based meta-algorithm for few-shot learning , in which a Siamese network ( Chopra et al. , 2005 ) is trained with triplet loss to compare the similarity between a query and supports in the embedding space . Many metric-based meta-RL algorithms extend these works ( Snell et al. , 2017 ; Sung et al. , 2018 ; Li et al. , 2019a ) . Among all aforementioned meta-learning approaches , this paper is most related to the contextbased PEARL algorithm ( Rakelly et al. , 2019 ) and metric-based prototypical networks ( Snell et al. , 2017 ) . PEARL achieves SOTA performance for off-policy meta-RL by introducing a probabilistic permutation-invariant context encoder , along with a design which disentangles task inference and control by different sampling strategies . However , it requires exploration during meta-testing . The prototypical networks employ similar design of context encoder as well as an Euclidean distance metric on deterministic embedding space , but tackles meta-learning of classification tasks with squared distance loss as opposed to the inverse-power loss in FOCAL for the more complex OMRL problem . Offline/Batch RL To address the bootstrapping error ( Kumar et al. , 2019 ) problem of offline RL , this paper adopts behavior regularization directly from Wu et al . 
( 2019 ) , which provides a relatively unified framework of several recent offline or off-policy RL methods ( Haarnoja et al. , 2018 ; Fujimoto et al. , 2019 ; Kumar et al. , 2019 ) . It incorporates a divergence function between distributions over state-actions in the actor-critic objectives . As with SAC ( Haarnoja et al. , 2018 ) , one limitation of the algorithm is its sensitivity to reward scale and regularization strength . In our experiments , we indeed observed wide spread of optimal hyper-parameters across different meta-RL environments , shown in Table 4 . Offline Meta-RL To the best of our knowledge , despite attracting more and more attention , the offline meta-RL problem is still understudied . We are aware of a few papers that tackle the same problem from different angles ( Li et al. , 2019b ; Dorfman & Tamar , 2020 ) . Li et al . ( 2019b ) focuses on a specific scenario where biased datasets make the task inference module prone to overfit the state-action distributions , ignoring the reward/dynamics information . This so-called MDP ambiguity problem occurs when datasets of different tasks do not have significant overlap in their stateaction visitation frequencies , and is exacerbated by sparse rewards . Their method MBML requires training of offline BCQ ( Fujimoto et al. , 2019 ) and reward/dynamics models for each task , which are computationally demanding , whereas our method is end-to-end and model-free . Dorfman & Tamar ( 2020 ) on the other hand , formulate the OMRL as a Bayesian RL ( Ghavamzadeh et al. , 2016 ) problem and employs a probabilistic approach for Bayes-optimal exploration . Therefore we consider their methodology tangential to ours . 3 PRELIMINARIES . 3.1 NOTATIONS AND PROBLEM STATEMENT . We consider fully-observed Markov Decision Process ( MDP ) ( Puterman , 2014 ) in deterministic environments such as MuJoCo ( Todorov et al. , 2012 ) . An MDP can be modeled as M = ( S , A , P , R , ρ0 , γ ) with state space S , action space A , transition function P ( s′|s , a ) , bounded reward function R ( s , a ) , initial state distribution ρ0 ( s ) and discount factor γ ∈ ( 0 , 1 ) . The goal is to find a policy π ( a|s ) to maximize the cumulative discounted reward starting from any state . We introduce the notion of multi-step state marginal of policy π as µtπ ( s ) , which denotes the distribution over state space after rolling out π for t steps starting from state s. The notation Rπ ( s ) denotes the expected reward at state s when following policy π : Rπ ( s ) = Ea∼π [ R ( s , a ) ] . The state-value function ( a.k.a . value function ) and action-value function ( a.k.a Q-function ) are therefore Vπ ( s ) = ∞∑ t=0 γtEst∼µtπ ( s ) [ R ( st ) ] ( 1 ) Qπ ( s , a ) = R ( s , a ) + γEs′∼P ( s′|s , a ) [ Vπ ( s′ ) ] ( 2 ) Q-learning algorithms are implemented by iterating the Bellman optimality operator B , defined as : ( BQ̂ ) ( s , a ) : = R ( s , a ) + γEP ( s′|s , a ) [ max a′ Q̂ ( s′ , a′ ) ] ( 3 ) When the state space is large/continuous , Q̂ is used as a hypothesis from the set of function approximators ( e.g . neural networks ) . In the offline context of this work , given a distribution of tasks p ( T ) where every task is an MDP , we study off-policy meta-learning from collections of static datasets of transitions Di = { ( si , t , ai , t , s′i , t , ri , t ) |t = 1 , ... , N } generated by a set of behavior policies { βi ( a|s ) } associated with each task index i . 
A key underlying assumption of meta-learning is that the tasks share some common structures. By the definition of an MDP, in this paper we restrict our attention to tasks that share the same state and action space but differ in their transition and reward functions. We define the meta-optimization objective as L(θ) = E_{T_i∼p(T)}[L_{T_i}(θ)]   (4) where L_{T_i}(θ) is the objective evaluated on transition samples drawn from task T_i. A common choice of p(T) is the uniform distribution on the set of given tasks {T_i | i = 1, ..., n}. In this case, the meta-training procedure turns into minimizing the average losses across all training tasks: θ̂_meta = arg min_θ (1/n) Σ_{k=1}^{n} E[L_k(θ)]   (5)
The paper studies meta-reinforcement learning in the fully offline setting, and proposes a novel algorithm 'FOCAL'. Given offline datasets for tasks sampled from some prior, the algorithm learns a context encoder using distance-based metrics. The encoder is used for inferring the task-latent $z$, which is used to condition the policy rollout $\pi(a | s, z)$. They demonstrate experiments where FOCAL outperforms baselines like PEARL.
SP:4bafcd10c07834bb7e4008f5d089edda2c7a01e5
An information-theoretic framework for learning models of instance-independent label noise
1 INTRODUCTION . Real-world datasets are inherently noisy . Although there are numerous existing methods for learning classifiers in the presence of label noise ( e.g . Han et al . ( 2018 ) ; Hendrycks et al . ( 2018 ) ; Natarajan et al . ( 2013 ) ; Tanaka et al . ( 2018 ) ) , there is still a gap between empirical success and theoretical understanding of conditions required for these methods to work . For instance-independent label noise , all methods with theoretical performance guarantees require a good estimation of the noise transition matrix as a key indispensable step ( Cheng et al. , 2017 ; Jindal et al. , 2016 ; Patrini et al. , 2017 ; Thekumparampil et al. , 2018 ; Xia et al. , 2019 ) . Recall that for any dataset D with label noise , we can associate to it a noise transition matrix QD , whose entries are conditional probabilities p ( y|z ) that a randomly selected instance of D has the given label y , under the condition that its correct label is z . Many algorithms for estimating QD either require that a small clean subset Dclean of D is provided ( Liu & Tao , 2015 ; Scott , 2015 ) , or assume that the noise model is a mixture model ( Ramaswamy et al. , 2016 ; Yu et al. , 2018 ) , where at least some anchor points are known for every component . Here , “ anchor points ” refer to datapoints belonging to exactly one component of the mixture model almost surely ( cf . Vandermeulen et al . ( 2019 ) ) , while “ clean ” refers to instances with correct labels . Recently , it was shown that the knowledge of anchor points or Dclean is not required for estimating QD . The proposed approach , known as T-Revision ( Xia et al. , 2019 ) , learns a classifier from D and simultaneously identifies anchor-like instances in D , which are used iteratively to estimate QD , which in turn is used to improve the classifier . Hence for T-Revision , a good estimation for QD is inextricably tied to learning a classifier with high classification accuracy . In this paper , we propose a framework for estimating QD solely from D , without requiring anchor points , a clean subset , or even anchor-like instances . In particular , we show that high classification accuracy is not required for a good estimation of QD . Our framework is able to robustly estimate QD at all noise levels , even in extreme scenarios where anchor points are removed from D , or where D is imbalanced . Our key starting point is that Shannon entropy and other related information-theoretic concepts can be defined analogously for datasets with label noise . Suppose we have a discriminator Φ that takes any dataset D′ as its input , and gives a binary output that predicts whether D′ has maximum entropy . Given D , a dataset with label noise , we shall synthesize multiple new datasets D̂ by inserting additional label noise into D , using different noise levels for different label classes . Intuitively , the more label noise that D initially has , the lower the minimum amount of additional label noise we need to insert into D to reach near-maximum entropy . We show that among those datasets D̂ that are predicted by Φ to have maximum entropy , their associated levels of additional label noise can be used to compute a single estimate for QD . Our estimator is statistically consistent : We prove that by repeating this method , the average of the estimates would converge to the true QD . As a concrete realization of this idea , we shall construct Φ using the notion of Local Intrinsic Dimensionality ( LID ) ( Houle , 2013 ; 2017a ; b ) . 
Intuitively , the LID computed at a feature vector v is an approximation of the dimension of a smooth manifold containing v that would “ best ” fit the distribution D in the vicinity of v. LID plays a fundamental role in an important 2018 breakthrough in noise detection ( Ma et al. , 2018c ) , wherein it was empirically shown that sequences of LID scores could be used to distinguish clean datasets from datasets with label noise . Roughly speaking , the training data for Φ consists of LID sequences that correspond to multiple datasets synthesized from D. In particular , we show that Φ can be trained without needing any clean data . Since we are optimizing the predictive accuracy of Φ , rather than optimizing the classification accuracy for D , we also do not require state-of-the-art architectures . For example , in our experiments on the CIFAR-10 dataset ( Krizhevsky et al. , 2009 ) , we found that LID sequences generated by training on shallow “ vanilla ” convolutional neural networks ( CNNs ) , were sufficient for training Φ . Our contributions are summarized as follows : • We introduce an information-theoretic-based framework for estimating the noise transition matrix of any datasetD with instance-independent label noise . We do not make any assumptions on the structure of the noise transition matrix . • We prove that our noise transition matrix estimator is consistent . This is the first-ever estimator that is proven to be consistent without needing to optimize classification accuracy . Notably , our consistency proof does not require anchor points , a clean subset , or any anchor-like instances . • We construct an LID-based discriminator Φ and show experimentally that training a shallow CNN to generate LID sequences is sufficient for obtaining high predictive accuracy for Φ . Using our LID-based discriminator Φ , our proposed estimator outperforms the state-of-the-art methods , especially in the case when anchor-like instances are removed from D. • Given access to a clean subset Dclean , we show that our method can be used to further improve existing competitive estimation methods . 2 PROPOSED INFORMATION-THEORETIC FRAMEWORK . Our framework hinges on a simple yet crucial observation : Datasets with different label noise levels have different entropies . Although the entropy of any given dataset D is ( initially ) unknown to us , we do know , crucially , that a complete uniformly random relabeling of D would yield a new dataset with maximum entropy ( which we call “ baseline datasets ” ) , and we can easily generate multiple such datasets . We could also use partial relabelings to generate a spectrum of new datasets whose entropies range from the entropy of D , to the maximum possible entropy . We call them “ α-increment datasets ” , where α is a parameter that we control . The minimum value αmin for α , such that an α-increment dataset reaches maximum entropy , would depend on the original entropy of D. See Fig . 1 for a visualization of the spectrum of entropies for α-increment datasets and baseline datasets . Our main idea is to train a discriminator Φ that recognizes datasets with maximum entropy , and then use Φ to determine this minimum value αmin . Once this value is estimated , we are then able to estimate QD . Specific realizations of our framework correspond to specific designs for Φ . An illustration of our framework using LID-based discriminators is given in Fig . 
2 ; details on LID-based discriminators can be found in Section 3 , and will be further elaborated in the appendix . Throughout this paper , given any discrete random variablesX , Y , we shall write pX ( x ) and pX|Y ( x|y ) to mean Pr ( X = x ) and Pr ( X = x|Y = y ) respectively . We assume that the reader is familiar with the basics of information theory ; see Cover & Thomas ( 2012 ) for an excellent introduction . 2.1 ENTROPY OF DATASETS WITH LABEL NOISE . Given D a dataset with instance-independent label noise ( DILN ) , let A be its set of all label classes , and let Y ( resp . Z ) be the given ( resp . correct ) label of a randomly selected instance X of D.1 For convenience , we say that D is a DILN with noise model ( Y |Z ; A ) . The noise transition matrix of D is a matrix QD whose ( i , j ) -th entry is qDi , j : = pY |Z ( j , i ) . We shall define the entropy of D by H ( D ) : = − ∑ i∈A pZ ( i ) ∑ j∈A qDi , j log q D i , j . Notice that H ( D ) is precisely the conditional entropy of Y given Z . ( We use the convention that 0 log 0 = 0 . ) Hence , it is easy to prove that 0 ≤ H ( D ) ≤ log |A| . In particular , H ( D ) = 0 if and only if every pair of instances of D in the same class have the same given labels . Note also that D has maximum entropy log |A| if and only if every entry of QD equals 1|A| ( i.e . the given labels of D are completely noisy ) . Thus , H ( D ) could be interpreted as a measure of the label noise level of D. 1Every datapoint of D is a pair ( x , y ) , where x is an instance , and y is its given label , which may differ from the correct label z associated to x . Note that Z is a function of X , and Y is a random function of Z . By instance-independent label noise , we mean that Pr ( Y = y|Z = z , X = x ) = Pr ( Y = y|Z = z ) . A more detailed treatment of DILNs can be found in Appendix A . In particular , a DILN includes its noise model . A derived DILN of D shall mean a DILN D′ with noise model ( Y ′|Z ; A ) for some Y ′ independent of Z , such that both D , D′ have the same underlying set of instances , given in the same sequential order . For example , D′ could be “ derived ” from D by inserting additional instance-independent label noise , in which case D′ can be interpreted as a partial relabeling of D. For convenience , we say that D′ is a Y ′-derived DILN of D . 2.2 SYNTHESIS OF NEW DATASETS FROM D. Let D be a DILN with noise model ( Y |Z ; A ) . Without loss of generality , assume A = { 1 , . . . , k } , assume that D has N instances , and write QD = [ qi , j ] 1≤i , j≤k . The correct labels for the instances are fixed and unknown to us , hence all entries of QD are fixed constants with unknown values . Our goal is to estimate QD . As alluded to earlier , we shall be synthesizing two types of datasets from D. The first type is what we call a baseline dataset , described as follows : Let D be a random dataset obtained from D by replacing the given label of each instance of D by a label chosen uniformly at random from A . Hence D is a random DILN with expected entropy log k ( i.e . maximum entropy ) , which we shall denote by Dmax : = E [ D ] . The noise transition matrix QD = [ Q′i , j ] 1≤i , j≤k of D is a random matrix whose entries Q′i , j = 1 k ( 1 + E ′ i , j ) are random variables , where each “ error ” E ′ i , j is a random variable with mean 0 . Any observed D is called a baseline DILN of D. The second type is what we call an α-increment dataset , where α = ( α1 , . . . , αk ) is a vector whose entries satisfy 0 ≤ αi ≤ 1 for all i . 
Let Dα be obtained from D as follows: For each 1 ≤ i ≤ k, select uniformly at random α_i × 100% of the instances with given label i, and reassign each selected given label to one of the remaining k − 1 classes, chosen uniformly at random. Hence Dα is a random DILN, and its noise transition matrix Q_{Dα} = [Q′′_{i,j}]_{1≤i,j≤k} is a random matrix whose entries Q′′_{i,j} = q_{i,j}(1 − α_j) + Σ_{1≤t≤k, t≠j} q_{i,t} α_t (1/(k − 1)) (1 + E′′_{i,j})   (for all 1 ≤ i, j ≤ k)   (1) are random variables, where each "error" E′′_{i,j} is a random variable with mean 0. Any observed Dα is called an α-increment DILN of D.
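Both definitions above translate directly into code. The sketch below is an illustration under the stated definitions (not the authors' implementation): the first function evaluates H(D) from a noise transition matrix and class priors, and the second synthesizes an α-increment relabeling of a given label vector.

import numpy as np

def dataset_entropy(Q, p_z):
    # H(D) = - sum_i p_Z(i) * sum_j q_{i,j} * log q_{i,j}, using the convention 0 log 0 = 0.
    Q = np.asarray(Q, dtype=float)
    log_Q = np.where(Q > 0, np.log(Q), 0.0)
    return -np.sum(np.asarray(p_z) * np.sum(Q * log_Q, axis=1))

def alpha_increment_labels(labels, alpha, num_classes, rng=None):
    # For each class i, pick alpha[i] * 100% of the instances whose given label is i
    # and reassign each of them uniformly to one of the remaining k - 1 classes.
    rng = np.random.default_rng() if rng is None else rng
    given = np.asarray(labels)
    new_labels = given.copy()
    for i in range(num_classes):
        idx = np.flatnonzero(given == i)
        if len(idx) == 0:
            continue
        n_flip = int(round(alpha[i] * len(idx)))
        chosen = rng.choice(idx, size=n_flip, replace=False)
        offsets = rng.integers(1, num_classes, size=n_flip)  # values in {1, ..., k-1}
        new_labels[chosen] = (i + offsets) % num_classes     # any class except i, uniformly
    return new_labels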
The paper considers the problem of estimating instance-independent label noise. More formally, it is assumed that the true labels for any data point are modified based on a noise transition matrix, and the goal is to estimate this noise transition matrix. The paper proposed an information-theoretic approach for this task, the key idea behind which is to estimate if a particular dataset has maximum entropy with respect to the labels. This estimation problem is solved using a recent discovery that the training dynamics of a neural network can be used to infer the presence of label noise.
SP:e2179d114157200f5a22928ba5ae5b41ff1342f4
An information-theoretic framework for learning models of instance-independent label noise
1 INTRODUCTION . Real-world datasets are inherently noisy . Although there are numerous existing methods for learning classifiers in the presence of label noise ( e.g . Han et al . ( 2018 ) ; Hendrycks et al . ( 2018 ) ; Natarajan et al . ( 2013 ) ; Tanaka et al . ( 2018 ) ) , there is still a gap between empirical success and theoretical understanding of conditions required for these methods to work . For instance-independent label noise , all methods with theoretical performance guarantees require a good estimation of the noise transition matrix as a key indispensable step ( Cheng et al. , 2017 ; Jindal et al. , 2016 ; Patrini et al. , 2017 ; Thekumparampil et al. , 2018 ; Xia et al. , 2019 ) . Recall that for any dataset D with label noise , we can associate to it a noise transition matrix QD , whose entries are conditional probabilities p ( y|z ) that a randomly selected instance of D has the given label y , under the condition that its correct label is z . Many algorithms for estimating QD either require that a small clean subset Dclean of D is provided ( Liu & Tao , 2015 ; Scott , 2015 ) , or assume that the noise model is a mixture model ( Ramaswamy et al. , 2016 ; Yu et al. , 2018 ) , where at least some anchor points are known for every component . Here , “ anchor points ” refer to datapoints belonging to exactly one component of the mixture model almost surely ( cf . Vandermeulen et al . ( 2019 ) ) , while “ clean ” refers to instances with correct labels . Recently , it was shown that the knowledge of anchor points or Dclean is not required for estimating QD . The proposed approach , known as T-Revision ( Xia et al. , 2019 ) , learns a classifier from D and simultaneously identifies anchor-like instances in D , which are used iteratively to estimate QD , which in turn is used to improve the classifier . Hence for T-Revision , a good estimation for QD is inextricably tied to learning a classifier with high classification accuracy . In this paper , we propose a framework for estimating QD solely from D , without requiring anchor points , a clean subset , or even anchor-like instances . In particular , we show that high classification accuracy is not required for a good estimation of QD . Our framework is able to robustly estimate QD at all noise levels , even in extreme scenarios where anchor points are removed from D , or where D is imbalanced . Our key starting point is that Shannon entropy and other related information-theoretic concepts can be defined analogously for datasets with label noise . Suppose we have a discriminator Φ that takes any dataset D′ as its input , and gives a binary output that predicts whether D′ has maximum entropy . Given D , a dataset with label noise , we shall synthesize multiple new datasets D̂ by inserting additional label noise into D , using different noise levels for different label classes . Intuitively , the more label noise that D initially has , the lower the minimum amount of additional label noise we need to insert into D to reach near-maximum entropy . We show that among those datasets D̂ that are predicted by Φ to have maximum entropy , their associated levels of additional label noise can be used to compute a single estimate for QD . Our estimator is statistically consistent : We prove that by repeating this method , the average of the estimates would converge to the true QD . As a concrete realization of this idea , we shall construct Φ using the notion of Local Intrinsic Dimensionality ( LID ) ( Houle , 2013 ; 2017a ; b ) . 
Intuitively , the LID computed at a feature vector v is an approximation of the dimension of a smooth manifold containing v that would “ best ” fit the distribution D in the vicinity of v. LID plays a fundamental role in an important 2018 breakthrough in noise detection ( Ma et al. , 2018c ) , wherein it was empirically shown that sequences of LID scores could be used to distinguish clean datasets from datasets with label noise . Roughly speaking , the training data for Φ consists of LID sequences that correspond to multiple datasets synthesized from D. In particular , we show that Φ can be trained without needing any clean data . Since we are optimizing the predictive accuracy of Φ , rather than optimizing the classification accuracy for D , we also do not require state-of-the-art architectures . For example , in our experiments on the CIFAR-10 dataset ( Krizhevsky et al. , 2009 ) , we found that LID sequences generated by training on shallow “ vanilla ” convolutional neural networks ( CNNs ) , were sufficient for training Φ . Our contributions are summarized as follows : • We introduce an information-theoretic-based framework for estimating the noise transition matrix of any datasetD with instance-independent label noise . We do not make any assumptions on the structure of the noise transition matrix . • We prove that our noise transition matrix estimator is consistent . This is the first-ever estimator that is proven to be consistent without needing to optimize classification accuracy . Notably , our consistency proof does not require anchor points , a clean subset , or any anchor-like instances . • We construct an LID-based discriminator Φ and show experimentally that training a shallow CNN to generate LID sequences is sufficient for obtaining high predictive accuracy for Φ . Using our LID-based discriminator Φ , our proposed estimator outperforms the state-of-the-art methods , especially in the case when anchor-like instances are removed from D. • Given access to a clean subset Dclean , we show that our method can be used to further improve existing competitive estimation methods . 2 PROPOSED INFORMATION-THEORETIC FRAMEWORK . Our framework hinges on a simple yet crucial observation : Datasets with different label noise levels have different entropies . Although the entropy of any given dataset D is ( initially ) unknown to us , we do know , crucially , that a complete uniformly random relabeling of D would yield a new dataset with maximum entropy ( which we call “ baseline datasets ” ) , and we can easily generate multiple such datasets . We could also use partial relabelings to generate a spectrum of new datasets whose entropies range from the entropy of D , to the maximum possible entropy . We call them “ α-increment datasets ” , where α is a parameter that we control . The minimum value αmin for α , such that an α-increment dataset reaches maximum entropy , would depend on the original entropy of D. See Fig . 1 for a visualization of the spectrum of entropies for α-increment datasets and baseline datasets . Our main idea is to train a discriminator Φ that recognizes datasets with maximum entropy , and then use Φ to determine this minimum value αmin . Once this value is estimated , we are then able to estimate QD . Specific realizations of our framework correspond to specific designs for Φ . An illustration of our framework using LID-based discriminators is given in Fig . 
2 ; details on LID-based discriminators can be found in Section 3 , and will be further elaborated in the appendix . Throughout this paper , given any discrete random variablesX , Y , we shall write pX ( x ) and pX|Y ( x|y ) to mean Pr ( X = x ) and Pr ( X = x|Y = y ) respectively . We assume that the reader is familiar with the basics of information theory ; see Cover & Thomas ( 2012 ) for an excellent introduction . 2.1 ENTROPY OF DATASETS WITH LABEL NOISE . Given D a dataset with instance-independent label noise ( DILN ) , let A be its set of all label classes , and let Y ( resp . Z ) be the given ( resp . correct ) label of a randomly selected instance X of D.1 For convenience , we say that D is a DILN with noise model ( Y |Z ; A ) . The noise transition matrix of D is a matrix QD whose ( i , j ) -th entry is qDi , j : = pY |Z ( j , i ) . We shall define the entropy of D by H ( D ) : = − ∑ i∈A pZ ( i ) ∑ j∈A qDi , j log q D i , j . Notice that H ( D ) is precisely the conditional entropy of Y given Z . ( We use the convention that 0 log 0 = 0 . ) Hence , it is easy to prove that 0 ≤ H ( D ) ≤ log |A| . In particular , H ( D ) = 0 if and only if every pair of instances of D in the same class have the same given labels . Note also that D has maximum entropy log |A| if and only if every entry of QD equals 1|A| ( i.e . the given labels of D are completely noisy ) . Thus , H ( D ) could be interpreted as a measure of the label noise level of D. 1Every datapoint of D is a pair ( x , y ) , where x is an instance , and y is its given label , which may differ from the correct label z associated to x . Note that Z is a function of X , and Y is a random function of Z . By instance-independent label noise , we mean that Pr ( Y = y|Z = z , X = x ) = Pr ( Y = y|Z = z ) . A more detailed treatment of DILNs can be found in Appendix A . In particular , a DILN includes its noise model . A derived DILN of D shall mean a DILN D′ with noise model ( Y ′|Z ; A ) for some Y ′ independent of Z , such that both D , D′ have the same underlying set of instances , given in the same sequential order . For example , D′ could be “ derived ” from D by inserting additional instance-independent label noise , in which case D′ can be interpreted as a partial relabeling of D. For convenience , we say that D′ is a Y ′-derived DILN of D . 2.2 SYNTHESIS OF NEW DATASETS FROM D. Let D be a DILN with noise model ( Y |Z ; A ) . Without loss of generality , assume A = { 1 , . . . , k } , assume that D has N instances , and write QD = [ qi , j ] 1≤i , j≤k . The correct labels for the instances are fixed and unknown to us , hence all entries of QD are fixed constants with unknown values . Our goal is to estimate QD . As alluded to earlier , we shall be synthesizing two types of datasets from D. The first type is what we call a baseline dataset , described as follows : Let D be a random dataset obtained from D by replacing the given label of each instance of D by a label chosen uniformly at random from A . Hence D is a random DILN with expected entropy log k ( i.e . maximum entropy ) , which we shall denote by Dmax : = E [ D ] . The noise transition matrix QD = [ Q′i , j ] 1≤i , j≤k of D is a random matrix whose entries Q′i , j = 1 k ( 1 + E ′ i , j ) are random variables , where each “ error ” E ′ i , j is a random variable with mean 0 . Any observed D is called a baseline DILN of D. The second type is what we call an α-increment dataset , where α = ( α1 , . . . , αk ) is a vector whose entries satisfy 0 ≤ αi ≤ 1 for all i . 
Let Dα be obtained from D as follows: For each 1 ≤ i ≤ k, select uniformly at random α_i × 100% of the instances with given label i, and reassign each selected given label to one of the remaining k − 1 classes, chosen uniformly at random. Hence Dα is a random DILN, and its noise transition matrix Q_{Dα} = [Q′′_{i,j}]_{1≤i,j≤k} is a random matrix whose entries Q′′_{i,j} = q_{i,j}(1 − α_j) + Σ_{1≤t≤k, t≠j} q_{i,t} α_t (1/(k − 1)) (1 + E′′_{i,j})   (for all 1 ≤ i, j ≤ k)   (1) are random variables, where each "error" E′′_{i,j} is a random variable with mean 0. Any observed Dα is called an α-increment DILN of D.
The paper aims to develop an information-theoretic framework for learning the underlying model from data with instance-independent label noises. It first formalizes the problem and relevant definitions with information measures, such as conditional entropy and conditional KL divergence, then argues that if the underlying model is well-separated, the proposed algorithm will be consistent in learning the noise-labeling matrix. At the heart of the algorithm is a discriminator being able to measure the labelings' noise level. A class of discriminators based on 'local intrinsic dimension (LID)' is then provided and evaluated empirically. The resulting algorithm performs well on the tested instances compared to existing ones.
SP:e2179d114157200f5a22928ba5ae5b41ff1342f4
Unbiased Teacher for Semi-Supervised Object Detection
1 INTRODUCTION. The availability of large-scale datasets and computational resources has allowed deep neural networks to achieve strong performance on a wide variety of tasks. However, training these networks requires a large number of labeled examples that are expensive to annotate and acquire. As an alternative, Semi-Supervised Learning (SSL) methods have received growing attention (Sohn et al., 2020a; Berthelot et al., 2020; 2019; Laine & Aila, 2017; Tarvainen & Valpola, 2017; Sajjadi et al., 2016; Lee, 2013; Grandvalet & Bengio, 2005). Yet, these advances have primarily focused on image classification, rather than object detection where bounding box annotations require more effort. In this work, we revisit object detection under the SSL setting (Figure 1): an object detector is trained with a single dataset where only a small amount of labeled bounding boxes and a large amount of unlabeled data are provided, or an object detector is jointly trained with a large labeled dataset as well as a large external unlabeled dataset. A straightforward way to address Semi-Supervised Object Detection (SS-OD) is to adapt existing advanced semi-supervised image classification methods (Sohn et al., 2020a). Unfortunately, object detection has some unique characteristics that interact poorly with such methods. For example, the nature of class imbalance in object detection tasks impedes the usage of pseudo-labeling. In object detection, there exist foreground-background imbalance and foreground-class imbalance (see Section 3.3). These imbalances make models trained in SSL settings prone to generating biased predictions. Pseudo-labeling methods, one of the most successful SSL methods in image classification (Lee, 2013; Sohn et al., 2020a), may thus be biased towards dominant and overly confident classes (background) while ignoring minor and less confident classes (foreground). As a result, adding biased pseudo-labels into the semi-supervised training aggravates the class-imbalance issue and introduces severe overfitting. As shown in Figure 2, taking a two-stage object detector as an example, there exists heavy overfitting on the foreground/background classification in the RPN and the multi-class classification in the ROIhead (but not on bounding box regression). (∗Work done partially while interning at Facebook. Code: https://github.com/facebookresearch/unbiased-teacher.) To overcome these issues, we propose a general framework – Unbiased Teacher: an approach that jointly trains a Student and a slowly progressing Teacher in a mutually-beneficial manner, in which the Teacher generates pseudo-labels to train the Student, and the Student gradually updates the Teacher via Exponential Moving Average (EMA), while the Teacher and Student are given different augmented input images (see Figure 3). Inside this framework, (i) we utilize the pseudo-labels as explicit supervision for both the RPN and the ROIhead and thus alleviate the overfitting issues in both. (ii) We also prevent detrimental effects due to noisy pseudo-labels by exploiting the Teacher-Student dual models (see further discussion and analysis in Section 4.2). (iii) With the use of EMA training and the Focal loss (Lin et al., 2017b), we can address the pseudo-labeling bias problem caused by class imbalance and thus improve the quality of pseudo-labels. As a result, our object detector achieves significant performance improvements.
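As a concrete illustration of the EMA step described above, the following is a minimal PyTorch-style sketch of how a Student could update a Teacher of identical architecture; the keep rate of 0.9996 is a typical value for EMA teachers and is only an assumption here, not necessarily the paper's setting.

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, keep_rate: float = 0.9996):
    """One EMA step: theta_teacher <- keep_rate * theta_teacher + (1 - keep_rate) * theta_student.
    Assumes the two modules share an identical architecture, so parameters align by order."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(keep_rate).add_(p_s, alpha=1.0 - keep_rate)

# usage: call after each Student gradient step
teacher = torch.nn.Linear(8, 4)                   # stand-ins for the detector networks
student = torch.nn.Linear(8, 4)
teacher.load_state_dict(student.state_dict())     # duplicate weights after Burn-In
for p in teacher.parameters():
    p.requires_grad_(False)                       # the Teacher is never updated by gradients
ema_update(teacher, student)
```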
We benchmark Unbiased Teacher under the SSL settings built from the MS-COCO and PASCAL VOC datasets, namely COCO-standard, COCO-additional, and VOC. When using only 1% labeled data from MS-COCO (COCO-standard), Unbiased Teacher achieves a 6.8 absolute mAP improvement against the state-of-the-art method, STAC (Sohn et al., 2020b). Unbiased Teacher consistently achieves around 10 absolute mAP improvements when using only 0.5, 1, 2, or 5% of labeled data, compared to the supervised baseline. We highlight the contributions of this paper as follows: • By analyzing object detectors trained with limited supervision, we identify that the nature of class imbalance in object detection tasks impedes the effectiveness of pseudo-labeling methods on the SS-OD task. • We thus propose a simple yet effective method, Unbiased Teacher, to address the pseudo-labeling bias issue caused by class imbalance existing in ground-truth labels and the overfitting issue caused by the scarcity of labeled data. • Our Unbiased Teacher achieves state-of-the-art performance on SS-OD across the COCO-standard, COCO-additional, and VOC datasets. We also provide an ablation study to verify the effectiveness of each proposed component. 2 RELATED WORKS. Semi-Supervised Learning. The majority of the recent SSL methods typically consist of (1) input augmentations and perturbations, and (2) consistency regularization. They regularize the model to be invariant and robust to certain augmentations of the input, which requires the outputs given the original and augmented inputs to be consistent. For example, existing approaches apply conventional data augmentations (Berthelot et al., 2019; Laine & Aila, 2017; Sajjadi et al., 2016; Tarvainen & Valpola, 2017) to generate different transformations of the semantically identical images, perturb the input images along the adversarial direction (Miyato et al., 2018; Yu et al., 2019), utilize multiple networks to generate various views of the same input data (Qiao et al., 2018), mix input data to generate augmented training data and labels (Zhang et al., 2018; Yun et al., 2019; Guo et al., 2019; Hendrycks et al., 2020), or learn augmented prototypes in feature space instead of the image space (Kuo et al., 2020). However, the complexities in the architecture design of object detectors hinder the transfer of existing semi-supervised techniques from image classification to object detection. (Footnote: Note that there have been many works that leverage EMA, e.g., ADAM optimization (Kingma & Ba, 2015), Batch Normalization (Ioffe & Szegedy, 2015), self-supervised learning (He et al., 2020; Grill et al., 2020), and SSL image classification (Tarvainen & Valpola, 2017). We, for the first time, show its effectiveness in combating class imbalance issues and the detrimental effect of pseudo-labels for the object detection task.) Semi-Supervised Object Detection. Object detection is one of the most important computer vision tasks and has gained enormous attention (Lin et al., 2017a; He et al., 2017; Redmon & Farhadi, 2017; Liu et al., 2016). While existing works have made significant progress over the years, they have primarily focused on training object detectors with fully-labeled datasets. On the other hand, there exist several semi-supervised object detection works that focus on training object detectors with a combination of labeled, weakly-labeled, or unlabeled data.
This line of work began even before the resurgence of deep learning (Rosenberg et al., 2005). Later, along with the success of deep learning, Hoffman et al. (2014) and Gao et al. (2019) trained object detectors on data with bounding box labels for some classes and image-level class labels for other classes, enabling detection for categories that lack bounding box annotations. Tang et al. (2016) adapted the image-level classifier of a weakly labeled category (no bounding boxes) into a detector via similarity-based knowledge transfer. Misra et al. (2015) exploited a few sparsely labeled objects and bounding boxes in some video frames and localized unknown objects in the following videos. Unlike their settings, we follow the standard SSL setting and adapt it to the object detection task, in which the training contains a small set of labeled data and another set of completely unlabeled data (i.e., only images). In this setting, Jeong et al. (2019) proposed a consistency-based method, which enforces the predictions of an input image and its flipped version to be consistent. Sohn et al. (2020b) pre-trained a detector using a small amount of labeled data and generated pseudo-labels on unlabeled data to fine-tune the pre-trained detector. Their pseudo-labels are generated only once and are fixed throughout the rest of training. While these methods can improve performance over a model trained only on labeled data, the imbalance issue is not considered in existing SS-OD works. In contrast, our method not only improves the pseudo-label generation model via a teacher-student mutual learning regimen (Sec. 3.2) but also addresses the crucial imbalance issue in the generated pseudo-labels (Sec. 3.3). 3 UNBIASED TEACHER. Problem definition. Our goal is to address object detection in a semi-supervised setting, where a set of labeled images $D_s = \{x_i^s, y_i^s\}_{i=1}^{N_s}$ and a set of unlabeled images $D_u = \{x_i^u\}_{i=1}^{N_u}$ are available for training. $N_s$ and $N_u$ are the numbers of supervised and unsupervised data. For each labeled image $x^s$, the annotations $y^s$ contain the locations, sizes, and object categories of all bounding boxes. Overview. As shown in Figure 3, our Unbiased Teacher consists of two training stages, the Burn-In stage and the Teacher-Student Mutual Learning stage. In the Burn-In stage (Sec. 3.1), we simply train the object detector using the available supervised data to initialize the detector. At the beginning of the Teacher-Student Mutual Learning stage (Sec. 3.2), we duplicate the initialized detector into two models (Teacher and Student models). Our Teacher-Student Mutual Learning stage aims at evolving both the Teacher and Student models via a mutual learning mechanism, where the Teacher generates pseudo-labels to train the Student, and the Student updates the knowledge it learned back to the Teacher; hence, the pseudo-labels used to train the Student itself are improved. Lastly, there exist class-imbalance and foreground-background imbalance problems in object detection, which impede the effectiveness of semi-supervised techniques for image classification (e.g., pseudo-labeling) when used directly on SS-OD. Therefore, in Sec. 3.3, we also discuss how Focal loss (Lin et al., 2017b) and EMA training alleviate the imbalanced pseudo-label issue. 3.1 BURN-IN. It is important to have a good initialization for both the Student and Teacher models, as we will rely on the Teacher to generate pseudo-labels to train the Student in the later stage.
To do so, we first use the available supervised data to optimize our model θ with the supervised loss $L_{sup}$. With the supervised data $D_s = \{x_i^s, y_i^s\}_{i=1}^{N_s}$, the supervised loss of object detection consists of four losses: the RPN classification loss $L_{cls}^{rpn}$, the RPN regression loss $L_{reg}^{rpn}$, the ROI classification loss $L_{cls}^{roi}$, and the ROI regression loss $L_{reg}^{roi}$ (Ren et al., 2015),
$$L_{sup} = \sum_i L_{cls}^{rpn}(x_i^s, y_i^s) + L_{reg}^{rpn}(x_i^s, y_i^s) + L_{cls}^{roi}(x_i^s, y_i^s) + L_{reg}^{roi}(x_i^s, y_i^s). \qquad (1)$$
After Burn-In, we duplicate the trained weights θ for both the Teacher and the Student models ($\theta_t \leftarrow \theta$, $\theta_s \leftarrow \theta$). Starting from this trained detector, we further utilize the unsupervised data to improve the object detector via the following proposed training regimen.
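The two-stage regimen can be summarized in the schematic training loop below; `detector_loss` and `pseudo_label_fn` are hypothetical placeholders standing in for a Faster R-CNN loss of Eq. (1) and for confidence-thresholded pseudo-label generation, and all hyperparameter values are illustrative rather than the paper's.

```python
import copy
import itertools
import torch

def train_unbiased_teacher(student, labeled_loader, unlabeled_loader, optimizer,
                           detector_loss, pseudo_label_fn,
                           burn_in_steps=2000, total_steps=10000,
                           ema_keep_rate=0.9996, unsup_weight=4.0):
    """Schematic two-stage SS-OD loop. `detector_loss(model, images, targets)` is assumed
    to return the summed supervised loss of Eq. (1); `pseudo_label_fn(teacher, images)` is
    assumed to return confidence-thresholded pseudo boxes and labels."""
    labeled_iter = itertools.cycle(labeled_loader)
    unlabeled_iter = itertools.cycle(unlabeled_loader)

    # Stage 1: Burn-In on labeled data only.
    for _ in range(burn_in_steps):
        images, targets = next(labeled_iter)
        loss = detector_loss(student, images, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Duplicate the trained weights: theta_t <- theta, theta_s <- theta.
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)

    # Stage 2: Teacher-Student Mutual Learning.
    for _ in range(burn_in_steps, total_steps):
        images_l, targets_l = next(labeled_iter)
        images_weak, images_strong = next(unlabeled_iter)        # two augmented views
        with torch.no_grad():
            pseudo_targets = pseudo_label_fn(teacher, images_weak)
        loss = (detector_loss(student, images_l, targets_l)
                + unsup_weight * detector_loss(student, images_strong, pseudo_targets))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():                                    # Student -> Teacher via EMA
            for p_t, p_s in zip(teacher.parameters(), student.parameters()):
                p_t.mul_(ema_keep_rate).add_(p_s, alpha=1.0 - ema_keep_rate)
    return teacher, student
```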
This paper focuses on the pseudo-labeling bias issue in semi-supervised object detection (SS-OD), and proposes an Unbiased Teacher framework to address this issue. More specifically, the unbiased teacher framework combines the Mean Teacher model for semi-supervised image classification (Tarvainen and Valpola 2017) and Focal Loss (Lin et al. 2017) for fully supervised object detection to address the bias issue. Experiments on COCO and PASCAL VOC show that the proposed method obtains the state-of-the-art semi-supervised object detection results.
SP:17933143c0bc4e7957ce9ea48d8b50d55dd98570
Unbiased Teacher for Semi-Supervised Object Detection
This work tackles the task of semi-supervised object detection via a teacher-student method. The authors introduced a training regime where a teacher and student network, who share the same initial weights pre-trained on labeled data, jointly learns on unsupervised data. They find that label imbalance in the object detection task can lead to inefficient pseudo-label training under the usual SSL training pipeline, and therefore proposes to train the teacher network via exponential moving average to avoid bias in pseudo-labels.
SP:17933143c0bc4e7957ce9ea48d8b50d55dd98570
Revisiting Prioritized Experience Replay: A Value Perspective
1 INTRODUCTION . Learning from important experiences prevails in nature . In rodent hippocampus , memories with higher importance , such as those associated with rewarding locations or large reward-prediction errors , are replayed more frequently ( Michon et al. , 2019 ; Roscow et al. , 2019 ; Salvetti et al. , 2014 ) . People who have more frequent replay of high-reward associated memories show better performance in memory tasks ( Gruber et al. , 2016 ; Schapiro et al. , 2018 ) . A normative theory suggests that prioritized memory access according to the utility of memory explains hippocampal replay across different memory tasks ( Mattar & Daw , 2018 ) . As accumulating new experiences is costly , utilizing valuable past experiences is a key for efficient learning ( Ólafsdóttir et al. , 2018 ) . Differentiating important experiences from unimportant ones also benefits reinforcement learning ( RL ) algorithms ( Katharopoulos & Fleuret , 2018 ) . Prioritized experience replay ( PER ) ( Schaul et al. , 2016 ) is an experience replay technique built on deep Q-network ( DQN ) ( Mnih et al. , 2015 ) , which weighs the importance of samples by their surprise , the magnitude of the temporal-difference ( TD ) error . As a result , experiences with larger surprise are sampled more frequently . PER significantly improves the learning efficiency of DQN , and has been adopted ( Hessel et al. , 2018 ; Horgan et al. , 2018 ; Kapturowski et al. , 2019 ) and extended ( Daley & Amato , 2019 ; Pan et al. , 2018 ; Schlegel et al. , 2019 ) by various deep RL algorithms . Surprise quantifies the unexpectedness of an experience to a learning agent , and biologically corresponds to the signal of reward prediction error in dopamine system ( Schultz et al. , 1997 ; Glimcher , 2011 ) , which directly shapes the memory of animal and human ( Lisman & Grace , 2005 ; McNamara et al. , 2014 ) . However , how surprise is related to the importance of experience in the context of RL is not well understood . We address this problem from an economic perspective , by linking surprise to value of experience in RL . The goal of RL agent is to maximize the expected cumulative reward , which is achieved through learning from experiences . For Q-learning , an update on an experience will lead to a more accurate prediction of the action-value or a better policy , which increases the expected cumulative reward the agent may get . We define the value of experience as the increase in the expected cumulative reward resulted from updating on the experience ( Mattar & Daw , 2018 ) . The value of experience quantifies the importance of experience from first principles : assuming that the agent is economically rational and has full information about the value of experience , it will choose the most valuable experience to update , which will yield the highest utility . As supplements , we derive two more value metrics , which corresponds to the evaluation improvement value and policy improvement value due to update on an experience . In this work , we mathematically show that these value metrics are upper-bounded by surprise for Q-learning . Therefore , surprise implicitly tracks the value of experience , and accounts for the importance of experience . We further extend our framework to maximum-entropy RL , which augments the reward with an entropy term to encourage exploration ( Haarnoja et al. , 2017 ) . 
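Before the value metrics are formalized below, the surprise-based prioritization that PER uses can be sketched as follows; the exponents and the importance-sampling correction follow the general proportional recipe of Schaul et al. (2016), and the specific values are illustrative.

```python
import numpy as np

class ProportionalReplay:
    """Replay buffer sampling experience e_k with probability proportional to
    (|TD error| + eps)^omega, following the proportional variant of PER."""
    def __init__(self, capacity, omega=0.6, eps=1e-6):
        self.capacity, self.omega, self.eps = capacity, omega, eps
        self.buffer, self.priorities = [], []

    def add(self, experience, td_error):
        if len(self.buffer) >= self.capacity:           # drop the oldest experience
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(experience)                  # e_k = (s_k, a_k, r_k, s'_k)
        self.priorities.append((abs(td_error) + self.eps) ** self.omega)

    def sample(self, batch_size, beta=0.4, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        p = np.asarray(self.priorities)
        p = p / p.sum()
        idx = rng.choice(len(self.buffer), size=batch_size, p=p)
        w = (len(self.buffer) * p[idx]) ** (-beta)      # importance-sampling weights
        return idx, [self.buffer[i] for i in idx], w / w.max()

    def update_priority(self, i, td_error):             # refresh priority after replaying e_i
        self.priorities[i] = (abs(td_error) + self.eps) ** self.omega
```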
We derive the lower and upper bounds of these value metrics for soft Q-learning, which are related to surprise and "on-policyness" of the experience. Experiments in Maze and CartPole support our theoretical results for both tabular and function approximation RL methods, showing that the derived bounds hold in practice. Moreover, we also show that experience replay using the upper bound as priority improves maximum-entropy RL (i.e., soft DQN) in Atari games. 2 MOTIVATION. 2.1 Q-LEARNING AND EXPERIENCE REPLAY. We consider a Markov Decision Process (MDP) defined by a tuple $\{S, A, P, R, \gamma\}$, where S is a finite set of states, A is a finite set of actions, P is the transition function, R is the reward function, and $\gamma \in [0, 1]$ is the discount factor. A policy π of an agent assigns probability $\pi(a|s)$ to each action $a \in A$ given state $s \in S$. The goal is to learn an optimal policy that maximizes the expected discounted return starting from time step t, $G_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \cdots = \sum_{i=0}^{\infty} \gamma^i r_{t+i}$, where $r_t$ is the reward the agent receives at time step t. The value function $v_\pi(s)$ is defined as the expected return starting from state s following policy π, and the Q-function $q_\pi(s, a)$ is the expected return on performing action a in state s and subsequently following policy π. According to Q-learning (Watkins & Dayan, 1992), the optimal policy can be learned through policy iteration: performing policy evaluation and policy improvement interactively and iteratively. For each policy evaluation, we update $Q(s, a)$, an estimate of $q_\pi(s, a)$, by $Q_{new}(s, a) = Q_{old}(s, a) + \alpha\, TD(s, a, r, s')$, where the TD error is $TD(s, a, r, s') = r + \gamma \max_{a'} Q_{old}(s', a') - Q_{old}(s, a)$ and α is the step-size parameter. $Q_{old}$ and $Q_{new}$ denote the estimated Q-function before and after the update, respectively. And for each policy improvement, we update the policy from $\pi_{old}$ to $\pi_{new}$ according to the newly estimated Q-function, $\pi_{new} = \arg\max_a Q_{new}(s, a)$. Standard Q-learning only uses each experience once before it is discarded, which is sample inefficient and can be improved by the experience replay technique (Lin, 1992). We denote the experience that the agent collected at time k by a 4-tuple $e_k = \{s_k, a_k, r_k, s'_k\}$. With experience replay, the experience $e_k$ is stored in the replay buffer and can be accessed multiple times during learning. 2.2 VALUE METRICS OF EXPERIENCE. To quantify the importance of experience, we derive three value metrics of experience. The utility of an update on experience $e_k$ is defined as the value added to the cumulative discounted rewards starting from state $s_k$, after updating on $e_k$. Intuitively, choosing the most valuable experience for the update will yield the highest utility to the agent. We denote such utility as the expected value of backup $EVB(e_k)$ (Mattar & Daw, 2018), $EVB(e_k) = v_{\pi_{new}}(s_k) - v_{\pi_{old}}(s_k) = \sum_a \pi_{new}(a|s_k)\, q_{\pi_{new}}(s_k, a) - \sum_a \pi_{old}(a|s_k)\, q_{\pi_{old}}(s_k, a)$ (1), where $\pi_{old}$, $v_{\pi_{old}}$ and $q_{\pi_{old}}$ are respectively the policy, value function and Q-function before the update, and $\pi_{new}$, $v_{\pi_{new}}$, and $q_{\pi_{new}}$ are those after.
As the update on experience $e_k$ consists of policy evaluation and policy improvement, the value of experience can further be separated into an evaluation improvement value $EIV(e_k)$ and a policy improvement value $PIV(e_k)$ by rewriting (1):
$$EVB(e_k) = \underbrace{\sum_a [\pi_{new}(a|s_k) - \pi_{old}(a|s_k)]\, q_{\pi_{new}}(s_k, a)}_{PIV(e_k)} + \underbrace{\sum_a \pi_{old}(a|s_k)\, [q_{\pi_{new}}(s_k, a) - q_{\pi_{old}}(s_k, a)]}_{EIV(e_k)}, \qquad (2)$$
where $PIV(e_k)$ measures the value improvement due to the change of the policy, and $EIV(e_k)$ captures that due to the change of evaluation. Thus, we have three metrics for the value of experience: EVB, PIV and EIV. 2.3 VALUE METRICS OF EXPERIENCE IN Q-LEARNING. For Q-learning, we use the Q-function to estimate the true action-value function. A backup over an experience $e_k$ consists of policy evaluation with the Bellman operator and greedy policy improvement. As the policy improvement is greedy, we can rewrite the value metrics of experience in simpler forms. EVB can be written as follows from (1), $EVB(e_k) = \max_a Q_{new}(s_k, a) - \max_a Q_{old}(s_k, a)$. (3) Note that EVB here is different from that in Mattar & Daw (2018): in our case, EVB is derived from Q-learning; while in their case, EVB is derived from Dyna, a model-based RL algorithm (Sutton, 1990). Similarly, from (2), PIV can be written as $PIV(e_k) = \max_a Q_{new}(s_k, a) - Q_{new}(s_k, a_{old})$, (4) where $a_{old} = \arg\max_a Q_{old}(s_k, a)$, and EIV can be written as $EIV(e_k) = Q_{new}(s_k, a_{old}) - Q_{old}(s_k, a_{old})$. (5) 2.4 A MOTIVATING EXAMPLE. We illustrate the potential gain of the value of experience in a "Linear GridWorld" environment (Figure 1a). This environment contains N linearly-aligned grids and 4 actions (north, south, east, west). The rewards are sparse: 1 for entering the goal state and 0 elsewhere. The solution for this environment is always choosing east. We use this example to highlight the difference between prioritization strategies. Three agents perform Q-learning updates on the experiences drawn from the same replay buffer, which contains all the (4N) experiences and associated rewards. The first agent replays the experiences uniformly at random, while the other two agents invoke an oracle to prioritize the experiences, greedily selecting the experience with the highest surprise or EVB respectively. In order to learn the optimal policy, agents need to replay the experiences associated with action east in reverse order. For the agent with random replay, the expected number of replays required is $4N^2$ (Figure 1d). For the other two agents, prioritization significantly reduces the number of replays required: prioritization with surprise requires $4N$ replays, and prioritization with EVB only uses N replays, which is optimal (Figure 1d). The main difference is that EVB only prioritizes the experiences that are associated with the optimal policy (Figure 1c), while surprise is sensitive to changes in the value function and will prioritize non-optimal experiences: for example, the agent may choose the experiences associated with south or north in the second update, which are not optimal but have the same surprise as the experience associated with east (Figure 1b). Thus, EVB, which directly quantifies the value of experience, can serve as an optimal priority. 3 UPPER BOUNDS OF VALUE METRICS OF EXPERIENCE IN Q-LEARNING. PER (Schaul et al., 2016) greatly improves the learning efficiency of DQN.
However, the underlying rationale is not well understood. Here, we prove that surprise is an upper bound of the value metrics in Q-learning. Theorem 3.1. The three value metrics of experience $e_k$ in Q-learning ($|EVB|$, $|PIV|$ and $|EIV|$) are bounded by $\alpha\, |TD(s_k, a_k, r_k, s'_k)|$, where α is the step-size parameter. Proof. See Appendix A.1. In Theorem 3.1, we prove that |EVB|, |PIV|, and |EIV| are upper-bounded by the surprise (scaled by the learning step-size) in Q-learning. As surprise intrinsically tracks the evaluation and policy improvements, it can serve as an appropriate importance metric for past experiences. We will further study these relationships in experiments.
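The value metrics of Eqs. (3)-(5) and the bound of Theorem 3.1 are easy to check numerically for tabular Q-learning; the following sketch performs one update on a random Q-table and asserts the bound, with all sizes and constants chosen only for illustration.

```python
import numpy as np

def q_update_and_metrics(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update on experience e = (s, a, r, s'),
    returning (EVB, PIV, EIV, TD) following Eqs. (3)-(5)."""
    Q_old = Q.copy()
    td = r + gamma * Q_old[s_next].max() - Q_old[s, a]
    Q_new = Q_old.copy()
    Q_new[s, a] += alpha * td
    a_old = Q_old[s].argmax()
    evb = Q_new[s].max() - Q_old[s].max()             # Eq. (3)
    piv = Q_new[s].max() - Q_new[s, a_old]            # Eq. (4)
    eiv = Q_new[s, a_old] - Q_old[s, a_old]           # Eq. (5)
    Q[:] = Q_new                                      # commit the update
    return evb, piv, eiv, td

# Empirical check of Theorem 3.1: |EVB|, |PIV|, |EIV| <= alpha * |TD|
rng = np.random.default_rng(0)
for _ in range(10000):
    Q = rng.normal(size=(5, 4))                       # random 5-state, 4-action table
    s, a, s_next = rng.integers(5), rng.integers(4), rng.integers(5)
    evb, piv, eiv, td = q_update_and_metrics(Q, s, a, float(rng.normal()), s_next, alpha=0.1)
    assert max(abs(evb), abs(piv), abs(eiv)) <= 0.1 * abs(td) + 1e-12
print("Theorem 3.1 bound held on all sampled updates")
```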
The idea of prioritized experience replay is revisited, but from a new perspective with new theoretical results. Here, the authors propose the expected value of backup (EVB) as a metric to assess the quality of a sample and its potential improvement on the policy and on the value function. The authors decompose this metric into the benefit attributed to the policy and the benefit to the value function. The authors have two theorems. The first theorem shows that the surprise (aka the temporal difference error) is an upper bound of the EVB in the Q-learning setting. The second theorem shows that the surprise, multiplied by a constant that depends on the policy, is an upper bound on EVB in the soft RL setting. The authors demonstrate that the proposed tighter bound on the EVB *could* yield improvements in the soft RL setting.
SP:bc08cb09ecac842d09fd6e03ecf45b2fce9857b6
Revisiting Prioritized Experience Replay: A Value Perspective
This work aimed to understand prioritized experience replay, a widely used technique to improve learning efficiency for RL agents. The authors proposed three different value metrics to quantify the experience, and showed that they are upper bounded by the TD error (up to a constant). The extension to soft Q-learning was also presented. Finally, the authors showed in experiments that the derived upper bounds hold in Maze and CartPole. They also demonstrated that a new variant based on the upper bound achieved better performance on a subset of Atari games.
SP:bc08cb09ecac842d09fd6e03ecf45b2fce9857b6
AutoBayes: Automated Bayesian Graph Exploration for Nuisance-Robust Inference
1 INTRODUCTION. The great advancement of deep learning techniques based on deep neural networks (DNN) has enabled more practical design of human-machine interfaces (HMI) through the analysis of the user's physiological data (Faust et al., 2018), such as electroencephalogram (EEG) (Lawhern et al., 2018) and electromyogram (EMG) (Atzori et al., 2016). However, such biosignals are highly prone to variation depending on the biological states of each subject (Christoforou et al., 2010). Hence, frequent calibration is often required in typical HMI systems. Toward resolving this issue, subject-invariant methods (Özdenizci et al., 2019b), employing adversarial training (Makhzani et al., 2015; Lample et al., 2017; Creswell et al., 2017) with the Conditional Variational AutoEncoder (A-CVAE) (Louizos et al., 2015; Sohn et al., 2015) shown in Fig. 1(b), have emerged to reduce user calibration for realizing successful HMI systems. Compared to a standard DNN classifier C in Fig. 1(a), integrating additional functional blocks for the encoder E, nuisance-conditional decoder D, and adversary A networks offers excellent subject-invariant performance. The DNN structure may be potentially extended with more functional blocks and more latent nodes as shown in Fig. 1(c). However, such a DNN architecture design may rely on human effort and insight to determine the block connectivity of DNNs. Automation of hyperparameter and architecture exploration in the context of AutoML (Ashok et al., 2017; Brock et al., 2017; Cai et al., 2017; He et al., 2018; Miikkulainen et al., 2019; Real et al., 2017; 2020; Stanley & Miikkulainen, 2002; Zoph et al., 2018) can facilitate DNN designs suited for nuisance-invariant inference. Nevertheless, without proper reasoning, most of the search space for link connectivity will be pointless. In this paper, we propose a systematic automation framework called AutoBayes, which searches for the best inference graph model associated with a Bayesian graph model (a.k.a. Bayesian network) well-suited to reproduce the training datasets. The proposed method automatically formulates various different Bayesian graphs by factorizing the joint probability distribution in terms of data, class label, subject identification (ID), and inherent latent representations. Given Bayesian graphs, some meaningful inference graphs are generated through the Bayes-Ball algorithm (Shachter, 2013) for pruning redundant links to achieve high-accuracy estimation. In order to promote robustness against nuisance variations such as inter-subject/session factors, the explored Bayesian graphs can provide reasoning to use adversarial training with/without variational modeling and latent disentanglement. (Footnote: For example, in speech recognition, nuisance factors such as the speaker's attributes and recording environment may change the task accuracy. For image recognition, ambient light conditions and image sensor conditions may become inherent nuisance factors. In the context of this paper, nuisance variations mainly refer to subject identities and biological states during recording sessions for physiological data learning.) We demonstrate that AutoBayes can achieve excellent performance across various public datasets, in particular with an ensemble stacking of multiple explored graphical models. 2 KEY CONTRIBUTIONS.
At the core of our methodology is the consideration of graphical models that capture the probabilistic relationship between random variables representing the data features X , task labels Y , nuisance variation labels S , and ( potential ) latent representations Z . The ultimate goal is to infer the task label Y from the measured data feature X , which is hindered by the presence of nuisance variations ( e.g. , inter-subject/session variations ) that are ( partially ) labelled by S. One may use a standard DNN to classify Y given X as shown in Fig . 1 ( a ) , without explicitly involving S or Z . Although A-CVAE in Fig . 1 ( b ) may offer nuisance-robust performance through adversarial disentanglement of S from latent Z , there is no guarantee that such a model can perform well across different datasets . It is exemplified in Fig . 2 where A-CVAE outperforms the standard DNN model for some datasets ( QMNIST , Stress , ErrP ) while it does not for the other cases . This may be due to the underlying probabilistic relationship of the data varying across datasets . Our proposed framework can construct justifiable models , achieving higher performance for every dataset , as demonstrated in Fig . 2 . It is verified that significant gain is attainable with ensemble methods of different Bayesian graphs which are explored in our AutoBayes . For example , our method with a relatively shallow architecture achieves 99.61 % accuracy which is close to state-of-the-art performance in QMNIST dataset . The main contributions of this paper over the existing works are five-fold as follows : Algorithm 1 Pseudocode for AutoBayes Framework Require : Nodes set V = [ Y , X , S1 , S2 , . . . , Sn , Z1 , Z2 , . . . , Zm ] , where Y denotes task labels , X is a measurement data , S = [ S1 , S2 , . . . , Sn ] are ( potentially multiple ) semi-supervised nuisance variations , and Z = [ Z1 , Z2 , . . . , Zm ] are ( potentially multiple ) latent vectors Ensure : Semi-supervised training/validation datasets 1 : for all permutations of node factorization from Y to X do 2 : Let B0 be the corresponding Bayesian graph for the permuted full-chain factorization 3 : for all combinations of link pruning on the full-chain Bayesian graph B0 do 4 : Let B be the corresponding pruned Bayesian graph 5 : Apply the Bayes-Ball algorithm on B to build a conditional independency list I 6 : for all permutations of node factorization from X to Y do 7 : Let F0 be the factor graph corresponding to a full-chain conditional probability 8 : Prune all redundant links in F0 based on conditional independency I 9 : Let F be the pruned factor graph 10 : Merge the pruned Bayesian graph B into the pruned factor graph F 11 : Attach an adversary network A to latent nodes Z for Zk ⊥ S ∈ I 12 : Assign an encoder network E for p ( Z| · · · ) in the merged factor graph 13 : Assign a decoder network D for p ( x| · · · ) in the merged factor graph 14 : Assign a nuisance indicator network N for p ( S| · · · ) in the merged factor graph 15 : Assign a classifier network C for p ( y| · · · ) in the merged factor graph 16 : Adversary train the whole DNN structure to minimize a loss function in ( 5 ) 17 : end for . At most ( |V| − 2 ) ! combinations 18 : end for . At most 2|V| ( |V|−1 ) /2 combinations 19 : end for . At most ( |V| − 2 ) ! 
combinations 20 : return the best model having highest task accuracy in validation sets • AutoBayes automatically explores potential graphical models inherent to the data by combinatorial pruning of dependency assumptions ( edges ) and then applies Bayes-Ball to examine various inference strategies , rather than blindly exploring hyperparameters of DNN blocks . • AutoBayes offers a solid reason of how to connect multiple DNN blocks to impose conditioning and adversary censoring for the task classifier , feature encoder , decoder , nuisance indicator and adversary networks , based on an explored Bayesian graph . • The framework is also extensible to multiple latent representations and nuisances factors . • Besides fully-supervised training , AutoBayes can automatically build some relevant graphi- cal models suited for semi-supervised learning . • Multiple graphical models explored in AutoBayes can be efficiently exploited to improve performance by ensemble stacking . We note that this paper relates to some existing literature in AutoML , variational Bayesian inference ( Kingma & Welling , 2013 ; Sohn et al. , 2015 ; Louizos et al. , 2015 ) , adversarial training ( Goodfellow et al. , 2014 ; Dumoulin et al. , 2016 ; Donahue et al. , 2016 ; Makhzani et al. , 2015 ; Lample et al. , 2017 ; Creswell et al. , 2017 ) , and Bayesian network ( Nie et al. , 2018 ; Njah et al. , 2019 ; Rohekar et al. , 2018 ) as addressed in Appendix A.1 in more detail . Nonetheless , AutoBayes is a novel framework that diverges from AutoML , which mostly employs architecture tuning at a micro level . Our work focuses on exploring neural architectures at a macro level , which is not an arbitrary diversion , but a necessary interlude . Our method focuses on the relationships between the connections in a neural network ’ s architecture and the characteristics of the data ( Minsky & Papert , 2017 ) . In addition to the macro-level structure learning of Bayesian network , our approach provides a new perspective in how to involve the adversarial blocks and to exploit multiple models for ensemble stacking . 3 AUTOBAYES . AutoBayes Algorithm : The overall procedure of the AutoBayes algorithm is described in the pseudocode of Algorithm 1 . The AutoBayes automatically constructs non-redundant inference factor graphs given a hypothetical Bayesian graph assumption , through the use of the Bayes-Ball algorithm . Depending on the derived conditional independency and pruned factor graphs , DNN blocks for encoder E , decoder D , classifier C , nuisance estimator N and adversary A are reasonably connected . The entire network is trained with variational Bayesian inference and adversarial training . The Bayes-Ball algorithm ( Shachter , 2013 ) facilitates an automatic pruning of redundant links in inference factor graphs through the analysis of conditional independency . Fig . 3 shows ten Bayes-Ball rules to identify conditional independency . Given a Bayesian graph , we can determine whether two disjoint sets of nodes are independent conditionally on other nodes through a graph separation criterion . Specifically , an undirected path is activated if a Bayes ball can travel along without encountering a stop symbol : in Fig . 3 . If there are no active paths between two nodes when some conditioning nodes are shaded , then those random variables are conditionally independent . Graphical Models : We here focus on 4-node graphs . 
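A small sketch of the search space enumerated by the outer loops of Algorithm 1 (orderings of the factorization from Y to X, followed by link pruning) is given below for the 4-node case; it only enumerates candidate Bayesian graphs and omits the Bayes-Ball and training steps, and the representation of graphs as edge sets is an implementation choice, not the paper's.

```python
import itertools

def candidate_bayesian_graphs(nodes=("Y", "S", "Z", "X")):
    """Enumerate Algorithm 1's hypothesis space: every node ordering from Y to X
    (the (|V|-2)! permutations of the inner nodes) and every subset of the full-chain
    edges kept after pruning. Returns the distinct DAGs as frozensets of edges."""
    graphs = set()
    inner = [n for n in nodes if n not in ("Y", "X")]
    for perm in itertools.permutations(inner):
        order = ("Y",) + perm + ("X",)
        full_chain = [(order[i], order[j])
                      for i in range(len(order)) for j in range(i + 1, len(order))]
        # prune any combination of links from the full-chain Bayesian graph
        for r in range(len(full_chain) + 1):
            for kept in itertools.combinations(full_chain, r):
                graphs.add(frozenset(kept))
    return graphs

graphs = candidate_bayesian_graphs()
print(len(graphs), "distinct candidate Bayesian graphs over {Y, S, Z, X}")
```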
Let p ( y , s , z , x ) denote the joint probability distribution underlying the datasets for the four random variables , i.e. , Y , S , Z , and X . The chain rule can yield the following factorization for a generative model from Y to X ( note that at most 4 ! factorization orders exist including useless ones such as with reversed direction from X to Y ) : p ( y , s , z , x ) = p ( y ) p ( s|y ) p ( z|s , y ) p ( x|z , s , y ) , ( 1 ) which is visualized in Bayesian graph of Fig . 4 ( a ) . The probability conditioned on X can then be factorized , e.g. , as follows ( among 3 ! different orders of inference factorization for four-node graphs ) : p ( y , s , z|x ) = { p ( z|x ) p ( s|z , x ) p ( y|s , z , x ) , Z-first-inference p ( s|x ) p ( z|s , x ) p ( y|z , s , x ) , S-first-inference ( 2 ) which are marginalized to obtain the likelihood : p ( y|x ) = Es , z|x [ p ( y , s , z|x ) ] . The above two scheduling strategies in ( 2 ) are illustrated in factor graph models as in Figs . 4 ( b ) and ( c ) , respectively . The graphical models in Fig . 4 do not impose any assumption of potentially inherent independency in datasets and hence are most generic . However , depending on the underlying independency in datasets , we may be able to prune some edges in those graphs . For example , if the data only follows the simple Markov chain of Y −X , while being independent of S and Z , as shown in Fig . 5 ( a ) , all links except one between X and Y will be unreasonable in inference graphs of Figs . 4 ( b ) and ( c ) , that justifies the standard classifier model in Fig . 1 ( a ) . This implies that more complicated inference models such as A-CVAE can be unnecessarily redundant depending on the dataset . This motivates us to consider an extended AutoML framework which automatically explores the best pair of inference factor graph and corresponding Bayesian graph models matching dataset statistics besides the micro-scale hyperparameter tuning . Methodology : AutoBayes begins with exploring any potential Bayesian graphs by cutting links of the full-chain graph in Fig . 4 ( a ) , imposing possible ( conditional ) independence . We then adopt the Bayes-Ball algorithm on each hypothetical Bayesian graph to examine conditional independence over different inference strategies , e.g. , full-chain Z-/S-first inference graphs in Figs . 4 ( b ) / ( c ) . Applying Bayes-Ball justifies the reasonable pruning of the links in the full-chain inference graphs , and also the potential adversarial censoring when Z is independent of S. This process automatically constructs the connectivity of inference , generative , and adversary blocks with sound reasoning . Consider an example case when the data adheres to the following factorization : p ( y , s , z , x ) = p ( y ) p ( s| y ) p ( z| s , y ) p ( x|z , s , y ) , ( 3 ) where we explicitly indicate conditional independence by slash-cancellation from the full-chain case in ( 1 ) . This corresponds to a Bayesian graphical model illustrated in Fig . 5 ( e ) . Applying the Bayes-Ball algorithm to the Bayesian graph yields the following conditional probability : p ( y , s , z|x ) = p ( z|x ) p ( s|z , x ) p ( y|z , s , x ) , ( 4 ) for the Z-first inference strategy in ( 2 ) . The corresponding factor graph is then given in Fig . 6 ( c ) . Note that the Bayes-Ball also reveals that there is no marginal dependency between Z and S , which provides the reason to use adversarial censoring to suppress nuisance information S in the latent space Z . 
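The conditional-independence checks that Bayes-Ball performs are the same d-separation tests implemented in standard graph libraries, so the claim above can be verified mechanically. The following is a minimal sketch, not the authors' implementation: it assumes, for illustration only, that Model E of Fig. 5(e) corresponds to the edge set Y→Z, Z→X, S→X (our reading of the slash-cancelled factorization in (3)), and uses networkx's d-separation utility to confirm that Z and S are marginally independent but become dependent once X is observed.

import networkx as nx

# Hypothetical Bayesian graph for Model E (edge set is an assumption for illustration).
G = nx.DiGraph()
G.add_edges_from([("Y", "Z"), ("Z", "X"), ("S", "X")])

# d-separation decides the same conditional independencies as Bayes-Ball.
# (In newer networkx releases the function is named is_d_separator.)
print(nx.d_separated(G, {"Z"}, {"S"}, set()))   # True: Z and S are marginally independent
print(nx.d_separated(G, {"Z"}, {"S"}, {"X"}))   # False: conditioning on the collider X couples them

The first result is what licenses adversarial censoring of S from the latent Z; the second shows why the inference factor graph conditioned on X still needs a link between Z and S.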
In consequence , by combining the Bayesian graph and factor graph , we automatically obtain the A-CVAE model in Fig . 1 ( b ) . AutoBayes justifies A-CVAE under the assumption that the data follows the Bayesian model E in Fig . 5 ( e ) . As the true generative model is unknown , AutoBayes explores different Bayesian graphs like in Fig . 5 to search for the most relevant model . Our framework is readily applicable to graphs with more than 4 nodes to represent multiple Y , S , and Z . Models J and K in Fig . 5 are such examples having multiple latent factors Z1 and Z2 . Although the search space for AutoBayes grows rapidly with the number of nodes , most realistic datasets do not require a large number of neural network blocks for macro-level optimization . See Appendix A.2 for more detailed descriptions of some Bayesian graph models used to construct factor graphs like in Fig . 6 . Also see discussions of graphical models suitable for semi-supervised learning in Appendix A.4 . Training : Given a pair of generative graph and inference graph , the corresponding DNN structures will be trained . For the generative graph model K in Fig . 5 ( k ) , for example , one relevant inference graph Kz in Fig . 6 ( k ) results in the overall network structure shown in Fig . 7 , where an adversary network is attached because Z2 is ( conditionally ) independent of S. This 5-node graph model justifies a recent work on partially disentangled A-CVAE by Han et al . ( 2020 ) . Each factor block is realized by a DNN , e.g. , parameterized by θ for pθ ( z1 , z2|x ) , and all of the networks except for the adversarial network are optimized to minimize their corresponding loss functions , including L ( ŷ , y ) , as follows : $\min_{\theta,\phi,\psi,\mu}\ \max_{\eta}\ \mathbb{E}\big[ \mathcal{L}(\hat{y},y) + \lambda_s\mathcal{L}(\hat{s},s) + \lambda_x\mathcal{L}(\hat{x}',x) + \lambda_z\,\mathrm{KL}\big((z_1,z_2)\,\|\,\mathcal{N}(0,I)\big) - \lambda_a\mathcal{L}(\hat{s}',s) \big]$ , ( 5 ) $(z_1,z_2)=p_\theta(x),\ \hat{y}=p_\phi(z_1,z_2),\ \hat{s}=p_\psi(z_1),\ \hat{x}'=p_\mu(z_1),\ \hat{s}'=p_\eta(z_2)$ , ( 6 ) where λ∗ denotes a regularization coefficient , KL is the Kullback–Leibler divergence , and the adversary network pη ( s′|z2 ) is trained to minimize L ( ŝ′ , s ) in an alternating fashion ( see the Adversarial Regularization paragraph below ; a toy sketch of the alternating update follows ) .
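The alternating min-max schedule in ( 5 ) can be realized with two optimizers that are stepped in turn. The sketch below is a minimal PyTorch illustration under simplifying assumptions, not the authors' implementation: the module names enc, cls, nui, dec, adv, the layer sizes, the regularization weights, and the deterministic encoder (so the KL term of ( 5 ) is omitted) are all placeholders chosen for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for the encoder E, classifier C, nuisance indicator N, decoder D, adversary A.
enc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))  # z = [z1, z2], 8 dims each
cls = nn.Linear(16, 10)   # p_phi(y | z1, z2)
nui = nn.Linear(8, 5)     # p_psi(s | z1)
dec = nn.Linear(8, 64)    # p_mu(x | z1)
adv = nn.Linear(8, 5)     # p_eta(s | z2), the adversary

main_params = list(enc.parameters()) + list(cls.parameters()) + list(nui.parameters()) + list(dec.parameters())
opt_main = torch.optim.Adam(main_params, lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
lam_s, lam_x, lam_a = 1.0, 1.0, 0.1   # illustrative regularization coefficients

def train_step(x, y, s):
    # Main step: minimize task / nuisance / reconstruction losses while maximizing the adversary's loss.
    z = enc(x)
    z1, z2 = z[:, :8], z[:, 8:]
    loss_main = (F.cross_entropy(cls(z), y)
                 + lam_s * F.cross_entropy(nui(z1), s)
                 + lam_x * F.mse_loss(dec(z1), x)
                 - lam_a * F.cross_entropy(adv(z2), s))
    opt_main.zero_grad(); loss_main.backward(); opt_main.step()
    # Adversary step: the latent is detached so only the adversary's parameters are updated.
    loss_adv = F.cross_entropy(adv(z2.detach()), s)
    opt_adv.zero_grad(); loss_adv.backward(); opt_adv.step()
    return loss_main.item(), loss_adv.item()

# Toy usage with random data: 32 samples, 64 features, 10 task classes, 5 nuisance classes.
x = torch.randn(32, 64); y = torch.randint(0, 10, (32,)); s = torch.randint(0, 5, (32,))
train_step(x, y, s)

The negative weight on the adversary term in the main loss plays the role of the gradient-reversal style censoring described next, while the separate adversary step keeps pη ( s′|z2 ) a tight estimator of p ( s|z2 ).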
[ Figure : graphical models ( a ) Dz , ( b ) Ds , ( c ) Ez , ( d ) Es , ( e ) Fz , ( f ) Fs over the nodes X , Z , S , Y ] The training objective can be formally understood from a likelihood maximization perspective , in a manner that can be seen as a generalization of the VAE Evidence Lower Bound ( ELBO ) concept ( Kingma & Welling , 2013 ) . Specifically , it can be viewed as the maximization of a variational lower bound of the likelihood pΦ ( x , y , s ) that is implicitly defined and parameterized by the networks , where Φ represents the collective parameters of the network modules ( e.g. , Φ = ( φ , ψ , µ ) in the example of equation 5 ) that specify the generative model pΦ ( x , y , s|z ) , which implies the likelihood pΦ ( x , y , s ) , as given by $p_\Phi(x,y,s) = \int p_\Phi(x,y,s\,|\,z)\,p(z)\,\mathrm{d}z$ . However , since this expression is generally intractable , we introduce qθ ( z|x , y , s ) as a variational approximation of the posterior pΦ ( z|x , y , s ) implied by the generative model ( Kingma & Welling , 2013 ; Ranganath et al. , 2014 ) : $\frac{1}{n}\sum_{i=1}^{n}\log p_\Phi(x_i,y_i,s_i) = \frac{1}{n}\sum_{i=1}^{n}\Big[ \log p_\Phi(x_i,y_i,s_i\,|\,z_i) - \log\frac{q_\theta(z_i|x_i,y_i,s_i)}{p(z_i)} + \log\frac{q_\theta(z_i|x_i,y_i,s_i)}{p_\Phi(z_i|x_i,y_i,s_i)} \Big] \approx \frac{1}{n}\sum_{i=1}^{n}\log p_\Phi(x_i,y_i,s_i\,|\,z_i) - \mathrm{KL}\big(q_\theta(z|x,y,s)\,\|\,p(z)\big) + \mathrm{KL}\big(q_\theta(z|x,y,s)\,\|\,p_\Phi(z|x,y,s)\big) \geq \frac{1}{n}\sum_{i=1}^{n}\log p_\Phi(x_i,y_i,s_i\,|\,z_i) - \mathrm{KL}\big(q_\theta(z|x,y,s)\,\|\,p(z)\big)$ , ( 7 ) where the samples zi ∼ qθ ( z|xi , yi , si ) are drawn for each training tuple ( xi , yi , si ) , and the final inequality follows from the non-negativity of the KL divergence . Ultimately , the minimization of our training loss function corresponds to the maximization of the lower bound in ( 7 ) , which corresponds to maximizing the likelihood of our implicit generative model , while also optimizing the variational posterior qθ ( z|x , y , s ) toward the actual posterior for the latent representation pΦ ( z|x , y , s ) , since the gap in the bound is given by KL ( qθ ( z|x , y , s ) ‖ pΦ ( z|x , y , s ) ) . Further factoring of log pΦ ( x , y , s|z ) yields the multiple loss terms and network modules . Adversarial Regularization : We can utilize adversarial censoring when Z and S should be marginally independent , e.g. , as in Fig . 1 ( b ) and Fig . 7 , in order to reinforce the learning of a representation Z that is disentangled from the nuisance variations S. This is accomplished by introducing an adversarial network that aims to maximize a parameterized approximation q ( s|z ) of the likelihood p ( s|z ) , while this likelihood is also incorporated into the loss for the other modules with a negative weight . The adversarial network , by maximizing the log-likelihood log q ( s|z ) , essentially maximizes a lower bound of the mutual information I ( S ; Z ) , and hence the main network is regularized with an additional term that corresponds to minimizing this estimate of mutual information . This follows since the log-likelihood maximized by the adversarial network is given by E [ log q ( s|z ) ] = I ( S ; Z ) − H ( S ) − KL ( p ( s|z ) ‖ q ( s|z ) ) , ( 8 ) where the entropy H ( S ) is constant . Ensemble Learning : We further introduce ensemble methods to make the best use of all Bayesian graph models explored by the AutoBayes framework without wasting lower-performance models . Ensemble stacked generalization works by stacking the predictions of the base learners in a higher-level learning space , where a meta learner corrects the predictions of the base learners ( Wolpert , 1992 ) . Subsequent to training the base learners , we assemble the posterior probability vectors of all base learners together to improve the prediction . We compare the predictive performance of a logistic regression ( LR ) and a shallow multi-layer perceptron ( MLP ) as the ensemble meta learner to aggregate all inference models . See Appendix A.5 for a more detailed description of the stacked generalization ; a minimal sketch of the stacking step follows .
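A minimal sketch of the stacking step, assuming the held-out posteriors of the explored models are already available as numpy arrays and using scikit-learn's LogisticRegression as the meta learner; the array names and shapes are illustrative and not taken from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_meta_learner(posteriors_val, y_val):
    # posteriors_val: list with one (n_val, n_classes) array of p(y|x) per explored graphical model.
    # y_val: ground-truth labels of the held-out set, shape (n_val,).
    meta_features = np.concatenate(posteriors_val, axis=1)  # stack posterior vectors side by side
    meta = LogisticRegression(max_iter=1000)
    meta.fit(meta_features, y_val)
    return meta

def ensemble_predict(meta, posteriors_test):
    # Apply the meta learner to the stacked test-time posteriors of the same base models.
    return meta.predict(np.concatenate(posteriors_test, axis=1))

# Toy usage: three base models, five classes, one hundred validation samples.
rng = np.random.default_rng(0)
posts = [rng.dirichlet(np.ones(5), size=100) for _ in range(3)]
meta = fit_meta_learner(posts, rng.integers(0, 5, 100))

A shallow MLP meta learner can be swapped in by replacing LogisticRegression with a small classifier of the same fit/predict interface; the stacking of posterior vectors is unchanged.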
The authors present a novel method dubbed AutoBayes that tries to find optimal Bayesian graph models for "nuisance-robust" deep learning. They employ the Bayes-Ball algorithm to construct reasonable inference graphs from a generative model given by iterative search. The corresponding DNN modules are then built/linked and trained using a form of variational inference with adversarial regularization where applicable. The authors also propose the use of an ensembling approach to further improve robustness of the "best" model.
SP:b2e738e35c3c6739200f87341678897be89771f7
AutoBayes: Automated Bayesian Graph Exploration for Nuisance-Robust Inference
1 INTRODUCTION . The great advancement of deep learning techniques based on deep neural networks ( DNN ) has enabled more practical design of human-machine interfaces ( HMI ) through the analysis of the user ’ s physiological data ( Faust et al. , 2018 ) , such as electroencephalogram ( EEG ) ( Lawhern et al. , 2018 ) and electromyogram ( EMG ) ( Atzori et al. , 2016 ) . However , such biosignals are highly prone to variation depending on the biological states of each subject ( Christoforou et al. , 2010 ) . Hence , frequent calibration is often required in typical HMI systems . Toward resolving this issue , subject-invariant methods ( Özdenizci et al. , 2019b ) , employing adversarial training ( Makhzani et al. , 2015 ; Lample et al. , 2017 ; Creswell et al. , 2017 ) with the Conditional Variational AutoEncoder ( A-CVAE ) ( Louizos et al. , 2015 ; Sohn et al. , 2015 ) shown in Fig . 1 ( b ) , have emerged to reduce user calibration for realizing successful HMI systems . Compared to a standard DNN classifier C in Fig . 1 ( a ) , integrating additional functional blocks for encoder E , nuisanceconditional decoder D , and adversary A networks offers excellent subject-invariant performance . The DNN structure may be potentially extended with more functional blocks and more latent nodes as shown in Fig . 1 ( c ) . However , such a DNN architecture design may rely on human effort and insight to determine the block connectivity of DNNs . Automation of hyperparameter and architecture exploration in the context of AutoML ( Ashok et al. , 2017 ; Brock et al. , 2017 ; Cai et al. , 2017 ; He et al. , 2018 ; Miikkulainen et al. , 2019 ; Real et al. , 2017 ; 2020 ; Stanley & Miikkulainen , 2002 ; Zoph et al. , 2018 ) can facilitate DNN design suited for nuisance-invariant inference . Nevertheless , without proper reasoning , most of the search space for link connectivity will be pointless . In this paper , we propose a systematic automation framework called AutoBayes , which searches for the best inference graph model associated with a Bayesian graph model ( also a.k.a . Bayesian network ) well-suited to reproduce the training datasets . The proposed method automatically formulates various different Bayesian graphs by factorizing the joint probability distribution in terms of data , class label , subject identification ( ID ) , and inherent latent representations . Given Bayesian graphs , some meaningful inference graphs are generated through the Bayes-Ball algorithm ( Shachter , 2013 ) for pruning redundant links to achieve high-accuracy estimation . In order to promote robustness against nuisance variations such as inter-subject/session factors , the explored Bayesian graphs can provide 1For example of speech recognition , nuisance factors such as speaker ’ s attributes and recording environment may change the task accuracy . For image recognition , ambient light conditions and image sensor conditions may become inherent nuisance factors . In the context of this paper , nuisance variations mainly refer to subject identities and biological states during recording sessions for physiological data learning . reasoning to use adversarial training with/without variational modeling and latent disentanglement . We demonstrate that AutoBayes can achieve excellent performance across various public datasets , in particular with an ensemble stacking of multiple explored graphical models . 2 KEY CONTRIBUTIONS . 
The paper presents AutoBayes: a new approach for nuisance-robust deep learning which explores different Bayesian graph models to search for the best inference strategy. It automatically builds connections between classifier, encoder, decoder, nuisance estimator and adversary DNN blocks. The approach also enables disentangling the learned representations in terms of nuisance variation and task labels. Different benchmark datasets have been used for evaluation.
SP:b2e738e35c3c6739200f87341678897be89771f7
Delay-Tolerant Local SGD for Efficient Distributed Training
1 INTRODUCTION Data-parallel synchronous SGD is currently the workhorse algorithm for large-scale distributed deep learning tasks with many workers ( e.g . GPUs ) , where each worker calculates the stochastic gradient on local data and synchronizes with the other workers in one training iteration ( Goyal et al. , 2017 ; You et al. , 2017 ; Huo et al. , 2020 ) . However , high communication overheads make it inefficient to train large deep neural networks ( DNNs ) with a large number of workers . Generally speaking , the communication overheads come in two forms : 1 ) high communication delay due to the unstable network or a large number of communication hops , and 2 ) large communication budget caused by the large size of the DNN models with limited network bandwidth . Although communication delay is not a prominent problem for the data center environment , it can severely degrade training efficiency in practical scenarios , e.g . when the workers are geo-distributed or placed under different networks ( Ethernet , cellular networks , Wi-Fi , etc . ) in federated learning ( Konečnỳ et al. , 2016 ) . Existing works to address the communication inefficiency of synchronous SGD can be roughly classified into three categories : 1 ) pipelining ( Pipe-SGD ( Li et al. , 2018 ) ) ; 2 ) gradient compression ( Aji & Heafield , 2017 ; Stich et al. , 2018 ; Alistarh et al. , 2018 ; Yu et al. , 2018 ; Vogels et al. , 2019 ) ; and 3 ) periodic averaging ( also known as Local SGD ) ( Stich , 2019 ; Lin et al. , 2018a ) . In pipelining , the model update uses stale information such that the next iteration does not wait for the synchronization of information in the current iteration to update the model . As the synchronization barrier is removed , pipelining can overlap computation with communication to achieve delay tolerance . Gradient compression reduces the amount of data transferred in each iteration by condensing the gradient with a compressor C ( · ) . Representative methods include scalar quantization ( Alistarh et al. , 2017 ; Wen et al. , 2017 ; Bernstein et al. , 2018 ) , gradient sparsification ( Aji & Heafield , 2017 ; Stich et al. , 2018 ; Alistarh et al. , 2018 ) , and vector quantization ( Yu et al. , 2018 ; Vogels et al. , 2019 ) . Periodic averaging reduces the frequency of communication by synchronizing the workers every p ( larger than 1 ) iterations . Periodic averaging is also shown to be effective for federated learning ( McMahan et al. , 2017 ) . In summary , exiting works handle the high communication delay with pipelining and use gradient compression and periodic averaging to reduce the communication budget . However , all existing methods fail to address both . It is also unclear how the three communication-efficient techniques introduced above can be used jointly without hurting the convergence of SGD . In this paper , we propose a novel framework Overlap Local Computation with Compressed Communication ( i.e. , OLCO3 ) to make distributed training both delay-tolerant AND communication efficient by enabling and improving the combination of the above three communicationefficient techniques . In Table 1 , we compare OLCO3 with the aforementioned works and two succeeding state-of-the-art delay-tolerant methods CoCoD-SGD ( Shen et al. , 2019 ) and OverlapLocalSGD ( Wang et al. , 2020 ) . 
Under the periodic averaging framework , we use p to denote the number of local SGD iterations per communication round , and s to denote the number of communication rounds that the information used in the model update has been outdated for . Let the computation time of one SGD iteration be Tcomput ; then we can pipeline the communication and the computation when the communication delay time is less than sp · Tcomput . For simplicity , we define the delay tolerance of a method as T = sp . Local SGD has to use up-to-date information for the model update ( s = 0 , p ≥ 1 , T = sp = 0 ) . CoCoD-SGD and OverlapLocalSGD combine pipelining and periodic averaging by using stale results from the last communication round ( s = 1 , p ≥ 1 , T = sp = p ) , while our OLCO3 supports various staleness ( s ≥ 1 , p ≥ 1 , T = sp ) and all other features in Table 1 . The main contributions of this paper are summarized as follows : • We propose the novel OLCO3 method , which achieves extreme communication efficiency by addressing both the high communication delay and large communication budget issues . • OLCO3 introduces novel staleness compensation and compression compensation techniques . Convergence analysis shows that OLCO3 achieves the same convergence rate as SGD . • Extensive experiments on deep learning tasks show that OLCO3 significantly outperforms existing delay-tolerant methods in both the communication efficiency and model accuracy . 2 BACKGROUNDS & RELATED WORKS SGD and Pipelining . In distributed training , we minimize the global loss function $f(\cdot) = \frac{1}{K}\sum_{k=1}^{K} f_k(\cdot)$ , where fk ( · ) is the local loss function at worker k ∈ [ K ] . At iteration t , vanilla synchronous SGD updates the model $x_t \in \mathbb{R}^d$ with learning rate ηt via $x_{t+1} = x_t - \frac{\eta_t}{K}\sum_{k=1}^{K}\nabla F_k(x_t;\xi_t^{(k)})$ , where $\xi_t^{(k)}$ is the stochastic sampling variable and $\nabla F_k(x_t;\xi_t^{(k)})$ is the corresponding stochastic gradient at worker k. Throughout this paper , we assume that the stochastic gradient is an unbiased estimator by default , i.e. , $\mathbb{E}_{\xi_t^{(k)}}\nabla F_k(x_t;\xi_t^{(k)}) = \nabla f_k(x_t)$ . Pipe-SGD ( Li et al. , 2018 ) parallelizes the communication and computation of SGD via pipelining . At iteration t , worker k computes the stochastic gradient $\nabla F_k(x_t;\xi_t^{(k)})$ at the current model xt and communicates to get the averaged stochastic gradient $\frac{1}{K}\sum_{k=1}^{K}\nabla F_k(x_t;\xi_t^{(k)})$ . Instead of waiting for the communication to finish , Pipe-SGD concurrently updates the current model with the stale averaged stochastic gradient via $x_{t+1} = x_t - \frac{\eta_t}{K}\sum_{k=1}^{K}\nabla F_k(x_{t-s};\xi_{t-s}^{(k)})$ . Note that Pipe-SGD is different from asynchronous SGD ( Ho et al. , 2013 ; Lian et al. , 2015 ) , which computes the stochastic gradient using a stale model and does not parallelize the computation and communication of a worker . A problem of Pipe-SGD is that its performance deteriorates severely under high communication delay ( large s ) . Pipelining with Periodic Averaging . CoCoD-SGD ( Shen et al. , 2019 ) utilizes periodic averaging to reduce the number of communication rounds and parallelizes the local model update and global model averaging by concurrently conducting $x_t = \frac{1}{K}\sum_{k=1}^{K} x_t^{(k)}$ and $x_{t+p}^{(k)} = x_t^{(k)} - \sum_{\tau=t}^{t+p-1}\eta_\tau\nabla F_k(x_\tau^{(k)};\xi_\tau^{(k)})$ , ( 1 ) in which $x_t^{(k)}$ denotes the local model at worker k , as the local models on different workers are no longer consistent in non-communicating iterations . When the operations in Eq . ( 1 ) finish , the local model is updated via $x_{t+p}^{(k)} \leftarrow x_t + x_{t+p}^{(k)} - x_t^{(k)}$ and $t \leftarrow t + p$ ( a toy simulation of this decoupled update follows ) .
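To make the decoupled update in ( 1 ) concrete, the following is a small single-process PyTorch simulation under simplifying assumptions: synthetic least-squares workers stand in for the loss functions Fk, and the averaging that would run concurrently with the local steps is executed sequentially here, since only the arithmetic of the merge is being illustrated. It is a sketch, not the CoCoD-SGD implementation.

import torch

K, d, p, rounds, lr = 4, 10, 5, 3, 0.1
# Synthetic quadratic objectives: worker k minimizes ||A_k w - b_k||^2 (illustrative stand-in for F_k).
A = [torch.randn(20, d) for _ in range(K)]
b = [torch.randn(20) for _ in range(K)]
w = [torch.zeros(d) for _ in range(K)]          # local models x_t^(k)

def grad(k, wk):
    # Full gradient of worker k's least-squares loss; a stochastic mini-batch could be used instead.
    return 2.0 * A[k].t() @ (A[k] @ wk - b[k]) / A[k].shape[0]

for r in range(rounds):
    w_start = [wk.clone() for wk in w]          # x_t^(k) at the start of the round
    w_avg = torch.stack(w_start).mean(dim=0)    # x_t = (1/K) sum_k x_t^(k), overlapped with the local steps
    for k in range(K):                          # p local SGD steps per worker, as in Eq. (1)
        for _ in range(p):
            w[k] = w[k] - lr * grad(k, w[k])
    # Merge once both finish: x_{t+p}^(k) <- x_t + (x_{t+p}^(k) - x_t^(k))
    w = [w_avg + (w[k] - w_start[k]) for k in range(K)]

Because the averaged model of the previous round is added back to each worker's local progress, the averaging can run in the background of the p local steps, which is exactly what gives the method its delay tolerance of one communication round.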
CoCoD-SGD can tolerate delay up to p SGD iterations ( i.e. , one communication round in periodic averaging ) . OverlapLocalSGD ( Wang et al. , 2020 ) improves CoCoD-SGD by heuristically pulling $x_{t+p}^{(k)}$ back to $x_t$ after the operations in Eq . ( 1 ) via $x_{t+p}^{(k)} \leftarrow (1-\alpha)\,x_{t+p}^{(k)} + \alpha\,x_t$ , where $0 \le \alpha < 1$ . The motivation is to reduce the inconsistency in the local models across workers . OverlapLocalSGD also develops a momentum variant , which maintains a slow momentum buffer for $x_t$ following SlowMo ( Wang et al. , 2019 ) . As both CoCoD-SGD and OverlapLocalSGD communicate the non-compressed local model update , they suffer from a large communication budget in each communication round . Gradient Compression . The gradient vector $v \in \mathbb{R}^d$ can be sent with a much smaller communication budget by applying a compressor C ( · ) . Specifically , scalar quantization rounds 32-bit floating-point gradient components to low-precision values of only several bits . One important such algorithm is scaled SignSGD ( called SignSGD in this paper ) ( Bernstein et al. , 2018 ; Karimireddy et al. , 2019 ) , which uses $C(v) = \frac{\|v\|_1}{d}\,\mathrm{sign}(v)$ to compress v to 1 bit per component . Gradient sparsification only communicates large gradient components . Vector quantization uses a codebook where each code is a vector and quantizes the gradient vector as a linear combination of the vector codes . With the local error feedback technique ( Seide et al. , 2014 ; Lin et al. , 2018b ; Wu et al. , 2018 ; Karimireddy et al. , 2019 ; Zheng et al. , 2019 ) , which adds the previous compression error ( i.e. , $v - C(v)$ ) to the current gradient before compression , gradient compression can achieve comparable performance to full-precision training ( a minimal sketch of such a compressor with error feedback appears below ) . Local error feedback also works for both one-way compression ( compressing the communication from worker to server ) ( Karimireddy et al. , 2019 ) and two-way compression ( compressing the communication between worker and server ) ( Zheng et al. , 2019 ) . Challenges . Simultaneously achieving communication compression with pipelining and periodic averaging requires careful algorithm design because 1 ) pipelining introduces staleness , and 2 ) state-of-the-art vector quantization methods usually require an additional round of communication to solve the compressor C ( · ) , which is unfavorable in high communication delay scenarios . 3 THE PROPOSED FRAMEWORK : OLCO3 In this section , we introduce our new delay-tolerant and communication-efficient training framework OLCO3 . We discuss two variants of OLCO3 : OLCO3-TC for two-way compression in the master-slave communication mode , and OLCO3-VQ adopting commutative vector quantization for both the master-slave and ring all-reduce communication modes . Note that one-way compression is just a special case of OLCO3-TC and we omit it for conciseness . We use “ line x ” to refer to the x-th line of Algorithm 1 . The key differences between OLCO3-TC and OLCO3-VQ are marked in red color . 3.1 OLCO3-TC FOR TWO-WAY COMPRESSION Motivation . OLCO3-TC is presented in the green part of Algorithm 1 for efficient master-slave distributed training . Naively pipelining local computation with compressed communication will break the update rule of momentum SGD for the averaged model $x_t = \frac{1}{K}\sum_{k=1}^{K} x_t^{(k)}$ , leading to non-convergence . Therefore , we consider an auxiliary variable $\tilde{x}_t := \frac{1}{K}\sum_{k=1}^{K} x_t^{(k)} - \frac{1}{K}\sum_{k=1}^{K} e_t^{(k)} - e_t$ , where $e_t^{(k)}$ is the local compression error at worker k and $e_t$ is the compression error at the server .
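The local and server errors entering the auxiliary variable are exactly the residuals $v - C(v)$ left by the compressor. As a concrete reference for the scaled-sign compressor and the error-feedback mechanism described in the Gradient Compression paragraph, here is a minimal PyTorch sketch; the tensor names and the tiny loop are illustrative assumptions, not the paper's code.

import torch

def sign_compress(v):
    # Scaled SignSGD compressor: C(v) = (||v||_1 / d) * sign(v)
    return v.abs().mean() * v.sign()

def compress_with_error_feedback(grad, error):
    # Local error feedback: add the previous compression error to the update before compressing.
    corrected = grad + error
    compressed = sign_compress(corrected)
    new_error = corrected - compressed   # e = v - C(v), kept locally for the next round
    return compressed, new_error

# Toy usage: repeatedly compress a fixed gradient and carry the residual forward.
error = torch.zeros(8)
grad = torch.randn(8)
for step in range(3):
    msg, error = compress_with_error_feedback(grad, error)
    # 'msg' is what would be communicated; 'error' never leaves the worker.

In OLCO3-TC the same pattern is applied twice, once at each worker and once at the server, which is why both a local error and a server error appear in the auxiliary variable above.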
If x̃t can follow the update rule of momentum SGD , then the real trained model xt will gradually approach x̃t as the training converges because the gradient and errors e ( k ) t , et → 0 . Pipelining . For non-communicating iterations , we perform the local update following Local SGD ( line 4 ) . A communicating iteration takes place every p iterations . To pipeline the communication Algorithm 1 Overlap Local Computation with Compressed Communication ( OLCO3 ) on worker k ∈ [ K ] . Green part : OLCO3-TC ; Yellow part : OLCO3-VQ . Best view in color . 1 : Input : period p ≥ 1 , staleness s ≥ 0 , number of iterations T , number of workers K , learning rate { ηt } T−1t=0 , and compression scheme C ( · ) . 2 : Initialize : Local model x ( k ) 0 = x0 , local error e ( k ) 0 = 0 , server error e0 = 0 , local momentum buffer m ( k ) 0 = 0 , and momentum constant 0 < µ < 1 . Variables with negative subscripts are 0 . 3 : for t = 0 , 1 , · · · , T − 1 do 4 : m ( k ) t+1 = µm ( k ) t +∇Fk ( x ( k ) t ; ξ ( k ) t ) , x ( k ) t+1 = x ( k ) t − ηtm ( k ) t+1 // Momentum Local SGD . 5 : if ( t+ 1 ) mod p = 0 then 6 : Maintain or reset the momentum buffer . 7 : ∆ ( k ) t+1 = x ( k ) t+1−p − x ( k ) t+1 + e ( k ) t // Compression compensation . 8 : e ( k ) t+1 = e ( k ) t+2 = · · · = e ( k ) t+p = ∆ ( k ) t+1 − C ( ∆ ( k ) t+1 ) // Compression . 9 : Invoke the communication thread in parallel which does : 10 : ( 1 ) Send C ( ∆ ( k ) t+1 ) to and receive C ( ∆t+1 ) from the server node . 11 : ( 2 ) Server : ∆t+1 = 1K ∑K k=1 C ( ∆ ( k ) t+1 ) + et ; et+1 = et+2 = · · · = et+p = ∆t+1 − C ( ∆t+1 ) . 12 : Block until C ( ∆t+1−sp ) is ready . 13 : xt+1 = xt+1−p − C ( ∆t+1−sp ) 14 : x ( k ) t+1 ← xt+1 − ∑s−1 i=0 C ( ∆ ( k ) t+1−ip ) // Staleness compensation . 15 : ∆ ( k ) t+1 = x ( k ) t+1−p − x ( k ) t+1 + e ( k ) t−sp // Compression compensation . 16 : Invoke the communication thread in parallel which does : 17 : ( 1 ) e ( k ) t+1 = e ( k ) t+2 = · · · = e ( k ) t+p = ∆ ( k ) t+1 − C ( ∆ ( k ) t+1 ) // Compression . 18 : ( 2 ) Average 1K ∑K k=1 C ( ∆ ( k ) t+1 ) by ring all-reduce or master-slave communication . 19 : Block until 1K ∑K k=1 C ( ∆ ( k ) t+1−sp ) and e ( k ) t+1−sp is ready . 20 : xt+1 = xt+1−p − 1K ∑K k=1 C ( ∆ ( k ) t+1−sp ) 21 : x ( k ) t+1 ← xt+1 − ∑s−1 i=0 ∆ ( k ) t+1−ip // Staleness compensation . 22 : end if 23 : end for 24 : Output : averaged model xT = 1K ∑K k=1 x ( k ) T and computation , we compress the local update ∆ ( k ) t+1 ( line 7 ) for efficient communication , and at the same time , try to update the model with a stale compressed global update C ( ∆t+1−sp ) ( line 13 ) that has been outdated for s communication rounds ( i.e. , the staleness is s ) . The momentum buffer can be maintained or reset to zero every p iteration ( line 6 ) . If the delay tolerance T = sp is larger than the actual communication delay , the blocking in line 12 becomes a no-op and there will be no synchronization barrier . The server compresses the sum of the compressed local updates from all workers ( line 11 ) and sends it back , making OLCO3-TC an efficient two-way compression method . Compensation . 
To make the update of the auxiliary variable x̃t follow momentum SGD , we propose to 1 ) compensate staleness with all compressed local updates with staleness ∈ [ 0 , s − 1 ] ( line 14 ) , which requires no communication and allows less stale local update to affect the local model , and 2 ) maintain a local error ( line 8 ) and add it to the next local update before compression ( line 7 ) to compensate the compression error . With the two compensation techniques in OLCO3-TC , Lemma 1 shows that the update rule of x̃t follows momentum SGD with averaged momentum 1K ∑K k=1 m ( k ) t . Lemma 1 . For OLCO3-TC , let x̃t : = 1K ∑K k=1 x ( k ) t − 1K ∑K k=1 e ( k ) t − et−sp , then we have x̃t = x̃t−1 − ηt−1K ∑K k=1 m ( k ) t . Note that there is a “ gradient mismatch ” problem as the local momentum m ( k ) t is computed at the local model x ( k ) t but used in the update rule of the auxiliary variable x̃t ( Karimireddy et al. , 2019 ; Xu et al. , 2020 ) . However , our analysis shows that it does not affect the convergence rate . We have also considered OLCO3 for one-way compression ( i.e. , OLCO3-OC ) as a special case of OLCO3-TC . In OLCO3-OC , the compressor at the server side is identity function and the server error et is 0 . For OLCO3-OC , the auxiliary variable x̃t also follows momentum SGD as stated in Lemma 2 . Lemma 2 . For OLCO3-OC , let x̃t : = 1K ∑K k=1 x ( k ) t − 1K ∑K k=1 e ( k ) t , then we have x̃t = x̃t−1 − ηt−1 K ∑K k=1 m ( k ) t . We can see that the delay tolerance of both OLCO3-TC and OLCO3-OC are T = sp ( s ≥ 1 , p ≥ 1 ) . They have a memory overhead of O ( sd ) for storing information with staleness ∈ [ 0 , s − 1 ] . For most compression schemes such as SignSGD , the computation complexity of C ( · ) is O ( d ) . 3.2 OLCO3-VQ FOR COMMUTATIVE VECTOR QUANTIZATION OLCO3-TC and OLCO3-OC work for compressed communication in the master-slave communication paradigm . In contrast , OLCO3-VQ ( the yellow part of Algorithm 1 ) works for both the master-slave and ring all-reduce communication paradigms . Ring all-reduce minimizes communication congestion by shifting from centralized aggregation in master-slave communication ( Yu et al. , 2018 ) . OLCO3-VQ relies on a state-of-the-art vector quantization scheme , PowerSGD ( Vogels et al. , 2019 ) , which satisfies commutability for compression , i.e. , C ( v1 ) + C ( v2 ) = C ( v1 + v2 ) . However , directly using PowerSGD breaks the delay tolerance of OLCO3 as its compressor C ( · ) needs communication and introduces synchronization barriers . Specifically , PowerSGD invokes communication across all workers to compute a transformation matrix , which is used to project the local updates to the compressed form . Pipelining with Communication-Dependent Compressor . To make OLCO3-VQ delay-tolerant , we further propose a novel compression compensation technique with the stale local error ( line 15 ) . This is in contrast to OLCO3-TC and OLCO3-OC , which use immediate compressed results to calculate the up-to-date local error . As this technique removes the dependency on immediate compressed results , we can move the whole compression and averaging process to the communication thread ( lines 17 and 18 ) . For staleness compensation , OLCO3-VQ uses all uncompressed local updates with staleness ∈ [ 0 , s−1 ] instead of compressed local updates in OLCO3-TC . With the two compensation techniques , Lemma 3 shows that for OLCO3-VQ , the auxiliary variable x̃t associated with the stale local error also follows the momentum SGD update rule . 
Lemma 3 . For OLCO3-VQ , let x̃t : = 1K ∑K k=1 x ( k ) t − 1K ∑K k=1 e ( k ) t−sp , then we have x̃t = x̃t−1 − ηt−1 K ∑K k=1 m ( k ) t . 4 THEORETICAL RESULTS In this section , we provide the convergence results of the OLCO3 variants for both SGD and momentum SGD maintaining momentum ( line 6 of Algorithm 1 ) with common assumptions . As OLCO3OC is a special case of OLCO3-TC , we only analyze OLCO3-TC and OLCO3-VQ . The detailed proofs of Theorems 1 , 2 , 3 , and 4 can be found in Appendix D , E , F , and G respectively . The detailed proofs of Lemma 1 , 2 , and 3 can be found in Appendix C. We use f∗ to denote the optimal loss . Assumption 1 . ( L-Lipschitz Smoothness ) Both the local ( fk ( · ) ) and global ( f ( · ) = 1K ∑K k=1 fk ( · ) ) loss functions are L-smooth , i.e. , ‖∇f ( x ) −∇f ( y ) ‖2 ≤ L‖x− y‖2 , ∀x , y ∈ Rd , ( 2 ) ‖∇fk ( x ) −∇fk ( y ) ‖2 ≤ L‖x− y‖2 , ∀k ∈ [ K ] , ∀x , y ∈ Rd . ( 3 ) Assumption 2 . ( Local Bounded Variance ) The local stochastic gradient ∇Fk ( x ; ξ ) has a bounded variance , i.e. , Eξ∼Dk‖∇Fk ( x ; ξ ) − ∇fk ( x ) ‖22 ≤ σ2 , ∀k ∈ [ K ] , ∀x ∈ Rd . Note that Eξ∼Dk∇Fk ( x ; ξ ) = ∇fk ( x ) . Assumption 3 . ( Bounded Variance across Workers ) The L2 norm of the difference of the local and global full gradient is bounded , i.e. , ‖∇fk ( x ) −∇f ( x ) ‖22 ≤ κ2 , ∀k ∈ [ K ] , ∀x ∈ Rd . κ = 0 leads to i.i.d . data distributions across workers . Assumption 4 . ( Bounded Full Gradient ) The second moment of the global full gradient is bounded , i.e. , ‖∇f ( x ) ‖22 ≤ G2 , ∀x ∈ Rd . Assumption 5 . ( Karimireddy et al. , 2019 ) The compression function C ( · ) : Rd → R is a δapproximate compressor for 0 < δ ≤ 1 if for all v ∈ Rd , ‖C ( v ) − v‖22 ≤ ( 1− δ ) ‖v‖22 . 4.1 SGD Theorem 1 . For OLCO3-VQ with vanilla SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ min { 16L ( s+1 ) p , 1 9L } , then 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 ( f ( x0 ) − f∗ ) ηT + 9ηLσ2 K + 12η2L2 ( s+ 1 ) pσ2 [ 1+ ( 4 ) 14 ( 1− δ ) δ2 ( s+ 1 ) p ] + 36η2L2 ( s+ 1 ) 2p2κ2 ( 1 + 5 ( 1− δ ) δ2 ) + 168 ( 1− δ ) δ2 η2L2 ( s+ 1 ) 2p2G2 . If we set the learning rate η = O ( K 12T− 12 ) and the communication interval p = O ( K− 34T 14 ( s + 1 ) −1 ) , the convergence rate will be O ( K− 12T− 12 ) . The O ( K− 12T− 12 ) rate is the same as synchronous SGD and Local SGD , and achieves linear speedup regarding the number of workers K. Theorem 2 . For OLCO3-TC with vanilla SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ min { 16L ( s+1 ) p , 1 9L } and let h ( δ ) = 1−δ δ2 ( 1 + 4 ( 2−δ ) δ2 ) , then 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 ( f ( x0 ) − f∗ ) ηT + 9ηLσ2 K + 12η2pσ2 ( s+ 1 + 80h ( δ ) p ) + 12η2p2κ2 ( 3 ( s+ 1 ) 2 + 80h ( δ ) ) + 960η2p2G2h ( δ ) . ( 5 ) If we set the learning rate η = O ( K 12T− 12 ) and the communication interval p = O ( K− 34T 14 ( s + 1 ) −1 ) , the convergence rate will be O ( K− 12T− 12 ) . When the data distributions across workers are i.i.d . ( i.e. , κ = 0 ) , if we choose the learning rate η = O ( K 12T− 12 ) and the communication interval p = min { O ( K− 32T 12 ( s + 1 ) −1 ) , O ( K− 34T 14 ) } ( p = O ( K− 34T 14 ) for a enough large T ) instead , the convergence rate will still be O ( K− 12T− 12 ) . Therefore , OLCO3-TC can tackle a larger communication interval p ( O ( K− 3 4T 1 4 ) ) than OLCO3VQ ( O ( K− 34T 14 ( s+ 1 ) −1 ) ) in the i.i.d . setting . But they are the same in the non-i.i.d . setting . 4.2 MOMENTUM SGD Theorem 3 . 
For OLCO3-VQ with Momentum SGD and under Assumptions 1 , 2 , 3 , 4 , 5 , if the learning rate η ≤ min { 1−µ√ 72L ( s+1 ) p , 1−µ9L } and let g ( µ , δ , s , p ) = 15 ( 1−µ ) 2 + 60 ( 1−δ ) ( s+1 ) 2p2 δ2 , then 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 ( 1− µ ) ( f ( x0 ) − f∗ ) ηT + 9Lησ2 ( 1− µ ) K ( 6 ) + 4η2L2 ( 1− µ ) 2 [ ( 4 ( s+ 1 ) p+ g ( µ , δ , s , p ) ) σ2 + ( 12 ( s+ 1 ) 2p2 + g ( µ , δ , s , p ) ) κ2 + g ( µ , δ , s , p ) G2 ] . Theorem 4 . For OLCO3-TC with Momentum SGD and under Assumptions 1 , 2 , 3 , 4 , 5 , if the learning rate η ≤ min { 1−µ√ 72L ( s+1 ) p , 1−µ9L } and h ( δ ) = 1−δ δ2 ( 1 + 4 ( 2−δ ) δ2 ) , then 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 ( 1− µ ) ( f ( x0 ) − f∗ ) ηT + 9Lησ2 ( 1− µ ) K + 6η2L2 ( 1− µ ) 2 [ σ2 ( 9 ( 1− µ ) 2 + 2 ( s+ 1 ) p+ 168h ( δ ) p2 ) + κ2 ( 9 ( 1− µ ) 2 + 6 ( s+ 1 ) 2p2 + 168h ( δ ) p2 ) +G2 ( 9 ( 1− µ ) 2 + 168h ( δ ) p2 ) ] . ( 7 ) The same convergence rate and communication interval p are achieved as in Section 4.1 . 5 EXPERIMENTS We compare the following methods : 1 ) Local SGD ( baseline , NO delay tolerance with T = 0 ) ; 2 ) Pipe-SGD ; 3 ) CoCoD-SGD ; 4 ) OverlapLocalSGD with hyperparameters following Wang et al . ( 2020 ) ; 5 ) OLCO3-OC with SignSGD compression ; 6 ) OLCO3-VQ with PowerSGD compressoin ; 7 ) OLCO3-TC with SignSGD compression . The momentum buffer is maintained ( line 6 of Algorithm 1 ) by default . We do not report the results of Pipe-SGD as it does not converge for the large delay tolerance T we experimented . We train ResNet-110 ( He et al. , 2016 ) with 8 workers for the CIFAR-10 ( Krizhevsky et al. , 2009 ) image classification task , and report the mean and standard deviation of the test accuracy over 3 runs in both the i.i.d . and non-i.i.d . setting . We also train ResNet-50 with 16 workers for the ImageNet ( Russakovsky et al. , 2015 ) image classification task . More detailed descriptions of the experiment configurations can be found in Appendix A.1 . Delay Tolerance with Lower Communication Budget . The training curves of ResNet-110 on CIFAR-10 and ResNet-50 on ImageNet are shown in Figure 1 . We use s = 1 because CoCoD-SGD and OverlapLocalSGD do not support s ≥ 2 . Compared with other delay-tolerant methods , the communication budget of the OLCO3 variants is significantly smaller due to compressed communication . OLCO3 is also robust to communication delay with a large T = sp . Therefore , OLCO3 features extreme communication efficiency with compressed communication , delay tolerance , and low communication frequency due to periodic averaging . Better Model Performance . The two plots in the first row of Figure 1 show that OLCO3-OC and OLCO3-TC outperforms other delay-tolerant methods and are comparable to Local SGD regarding the model accuracy . The performance of OLCO3-VQ is similar to CoCoD-SGD but inferior to OverlapLocalSGD . However , in the non-i.i.d . results reported in Table 2 , all OLCO3 variants outperform existing delay-tolerant methods in accuracy . This is in line with the theoretical results in Theorems 1 , 2 , 3 , and 4 , which show that OLCO3-TC can tackle a larger p than OLCO3-VQ in the i.i.d . setting but the two methods are similar in the non-i.i.d . setting . In the non-i.i.d . setting , all OLCO3 variants perform very close to Local SGD . On average , OLCO3-OC and OLCO3-TC improve the test accuracy of CoCoD-SGD and OverlapLocalSGD by 2.0 % and 0.8 % , respectively . OLCO3-VQ improves CoCoD-SGD and OverlapLocalSGD by 1.6 % and 0.4 % . 
These results empirically confirm that the staleness compensation and compression compensation techniques in OLCO3 are effective . Varying Delay Tolerance . We vary the delay tolerance T with staleness fixed at s = 1 in the left plot of Figure 2 . The goal is to check the robustness of OLCO3 to the different period p. The results show that OLCO3-OC and OLCO3-TC always outperform other delay-tolerant methods , and have more comparable performance to Local SGD . Note that both the OLCO3-OC and OLCO3TC provide a significantly smaller communication budget according to Figure 1 . OLCO3-VQ also outperforms CoCoD-SGD with a much smaller communication budget . Varying Staleness . We vary the staleness s of OLCO3 in the right plot of Figure 2 under fixed delay tolerance T . Local SGD only supports s = 0 with no delay tolerance , and CoCoD-SGD and OverlapLocalSGD only support s = 1 , so there is only one result for them in the figure . When increasing the staleness beyond 2 for OLCO3 , the deterioration of the model performance is very small , especially for OLCO3-VQ . This suggests that the staleness compensation techniques in OLCO3 are effective . The performance peaks at s = 2 because an appropriate staleness may introduce some noise that helps generalization . In comparison , we can not tune staleness s for better performance in CoCoD-SGD and OverlapLocalSGD . 6 CONCLUSION In this work , we proposed a new OLCO3 framework to achieve extreme communication efficiency with high delay tolerance and a low communication budget in distributed training . OLCO3 uses novel staleness compensation and compression compensation techniques , and the theoretical results show that it converges as fast as vanilla synchronous SGD . Experimental results show that OLCO3 significantly outperforms existing delay-tolerant methods in terms of the communication budget and model performance . REFERENCES Alham Fikri Aji and Kenneth Heafield . Sparse communication for distributed gradient descent . arXiv preprint arXiv:1704.05021 , 2017 . Dan Alistarh , Demjan Grubic , Jerry Li , Ryota Tomioka , and Milan Vojnovic . Qsgd : Communication-efficient sgd via gradient quantization and encoding . In Advances in Neural Information Processing Systems , pp . 1709–1720 , 2017 . Dan Alistarh , Torsten Hoefler , Mikael Johansson , Nikola Konstantinov , Sarit Khirirat , and Cédric Renggli . The convergence of sparsified gradient methods . In Advances in Neural Information Processing Systems , pp . 5973–5983 , 2018 . Jeremy Bernstein , Yu-Xiang Wang , Kamyar Azizzadenesheli , and Animashree Anandkumar . signSGD : Compressed optimisation for non-convex problems . In Jennifer Dy and Andreas Krause ( eds . ) , Proceedings of the 35th International Conference on Machine Learning , volume 80 of Proceedings of Machine Learning Research , pp . 560–569 , Stockholmsmässan , Stockholm Sweden , 10–15 Jul 2018 . PMLR . URL http : //proceedings.mlr.press/v80/ bernstein18a.html . Priya Goyal , Piotr Dollár , Ross Girshick , Pieter Noordhuis , Lukasz Wesolowski , Aapo Kyrola , Andrew Tulloch , Yangqing Jia , and Kaiming He . Accurate , large minibatch sgd : Training imagenet in 1 hour . arXiv preprint arXiv:1706.02677 , 2017 . Kaiming He , Xiangyu Zhang , Shaoqing Ren , and Jian Sun . Deep residual learning for image recognition . In Proceedings of the IEEE conference on computer vision and pattern recognition , pp . 770–778 , 2016 . 
Qirong Ho , James Cipar , Henggang Cui , Seunghak Lee , Jin Kyu Kim , Phillip B Gibbons , Garth A Gibson , Greg Ganger , and Eric P Xing . More effective distributed ml via a stale synchronous parallel parameter server . In Advances in neural information processing systems , pp . 1223–1231 , 2013 . Zhouyuan Huo , Bin Gu , and Heng Huang . Large batch training does not need warmup . arXiv preprint arXiv:2002.01576 , 2020 . Sai Praneeth Karimireddy , Quentin Rebjock , Sebastian Stich , and Martin Jaggi . Error feedback fixes signsgd and other gradient compression schemes . In International Conference on Machine Learning , pp . 3252–3261 , 2019 . Jakub Konečnỳ , H Brendan McMahan , Felix X Yu , Peter Richtárik , Ananda Theertha Suresh , and Dave Bacon . Federated learning : Strategies for improving communication efficiency . arXiv preprint arXiv:1610.05492 , 2016 . Alex Krizhevsky , Geoffrey Hinton , et al . Learning multiple layers of features from tiny images . 2009 . Youjie Li , Mingchao Yu , Songze Li , Salman Avestimehr , Nam Sung Kim , and Alexander Schwing . Pipe-sgd : A decentralized pipelined sgd framework for distributed deep net training . In Advances in Neural Information Processing Systems , pp . 8045–8056 , 2018 . Xiangru Lian , Yijun Huang , Yuncheng Li , and Ji Liu . Asynchronous parallel stochastic gradient for nonconvex optimization . In Advances in Neural Information Processing Systems , pp . 2737–2745 , 2015 . Tao Lin , Sebastian U Stich , Kumar Kshitij Patel , and Martin Jaggi . Don ’ t use large mini-batches , use local sgd . arXiv preprint arXiv:1808.07217 , 2018a . Yujun Lin , Song Han , Huizi Mao , Yu Wang , and Bill Dally . Deep gradient compression : Reducing the communication bandwidth for distributed training . In International Conference on Learning Representations , 2018b . URL https : //openreview.net/forum ? id=SkhQHMW0W . Ilya Loshchilov and Frank Hutter . Sgdr : Stochastic gradient descent with warm restarts . arXiv preprint arXiv:1608.03983 , 2016 . Brendan McMahan , Eider Moore , Daniel Ramage , Seth Hampson , and Blaise Aguera y Arcas . Communication-efficient learning of deep networks from decentralized data . In Artificial Intelligence and Statistics , pp . 1273–1282 . PMLR , 2017 . Adam Paszke , Sam Gross , Francisco Massa , Adam Lerer , James Bradbury , Gregory Chanan , Trevor Killeen , Zeming Lin , Natalia Gimelshein , Luca Antiga , et al . Pytorch : An imperative style , highperformance deep learning library . In Advances in Neural Information Processing Systems , pp . 8024–8035 , 2019 . Olga Russakovsky , Jia Deng , Hao Su , Jonathan Krause , Sanjeev Satheesh , Sean Ma , Zhiheng Huang , Andrej Karpathy , Aditya Khosla , Michael Bernstein , Alexander C. Berg , and Li Fei-Fei . ImageNet Large Scale Visual Recognition Challenge . International Journal of Computer Vision ( IJCV ) , 115 ( 3 ) :211–252 , 2015. doi : 10.1007/s11263-015-0816-y . Frank Seide , Hao Fu , Jasha Droppo , Gang Li , and Dong Yu . 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech dnns . In Fifteenth Annual Conference of the International Speech Communication Association , 2014 . Shuheng Shen , Linli Xu , Jingchang Liu , Xianfeng Liang , and Yifei Cheng . Faster distributed deep net training : Computation and communication decoupled stochastic gradient descent . arXiv preprint arXiv:1906.12043 , 2019 . Sebastian U. Stich . Local SGD converges fast and communicates little . 
In International Conference on Learning Representations , 2019 . URL https : //openreview.net/forum ? id= S1g2JnRcFX . Sebastian U Stich , Jean-Baptiste Cordonnier , and Martin Jaggi . Sparsified sgd with memory . In Advances in Neural Information Processing Systems , pp . 4447–4458 , 2018 . Thijs Vogels , Sai Praneeth Karimireddy , and Martin Jaggi . Powersgd : Practical low-rank gradient compression for distributed optimization . In Advances in Neural Information Processing Systems , pp . 14259–14268 , 2019 . Jianyu Wang , Vinayak Tantia , Nicolas Ballas , and Michael Rabbat . Slowmo : Improving communication-efficient distributed sgd with slow momentum . arXiv preprint arXiv:1910.00643 , 2019 . Jianyu Wang , Hao Liang , and Gauri Joshi . Overlap local-sgd : An algorithmic approach to hide communication delays in distributed sgd . In ICASSP 2020-2020 IEEE International Conference on Acoustics , Speech and Signal Processing ( ICASSP ) , pp . 8871–8875 . IEEE , 2020 . Wei Wen , Cong Xu , Feng Yan , Chunpeng Wu , Yandan Wang , Yiran Chen , and Hai Li . Terngrad : Ternary gradients to reduce communication in distributed deep learning . In Advances in neural information processing systems , pp . 1509–1519 , 2017 . Jiaxiang Wu , Weidong Huang , Junzhou Huang , and Tong Zhang . Error compensated quantized sgd and its applications to large-scale distributed optimization . arXiv preprint arXiv:1806.08054 , 2018 . An Xu , Zhouyuan Huo , and Heng Huang . Training faster with compressed gradient . arXiv preprint arXiv:2008.05823 , 2020 . Yang You , Igor Gitman , and Boris Ginsburg . Scaling sgd batch size to 32k for imagenet training . arXiv preprint arXiv:1708.03888 , 6 , 2017 . Mingchao Yu , Zhifeng Lin , Krishna Narra , Songze Li , Youjie Li , Nam Sung Kim , Alexander Schwing , Murali Annavaram , and Salman Avestimehr . Gradiveq : Vector quantization for bandwidth-efficient gradient aggregation in distributed cnn training . In Advances in Neural Information Processing Systems , pp . 5123–5133 , 2018 . Shuai Zheng , Ziyue Huang , and James Kwok . Communication-efficient distributed blockwise momentum sgd with error-feedback . In Advances in Neural Information Processing Systems , pp . 11446–11456 , 2019 . A ADDITIONAL EXPERIMENTAL RESULTS A.1 EXPERIMENTAL SETTING All experiments are implemented with PyTorch ( Paszke et al. , 2019 ) and run on a cluster of Nvidia Tesla P40 GPUs . Each node is connected by 40Gbps Ethernet and equipped with 4 GPUs . CIFAR . We train the ResNet-110 ( He et al. , 2016 ) model with 8 workers on CIFAR-10 ( Krizhevsky et al. , 2009 ) image classification task . We report the mean and standard deviation metrics over 3 runs . The base learning rate is 0.4 and the total batch size is 512 . The momentum constant is 0.9 and the weight decay is 1× 10−4 . The model is trained for 200 epochs with a learning rate decay of 0.1 at epoch 100 and 150 . We linearly warm up the learning rate from 0.05 to 0.4 in the beginning 5 epochs . For OLCO3 with staleness s ∈ { 2 , 4 , 8 } , we set the base learning rate to 0.2 due to increased staleness . The rank of PowerSGD is 4 . Random cropping , random flipping , and standardization are applied as data augmentation techniques . We also train ResNet-56 to explore more combinations of s and p in Appendix A.4 with the same other settings . ImageNet . We train the ResNet-50 model with 16 workers on ImageNet ( Russakovsky et al. , 2015 ) image classification tasks . 
The model is trained for 120 epochs with a cosine learning rate scheduling ( Loshchilov & Hutter , 2016 ) . The base learning rate is 0.4 and the total batch size is 2048 . The momentum constant is 0.9 and the weight decay is 1 × 10−4 . We linearly warm up the learning rate from 0.025 to 0.4 in the beginning 5 epochs . The rank of PowerSGD is 50 . Random cropping , random flipping , and standardization are applied as data augmentation techniques . The Non-i.i.d . Setting . Similar to ( Wang et al. , 2020 ) , we randomly choose fraction α of the whole data , sort the data by the class , and evenly assign them to all workers in order . For the rest fraction ( 1− α ) of the whole data , we randomly and evenly distribute them to all workers ( Figure 3 ) . When 0 < α ≤ 1 is large , the data distribution across workers is non-i.i.d and highly skewed . When α = 0 , it becomes i.i.d . data distribution across workers . In our non-i.i.d . experiments , we choose α = 0.8 . A.2 TRAINING CURVE A.3 TEST ACCURACY Again , Figure 6 empirically confirms the theoretical results in Theorems 1 , 2 , 3 , and 4 that OLCO3TC can handle a larger period p than OLCO3 and that this gap increases with the staleness s in the i.i.d . setting . Note that in the right plot of Figure 2 , the gap between OLCO3-TC and OLCO3-VQ does not increase with s because the period p is decreasing ( the delay tolerance T = sp is fixed ) . B ASSUMPTIONS Assumption 1 . ( L-Lipschitz Smoothness ) Both the local ( fk ( · ) ) and global ( f ( · ) = 1K ∑K k=1 fk ( · ) ) loss functions are L-smooth , i.e. , ‖∇f ( x ) −∇f ( y ) ‖2 ≤ L‖x− y‖2 , ∀x , y ∈ Rd , ( 8 ) ‖∇fk ( x ) −∇fk ( y ) ‖2 ≤ L‖x− y‖2 , ∀k ∈ [ K ] , ∀x , y ∈ Rd . ( 9 ) Assumption 2 . ( Local Bounded Variance ) The local stochastic gradient ∇Fk ( x ; ξ ) has a bounded variance , i.e. , Eξ∼Dk‖∇Fk ( x ; ξ ) −∇fk ( x ) ‖22 ≤ σ2 , ∀k ∈ [ K ] , ∀x ∈ Rd . ( 10 ) Note that Eξ∼Dk∇Fk ( x ; ξ ) = ∇fk ( x ) . Assumption 3 . ( Bounded Variance across Workers ) The L2 norm of the difference of the local and global full gradient is bounded , i.e. , ‖∇fk ( x ) −∇f ( x ) ‖22 ≤ κ2 , ∀k ∈ [ K ] , ∀x ∈ Rd , ( 11 ) where κ = 0 leads to i.i.d . data distributions across workers . Assumption 4 . ( Bounded Full Gradient ) The second moment of the global full gradient is bounded , i.e. , ‖∇f ( x ) ‖22 ≤ G2 , ∀x ∈ Rd . ( 12 ) Assumption 5 . ( δ-approximate compressor ) The compression function C ( · ) : Rd → R is a δapproximate compressor for 0 < δ ≤ 1 if for all v ∈ Rd , ‖C ( v ) − v‖22 ≤ ( 1− δ ) ‖v‖22 . ( 13 ) C BASIC LEMMAS Lemma 1 . For OLCO3-TC , let x̃t : = 1K ∑K k=1 x ( k ) t − 1K ∑K k=1 e ( k ) t − et−sp , then we have x̃t = x̃t−1 − ηt−1 K K∑ k=1 m ( k ) t . ( 14 ) Proof . 
For t = np where n is some integer , x̃np = 1 K K∑ k=1 x ( k ) np − 1 K K∑ k=1 e ( k ) np − e ( n−s ) p = xnp − 1 K K∑ k=1 s−1∑ i=0 C ( ∆ ( k ) ( n−i ) p ) − 1 K K∑ k=1 e ( k ) np − e ( n−s ) p = x ( n−1 ) p − C ( ∆ ( n−s ) p ) − 1 K K∑ k=1 s−1∑ i=0 C ( ∆ ( k ) ( n−i ) p ) − 1 K K∑ k=1 e ( k ) np − e ( n−s ) p = 1 K K∑ k=1 ( x ( k ) ( n−1 ) p + s−1∑ i=0 C ( ∆ ( k ) ( n−1−i ) p ) ) − C ( ∆ ( n−s ) p ) − 1 K K∑ k=1 s−1∑ i=0 C ( ∆ ( k ) ( n−i ) p ) − 1 K K∑ k=1 e ( k ) np − e ( n−s ) p = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 ∆ ( k ) np + 1 K K∑ k=1 C ( ∆ ( k ) ( n−s ) p ) − C ( ∆ ( n−s ) p ) − e ( n−s ) p = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 ∆ ( k ) np − e ( n−s ) p−1 = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 np−1∑ τ= ( n−1 ) p ητm ( k ) τ+1 − 1 K K∑ k=1 e ( k ) np−1 − e ( n−s ) p−1 = x̃np−1 − 1 K K∑ k=1 ηnp−1m ( k ) np . ( 15 ) For t 6= np , x̃t = 1 K K∑ k=1 x ( k ) t − 1 K K∑ k=1 e ( k ) t − et−sp = 1 K K∑ k=1 x ( k ) t − 1 K K∑ k=1 e ( k ) t−1 − et−sp−1 = x̃t−1 − ηt−1 K K∑ k=1 m ( k ) t . ( 16 ) Lemma 2 . For OLCO3-OC , let x̃t : = 1K ∑K k=1 x ( k ) t − 1K ∑K k=1 e ( k ) t , then we have x̃t = x̃t−1 − ηt−1 K K∑ k=1 m ( k ) t . ( 17 ) Proof . For t = np where n is some integer , x̃np = 1 K K∑ k=1 x ( k ) np − 1 K K∑ k=1 e ( k ) np = xnp − 1 K K∑ k=1 s−1∑ i=0 C ( ∆ ( k ) ( n−i ) p ) − 1 K K∑ k=1 e ( k ) np = x ( n−1 ) p − 1 K K∑ k=1 C ( ∆ ( k ) ( n−s ) p ) − 1 K K∑ k=1 s−1∑ i=0 C ( ∆ ( k ) ( n−i ) p ) − 1 K K∑ k=1 e ( k ) np = 1 K K∑ k=1 ( x ( k ) ( n−1 ) p + s−1∑ i=0 C ( ∆ ( k ) ( n−1−i ) p ) ) − 1 K K∑ k=1 C ( ∆ ( k ) ( n−s ) p ) − 1 K K∑ k=1 s−1∑ i=0 C ( ∆ ( k ) ( n−i ) p ) − 1 K K∑ k=1 e ( k ) np = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 C ( ∆ ( k ) np ) − 1 K K∑ k=1 e ( k ) np = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 ∆ ( k ) np = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 np−1∑ τ= ( n−1 ) p ητm ( k ) τ+1 − 1 K K∑ k=1 e ( k ) np−1 = x̃np−1 − 1 K K∑ k=1 ηnp−1m ( k ) np . ( 18 ) For t 6= np , x̃t = 1 K K∑ k=1 x ( k ) t − 1 K K∑ k=1 e ( k ) t = 1 K K∑ k=1 x ( k ) t − 1 K K∑ k=1 e ( k ) t−1 = x̃t−1 − ηt−1 K K∑ k=1 m ( k ) t . ( 19 ) Lemma 3 . For OLCO3-VQ , let x̃t : = 1K ∑K k=1 x ( k ) t − 1K ∑K k=1 e ( k ) t−sp , then we have x̃t = x̃t−1 − ηt−1 K K∑ k=1 m ( k ) t . ( 20 ) Proof . For t = np where n is some integer , x̃np = 1 K K∑ k=1 x ( k ) np − 1 K K∑ k=1 e ( k ) ( n−s ) p = xnp − 1 K K∑ k=1 s−1∑ i=0 ∆ ( k ) ( n−i ) p − 1 K K∑ k=1 e ( k ) ( n−s ) p = x ( n−1 ) p − 1 K K∑ k=1 C ( ∆ ( k ) ( n−s ) p ) − 1 K K∑ k=1 s−1∑ i=0 ∆ ( k ) ( n−i ) p − 1 K K∑ k=1 e ( k ) ( n−s ) p = 1 K K∑ k=1 ( x ( k ) ( n−1 ) p + s−1∑ i=0 ∆ ( k ) ( n−1−i ) p ) − 1 K K∑ k=1 C ( ∆ ( k ) ( n−s ) p ) − 1 K K∑ k=1 s−1∑ i=0 ∆ ( k ) ( n−i ) p − 1 K K∑ k=1 e ( k ) ( n−s ) p = 1 K K∑ k=1 ( x ( k ) ( n−1 ) p + s−1∑ i=0 ∆ ( k ) ( n−1−i ) p ) − 1 K K∑ k=1 ∆ ( k ) ( n−s ) p − 1 K K∑ k=1 s−1∑ i=0 ∆ ( k ) ( n−i ) p = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 ∆ ( k ) np = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 np−1∑ τ= ( n−1 ) p ητm ( k ) τ+1 − 1 K K∑ k=1 e ( k ) np−1−sp = x̃np−1 − 1 K K∑ k=1 ηnp−1m ( k ) np . ( 21 ) For t 6= np , x̃t = 1 K K∑ k=1 x ( k ) t − 1 K K∑ k=1 e ( k ) t−sp = 1 K K∑ k=1 x ( k ) t − 1 K K∑ k=1 e ( k ) t−1−sp = x̃t−1 − ηt−1 K K∑ k=1 m ( k ) t . ( 22 ) D PROOF OF THEOREM 1 Lemma 4 . For OLCO3-VQ with vanilla SGD and under Assumptions 2 , 3 , 4 , and 5 , the local error satisfies E‖e ( k ) t ‖22 ≤ 12 ( 1− δ ) δ2 p2η2 ( σ2 + κ2 +G2 ) . ( 23 ) Proof . 
First we have E‖∇F ( x ( k ) t ; ξ ( k ) t ) ‖22 ≤ 3E‖∇F ( x ( k ) t ; ξ ( k ) t ) −∇fk ( x ( k ) t ) ‖2 + 3E‖∇fk ( x ( k ) t ) −∇f ( x ( k ) t ) ‖22 + 3E‖∇f ( x ( k ) t ) ‖22 ≤ 3σ2 + 3κ2 + 3G2 . ( 24 ) Let St = b tpc , E‖e ( k ) t ‖22 = E‖e ( k ) Stp ‖22 = E‖C ( ∆ ( k ) Stp ) −∆ ( k ) Stp‖ 2 2 ≤ ( 1− δ ) E‖∆ ( k ) Stp ‖22 = ( 1− δ ) E‖ Stp−1∑ t′= ( St−1 ) p η∇F ( x ( k ) t′ ; ξ ( k ) t′ ) + e ( k ) ( St−s−1 ) p‖ 2 2 ≤ ( 1− δ ) ( 1 + ρ ) E‖e ( k ) ( St−s−1 ) p‖ 2 2 + ( 1 + δ ) ( 1 + 1 ρ ) E‖ Stp−1∑ t′= ( St−1 ) p η∇F ( x ( k ) t′ ; ξ ( k ) t′ ) ‖ 2 2 ≤ ( 1− δ ) ( 1 + ρ ) E‖e ( k ) ( St−s−1 ) p‖ 2 2 + 3 ( 1 + δ ) ( 1 + 1 ρ ) p2η2 ( σ2 + κ2 +G2 ) . ( 25 ) Therefore , E‖e ( k ) t ‖22 ≤ 3 ( 1− δ ) ( 1 + 1 ρ ) p2η2 ( σ2 + κ2 +G2 ) bSts c−1∑ i=0 [ ( 1− δ ) ( 1 + ρ ) ] i ≤ 3 ( 1− δ ) ( 1 + 1ρ ) 1− ( 1− δ ) ( 1 + ρ ) p2η2 ( σ2 + κ2 +G2 ) . ( 26 ) Let ρ = δ2 ( 1−δ ) such that 1 + 1 ρ = 2−δ δ ≤ 2 δ , then E‖e ( k ) t ‖22 ≤ 12 ( 1−δ ) δ2 p 2η2 ( σ2 + κ2 +G2 ) . Lemma 5 . For OLCO3-VQ with vanilla SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ 16L ( s+1 ) p , we have 1 KT T−1∑ t=0 K−1∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 3η2 ( s+ 1 ) pσ2 ( 1 + 12 ( 1− δ ) δ2 ( s+ 1 ) p ) + 9η2 ( s+ 1 ) 2p2κ2 ( 1 + 4 ( 1− δ ) δ2 ) + 36 ( 1− δ ) δ2 η2 ( s+ 1 ) 2p2G2 . ( 27 ) Proof . Let St = b tpc , 1 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 = 1 K K∑ k=1 E‖ 1 K K∑ k′=1 ( − s−1∑ i=0 ∆ ( k′ ) ( St−i ) p − t−1∑ t′=Stp η∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) ) − ( − s−1∑ i=0 ∆ ( k ) ( St−i ) p − t−1∑ t′=Stp η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ) − 1 K K∑ k′=1 e ( k ′ ) t−sp‖22 = 1 K K∑ k=1 E‖ − 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p η∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) + t−1∑ t′= ( St−s ) p η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) − 1 K K∑ k′=1 e ( k ′ ) t−sp − 1 K K∑ k′=1 s−1∑ i=0 e ( k ′ ) ( St−i−s−1 ) p + s−1∑ i=0 e ( k ) ( St−i−s−1 ) p‖ 2 2 ≤ 2η 2 K K∑ k=1 E‖ − 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) + t−1∑ t′= ( St−s ) p ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ‖ 2 2 + 2 K K∑ k=1 E‖ − 1 K K∑ k′=1 e ( k ′ ) t−sp − 1 K K∑ k′=1 s−1∑ i=0 e ( k ′ ) ( St−i−s−1 ) p + s−1∑ i=0 e ( k ) ( St−i−s−1 ) p‖ 2 2 . 
( 28 ) The first term is bounded by 2η2 K K∑ k=1 E‖ − 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) + t−1∑ t′= ( St−s ) p ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ‖ 2 2 ≤ 2η 2 K K∑ k=1 E ∥∥∥∥∥∥ t−1∑ t′= ( St−s ) p ( − 1 K K∑ k′=1 ( ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) −∇fk′ ( x ( k′ ) t′ ) ) + ( ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) −∇fk ( x ( k ) t′ ) ) ) ∥∥∥∥∥∥ 2 2 + 2η2 K K∑ k=1 E‖ t−1∑ t′= ( St−s ) p ( − 1 K K∑ k′=1 ∇fk′ ( x ( k ′ ) t′ ) +∇fk ( x ( k ) t′ ) ) ‖ 2 2 = 2η2 K K∑ k=1 t−1∑ t′= ( St−s ) p E‖ − 1 K K∑ k′=1 ( ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) −∇fk′ ( x ( k′ ) t′ ) ) + ( ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) −∇fk ( x ( k ) t′ ) ) ‖ 2 2 + 2η2 K K∑ k=1 E‖ t−1∑ t′= ( St−s ) p ( − 1 K K∑ k′=1 ∇fk′ ( x ( k ′ ) t′ ) +∇fk ( x ( k ) t′ ) ) ‖ 2 2 ≤ 2η 2 K K∑ k=1 t−1∑ t′= ( St−s ) p E‖∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) −∇fk ( x ( k ) t′ ) ‖ 2 2 + 2η2 K K∑ k=1 t−1∑ t′= ( St−s ) p ( t− ( St − s ) p ) E‖ − 1 K K∑ k′=1 ∇fk′ ( x ( k ′ ) t′ ) +∇fk ( x ( k ) t′ ) ‖ 2 2 ≤ 2η2 ( s+ 1 ) pσ2 + 2η 2 ( s+ 1 ) p K t−1∑ t′=t− ( s+1 ) p K∑ k=1 E‖ − 1 K K∑ k′=1 ∇fk′ ( x ( k ′ ) t′ ) +∇fk ( x ( k ) t′ ) ‖ 2 2 , ( 29 ) where the third inequality follows 1K ∑K k=1 ‖ 1 K ∑K k′=1 ak′ − ak‖22 = 1 K ∑K k=1 ‖ak‖22 − ‖ 1K ∑K k=1 ak‖22 ≤ 1 K ∑K k=1 ‖ak‖22 , and 1 K K∑ k=1 E‖ − 1 K K∑ k′=1 ∇fk′ ( xk ′ t ) +∇fk ( x ( k ) t ) ‖22 = 3 K K∑ k=1 E [ ‖∇fk ( x ( k ) t ) −∇fk ( x̃t ) ‖22 + ‖∇fk ( x̃t ) −∇f ( x̃t ) ‖22 + ‖∇f ( x̃t ) − 1 K K∑ k′=1 ∇fk′ ( xk ′ t ) ‖22 ] ≤ 3 K K∑ k=1 E [ L2‖x̃t − x ( k ) t ‖22 + κ2 + 1 K K∑ k′=1 ‖∇fk′ ( x̃t ) −∇fk′ ( xk ′ t ) ‖22 ] ≤ 6L 2 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 + 3κ2 . ( 30 ) The second term is bounded by 2 K K∑ k=1 E‖ − 1 K K∑ k′=1 e ( k ′ ) t−sp − 1 K K∑ k′=1 s−1∑ i=0 e ( k ′ ) ( St−i−s−1 ) p + s−1∑ i=0 e ( k ) ( St−i−s−1 ) p‖ 2 2 ≤ 2 ( 1 + s ) K K∑ k=1 E‖ 1 K K∑ k′=1 e ( k ′ ) ( St−s ) p‖ 2 2 + 2 ( 1 + 1s ) K K∑ k=1 E‖ − 1 K K∑ k′=1 s−1∑ i=0 e ( k ′ ) ( St−i−s−1 ) p + s−1∑ i=0 e ( k ) ( St−i−s−1 ) p‖ 2 2 ≤ 2 ( 1 + s ) K K∑ k=1 E‖e ( k ) ( St−s ) p‖ 2 2 + 2 ( 1 + 1s ) K K∑ k=1 E‖ s−1∑ i=0 e ( k ) ( St−i−s−1 ) p‖ 2 2 ≤ 2 ( 1 + s ) K K∑ k=1 E‖e ( k ) ( St−s ) p‖ 2 2 + 2 ( 1 + s ) K K∑ k=1 s−1∑ i=0 E‖e ( k ) ( St−i−s−1 ) p‖ 2 2 = 2 ( s+ 1 ) K K∑ k=1 s∑ i=0 E‖e ( k ) ( St−i−s ) p‖ 2 2 ≤ 24 ( 1− δ ) δ2 ( s+ 1 ) 2p2η2 ( σ2 + κ2 +G2 ) , ( 31 ) where the last inequality follows Lemma 4 . Combine the bounds of the first term and the second term , we have 1 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 2η2 ( s+ 1 ) pσ2 + 2η2 ( s+ 1 ) p t−1∑ t′=t− ( s+1 ) p ( 6L2 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 + 3κ2 ) + 24 ( 1− δ ) δ2 ( s+ 1 ) 2p2η2 ( σ2 + κ2 +G2 ) ≤ 2η2 ( s+ 1 ) pσ2 ( 1 + 12 ( 1− δ ) δ2 ( s+ 1 ) p ) + 6η2 ( s+ 1 ) 2p2κ2 ( 1 + 4 ( 1− δ ) δ2 ) + 24 ( 1− δ ) δ2 η2 ( s+ 1 ) 2p2G2 + 12η2L2 ( s+ 1 ) p t−1∑ t′=t− ( s+1 ) p 1 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 . ( 32 ) Sum the above inequality from t = 0 to t = T − 1 and divide it by T , 1 KT T−1∑ t=0 K−1∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 2η2 ( s+ 1 ) pσ2 ( 1 + 12 ( 1− δ ) δ2 ( s+ 1 ) p ) + 6η2 ( s+ 1 ) 2p2κ2 ( 1 + 4 ( 1− δ ) δ2 ) + 24 ( 1− δ ) δ2 η2 ( s+ 1 ) 2p2G2 + 12η2L2 ( s+ 1 ) 2p2 · 1 KT T−1∑ t=0 K−1∑ k=1 E‖x̃t − x ( k ) t ‖22 . ( 33 ) Therefore , 1 KT T−1∑ t=0 K−1∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 2η2 ( s+ 1 ) pσ2 ( 1 + 12 ( 1−δ ) δ2 ( s+ 1 ) p ) + 6η 2 ( s+ 1 ) 2p2κ2 ( 1 + 4 ( 1−δ ) δ2 ) + 24 ( 1−δ ) δ2 η 2 ( s+ 1 ) 2p2G2 1− 12η2L2 ( s+ 1 ) 2p2 . ( 34 ) If we choose η ≤ 16L ( s+1 ) p , 1 KT T−1∑ t=0 K−1∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 3η2 ( s+ 1 ) pσ2 ( 1 + 12 ( 1− δ ) δ2 ( s+ 1 ) p ) + 9η2 ( s+ 1 ) 2p2κ2 ( 1 + 4 ( 1− δ ) δ2 ) + 36 ( 1− δ ) δ2 η2 ( s+ 1 ) 2p2G2 . 
( 35 ) Theorem 1 . For OLCO3-VQ with vanilla SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ min { 16L ( s+1 ) p , 1 9L } , then 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 ( f ( x0 ) − f∗ ) ηT + 9ηLσ2 K + 12η2L2 ( s+ 1 ) pσ2 ( 1 + 14 ( 1− δ ) δ2 ( s+ 1 ) p ) + 36η2L2 ( s+ 1 ) 2p2κ2 ( 1 + 5 ( 1− δ ) δ2 ) + 168 ( 1− δ ) δ2 η2L2 ( s+ 1 ) 2p2G2 . ( 36 ) Proof . According to Assumption 1 , Etf ( x̃t+1 ) − f ( x̃t ) ≤ Et〈∇f ( x̃t ) , x̃t+1 − x̃t〉+ L 2 Et‖x̃t+1 − x̃t‖22 = −η〈∇f ( x̃t ) , 1 K K∑ k=1 ∇fk ( x ( k ) t ) 〉+ Lη2 2 Et‖ 1 K K∑ k=1 ∇Fk ( x ( k ) t ; ξ ( k ) t ) ‖22 . ( 37 ) For the first term , −〈∇f ( x̃t ) , 1 K K∑ k=1 ∇fk ( x ( k ) t ) 〉 = −‖∇f ( x̃t ) ‖22 − 〈∇f ( x̃t ) , 1 K K∑ k=1 ( ∇fk ( x ( k ) t ) −∇fk ( x̃t ) ) 〉 ≤ −1 2 ‖∇f ( x̃t ) ‖22 + 1 2 ‖ 1 K K∑ k=1 ( ∇fk ( x ( k ) t ) −∇fk ( x̃t ) ) ‖22 ≤ −1 2 ‖∇f ( x̃t ) ‖22 + L2 2K K∑ k=1 ‖x̃t − x ( k ) t ‖22 , ( 38 ) where the first equality follows that∇f ( x̃t ) = 1K ∑K k=1∇fk ( x̃t ) . For the second term , Et‖ 1 K K∑ k=1 ∇Fk ( x ( k ) t ; ξ ( k ) t ) ‖22 = Et‖ 1 K K∑ k=1 ( ∇Fk ( x ( k ) t ; ξ ( k ) t ) −∇fk ( x ( k ) t ) ) + 1 K K∑ k=1 ( ∇fk ( x ( k ) t ) −∇fk ( x̃t ) ) +∇f ( x̃t ) ‖22 ≤ 3Et‖ 1 K K∑ k=1 ( ∇Fk ( x ( k ) t ; ξ ( k ) t ) −∇fk ( x ( k ) t ) ) ‖22 + 3‖ 1 K K∑ k=1 ( ∇fk ( x ( k ) t ) −∇fk ( x̃t ) ) ‖22 + 3‖∇f ( x̃t ) ‖22 ≤ 3σ 2 K + 3L2 K K∑ k=1 ‖x̃t − x ( k ) t ‖22 + 3‖∇f ( x̃t ) ‖22 . ( 39 ) Combine them and we have Et ( x̃t+1 ) − f ( x̃t ) ≤ − η 2 ( 1− 3ηL ) ‖∇f ( x̃t ) ‖22 + ηL2 2 ( 1 + 3ηL ) 1 K K∑ k=1 ‖x̃t − x ( k ) t ‖22 + 3η2Lσ2 2K . ( 40 ) If we choose η ≤ 19L , Et ( x̃t+1 ) − f ( x̃t ) ≤ − η 3 ‖∇f ( x̃t ) ‖22 + 2ηL2 3 1 K K∑ k=1 ‖x̃t − x ( k ) t ‖22 + 3η2Lσ2 2K . ( 41 ) Then for the averaged parameters 1K ∑K k=1 x ( k ) t , ‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 2‖∇f ( 1 K K∑ k=1 x ( k ) t ) −∇f ( x̃t ) ‖22 + 2‖∇f ( x̃t ) ‖22 ≤ 2L2‖ 1 K K∑ k=1 e ( k ) t−sp‖22 + 2‖∇f ( x̃t ) ‖22 ≤ 2L2 K K∑ k=1 ‖e ( k ) t−sp‖22 + 2‖∇f ( x̃t ) ‖22 ≤ 6 [ f ( x̃t ) − Etf ( x̃t+1 ) ] η + 9ηLσ2 K + 4L2 K K∑ k=1 ‖x̃t − x ( k ) t ‖22 + 2L2 K K∑ k=1 ‖e ( k ) t−sp‖22 . ( 42 ) Take total expectation , sum from t = 0 to t = T − 1 , and rearrange , 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 [ f ( x̃0 ) − Ef ( x̃T ) ] ηT + 9ηLσ2 K + 2L2 KT T−1∑ t=0 K∑ k=1 E‖e ( k ) t−sp‖22 + 4L2 KT T−1∑ t=0 K∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 6 [ f ( x̃0 ) − Ef ( x̃T ) ] ηT + 9ηLσ2 K + 24 ( 1− δ ) δ2 p2η2L2 ( σ2 + κ2 +G2 ) + 12η2L2 ( s+ 1 ) pσ2 ( 1 + 12 ( 1− δ ) δ2 ( s+ 1 ) p ) + 36η2L2 ( s+ 1 ) 2p2κ2 ( 1 + 4 ( 1− δ ) δ2 ) + 144 ( 1− δ ) δ2 η2L2 ( s+ 1 ) 2p2G2 ≤ 6 ( f ( x0 ) − f∗ ) ηT + 9ηLσ2 K + 12η2L2 ( s+ 1 ) pσ2 ( 1 + 14 ( 1− δ ) δ2 ( s+ 1 ) p ) + 36η2L2 ( s+ 1 ) 2p2κ2 ( 1 + 5 ( 1− δ ) δ2 ) + 168 ( 1− δ ) δ2 η2L2 ( s+ 1 ) 2p2G2 , ( 43 ) where the second inequality follows Lemma 4 and 5 . E PROOF OF THEOREM 2 Lemma 6 . For OLCO3-TC with vanilla SGD and under Assumptions 2 , 3 , 4 , and 5 , the local error satisfies E‖e ( k ) t ‖22 ≤ 12 ( 1− δ ) δ2 p2η2 ( σ2 + κ2 +G2 ) . ( 44 ) Proof . Same as the proof of Lemma 4 , except that e ( k ) ( St−s−1 ) p is replaced with e ( k ) ( St−1 ) p. Lemma 7 . For OLCO3-TC with vanilla SGD and under Assumptions 2 , 3 , 4 , and 5 , the server error satisfies E‖et‖22 ≤ 96 ( 2− δ ) ( 1− δ ) δ4 p2η2 ( σ2 + κ2 +G2 ) . ( 45 ) Proof . 
Let St = b tpc , E‖ 1 K K∑ k=1 C ( ∆ ( k ) Stp ) ‖ 2 2 ≤ 2E‖ 1 K K∑ k=1 C ( ∆ ( k ) Stp ) − 1 K K∑ k=1 ∆ ( k ) Stp ‖22 + 2E‖ 1 K K∑ k=1 ∆ ( k ) Stp ‖22 ≤ 2 K K∑ k=1 E‖C ( ∆ ( k ) Stp ) −∆ ( k ) Stp ‖22 + 2 K K∑ k=1 E‖∆ ( k ) Stp‖ 2 2 ≤ 2 ( 2− δ ) K K∑ k=1 E‖∆ ( k ) Stp‖ 2 2 . ( 46 ) Following the proof of Lemma 4 we have E‖∆ ( k ) Stp‖ 2 2 ≤ 3 ( 1+ 1ρ ) 1− ( 1−δ ) ( 1+ρ ) p 2η2 ( σ2+κ2+G2 ) . Therefore , E‖et‖22 = E‖eStp‖22 ≤ ( 1− δ ) E‖ 1 K K∑ k=1 C ( ∆ ( k ) Stp ) + e ( St−1 ) p‖ 2 ≤ ( 1− δ ) ( 1 + 1 ρ ) E‖ 1 K K∑ k=1 C ( ∆ ( k ) Stp ) ‖ 2 2 + ( 1− δ ) ( 1 + ρ ) E‖e ( St−1 ) p‖ 2 2 ≤ 2 ( 2− δ ) ( 1− δ ) ( 1 + 1 ρ ) 1 K K∑ k=1 E‖∆ ( k ) Stp‖ 2 2 + ( 1− δ ) ( 1 + ρ ) E‖e ( St−1 ) p‖ 2 2 ≤ 2 ( 2− δ ) ( 1− δ ) ( 1 + 1 ρ ) 3 ( 1 + 1ρ ) 1− ( 1− δ ) ( 1 + ρ ) p2η2 ( σ2 + κ2 +G2 ) + ( 1− δ ) ( 1 + ρ ) E‖e ( St−1 ) p‖ 2 2 ≤ 6 ( 2− δ ) ( 1− δ ) ( 1 + 1ρ ) 2 [ 1− ( 1− δ ) ( 1 + ρ ) ] 2 p2η2 ( σ2 + κ2 +G2 ) . ( 47 ) Let ρ = δ2 ( 1−δ ) such that 1+ 1 ρ = 2−δ δ ≤ 2 δ , then E‖e ( k ) t ‖22 ≤ 96 ( 2−δ ) ( 1−δ ) δ4 p 2η2 ( σ2 +κ2 +G2 ) . Lemma 8 . For OLCO3-TC with vanilla SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ 16L ( s+1 ) p and let h ( δ ) = 1−δ δ2 ( 1 + 4 ( 2−δ ) δ2 ) , we have 1 KT T−1∑ t=0 K−1∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 3η2pσ2 ( s+ 1 + 72h ( δ ) p ) + 9η2p2κ2 ( ( s+ 1 ) 2 + 24h ( δ ) ) + 216h ( δ ) η2p2G2 . ( 48 ) Proof . Let St = b tpc , 1 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 = 1 K K∑ k=1 E‖ 1 K K∑ k′=1 ( − s−1∑ i=0 C ( ∆ ( k ′ ) ( St−i ) p ) − t−1∑ t′=Stp η∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) ) − ( − s−1∑ i=0 C ( ∆ ( k ) ( St−i ) p ) − t−1∑ t′=Stp η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ) − 1 K K∑ k′=1 e ( k ′ ) t − et−sp‖22 , ( 49 ) where s−1∑ i=0 C ( ∆ ( k ) ( St−i ) p ) = s−1∑ i=0 [ ∆ ( k ) ( St−i ) p − e ( k ) ( St−i ) p ] = s−1∑ i=0 [ ( St−i ) p−1∑ t′= ( St−i−1 ) p η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) + e ( k ) ( St−i−1 ) p − e ( k ) ( St−i ) p ] = s−1∑ i=0 ( St−i ) p−1∑ t′= ( St−i−1 ) p ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) + e ( k ) ( St−s ) p − e ( k ) Stp . ( 50 ) Therefore 1 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 = 1 K K∑ k=1 E‖ − 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p η∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) + t−1∑ t′= ( St−s ) p η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) − 1 K K∑ k′=1 ( e ( k ′ ) ( St−s ) p − e ( k′ ) Stp ) + ( e ( k ) ( St−s ) p − e ( k ) Stp ) − 1 K K∑ k′=1 e ( k ′ ) t − et−sp‖22 ≤ 2η 2 K K∑ k=1 E‖ − 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p η∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) + t−1∑ t′= ( St−s ) p η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ‖ 2 2 + 2 K K∑ k=1 E‖ − 1 K K∑ k′=1 e ( k ′ ) ( St−s ) p + e ( k ) ( St−s ) p − e ( k ) Stp − e ( St−s ) p‖ 2 2 , ( 51 ) where the first term can be bounded following Eqs . ( 29,30 ) . The second term satisfies 2 K K∑ k=1 E‖ − 1 K K∑ k′=1 e ( k ′ ) ( St−s ) p + e ( k ) ( St−s ) p − e ( k ) Stp − e ( St−s ) p‖ 2 2 ≤ 6 K K∑ k=1 E‖ − 1 K K∑ k′=1 e ( k ′ ) ( St−s ) p + e ( k ) ( St−s ) p‖ 2 2 + 6 K K∑ k=1 E‖e ( k ) Stp‖ 2 2 + 6 K K∑ k=1 E‖e ( St−s ) p‖ 2 2 ≤ 6 K K∑ k=1 E‖e ( k ) ( St−s ) p‖ 2 2 + 6 K K∑ k=1 E‖e ( k ) Stp‖ 2 2 + 6 K K∑ k=1 E‖e ( St−s ) p‖ 2 2 ≤ 1− δ δ2 ( 1 + 4 ( 2− δ ) δ2 ) · 144p2η2 ( σ2 + κ2 +G2 ) , ( 52 ) where the last inequality follows Lemmas 6 and 7 . Let h ( δ ) = 1−δδ2 ( 1+ 4 ( 2−δ ) δ2 ) . 
Combine the above two inequalities and we have 1 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 2η2 ( s+ 1 ) pσ2 + 2η2 ( s+ 1 ) p t−1∑ t′=t− ( s+1 ) p ( 6L2 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 + 3κ2 ) + 144h ( δ ) p2η2 ( σ2 + κ2 +G2 ) ≤ 2η2pσ2 ( s+ 1 + 72h ( δ ) p ) + 6η2p2κ2 ( ( s+ 1 ) 2 + 24h ( δ ) ) + 144h ( δ ) p2η2G2 + 12η2L2 ( s+ 1 ) p t−1∑ t′=t− ( s+1 ) p 1 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 . ( 53 ) Following Eqs . ( 33,34,35 ) , 1 KT T−1∑ t=0 K−1∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 3η2pσ2 ( s+ 1 + 72h ( δ ) p ) + 9η2p2κ2 ( ( s+ 1 ) 2 + 24h ( δ ) ) + 216h ( δ ) η2p2G2 . ( 54 ) Theorem 2 . For OLCO3-TC with vanilla SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ min { 16L ( s+1 ) p , 1 9L } and let h ( δ ) = 1−δ δ2 ( 1 + 4 ( 2−δ ) δ2 ) , then 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 ( f ( x0 ) − f∗ ) ηT + 9ηLσ2 K + 12η2pσ2 ( s+ 1 + 80h ( δ ) p ) + 12η2p2κ2 ( 3 ( s+ 1 ) 2 + 80h ( δ ) ) + 960η2p2G2h ( δ ) . ( 55 ) Proof . Following the proof of Theorem 1 , we have the same inequality as Eq . ( 41 ) : Etf ( x̃t+1 ) − f ( x̃t ) ≤ − η 3 ‖∇f ( x̃t ) ‖22 + 2ηL2 3 1 K K∑ k=1 ‖x̃t − x ( k ) t ‖22 + 3η2Lσ2 2K . ( 56 ) Then for the averaged parameters 1K ∑K k=1 x ( k ) t , ‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 2‖∇f ( 1 K K∑ k=1 x ( k ) t ) −∇f ( x̃t ) ‖22 + 2‖∇f ( x̃t ) ‖22 ≤ 2L2‖ 1 K K∑ k=1 e ( k ) t + et−sp‖22 + 2‖∇f ( x̃t ) ‖22 ≤ 4L 2 K K∑ k=1 ‖e ( k ) t ‖22 + 4L2 K ‖et−sp‖22 + 2‖∇f ( x̃t ) ‖22 ≤ 6 [ f ( x̃t ) − Etf ( x̃t+1 ) ] η + 9ηLσ2 K + 4L2 K K∑ k=1 ‖x̃t − x ( k ) t ‖22 + 4L2 K K∑ k=1 ‖e ( k ) t ‖22 + 4L2 K ‖et−sp‖22 . ( 57 ) Take total expectation , sum from t = 0 to t = T − 1 , and rearrange , 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 [ f ( x̃0 ) − Ef ( x̃T ) ] ηT + 9ηLσ2 K + 4L2 KT T−1∑ t=0 K∑ k=1 E‖e ( k ) t−sp‖22 + 4L2 KT T−1∑ t=0 E‖et−sp‖22 + 4L2 KT T−1∑ t=0 K∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 6 [ f ( x̃0 ) − Ef ( x̃T ) ] ηT + 9ηLσ2 K + 12η2pσ2 ( s+ 1 + 72h ( δ ) p+ 4 ( 1− δ ) δ2 p+ 32 ( 2− δ ) ( 1− δ ) δ4 p ) + 12η2p2κ2 ( 3 ( s+ 1 ) 2 + 72h ( δ ) + 4 ( 1− δ ) δ2 + 32 ( 2− δ ) ( 1− δ ) δ4 ) + 12η2p2G2 ( 72h ( δ ) + 4 ( 1− δ ) δ2 + 32 ( 2− δ ) ( 1− δ ) δ4 ) ≤ 6 ( f ( x̃0 ) − f∗ ) ηT + 9ηLσ2 K + 12η2pσ2 ( s+ 1 + 80h ( δ ) p ) + 12η2p2κ2 ( 3 ( s+ 1 ) 2 + 80h ( δ ) ) + 960η2p2G2h ( δ ) , ( 58 ) where the second inequality follows Lemmas 6 , 7 and 8 . F PROOF OF THEOREM 3 We first define two virtual variables zt and pt satisfying pt = { µ 1−µ ( x̃t − x̃t−1 ) , t ≥ 1 0 , t = 0 ( 59 ) and zt = x̃t + pt . ( 60 ) Then the update rule of zt satisfies zt+1 − zt = ( x̃t+1 − x̃t ) + µ 1− µ ( x̃t+1 − x̃t ) − µ 1− µ ( x̃t − x̃t−1 ) = − η K K∑ k=1 m ( k ) t+1 − µ 1− µ η K K∑ k=1 m ( k ) t+1 + µ 1− µ η K K∑ k=1 m ( k ) t = − η ( 1− µ ) K K∑ k=1 ( m ( k ) t+1 − µm ( k ) t ) = − η ( 1− µ ) K K∑ k=1 ∇fk ( x ( k ) t ; ξ ( k ) t ) , ( 61 ) which exists for OLCO3-OC , OLCO3-VQ and OLCO3-TC . Lemma 9 . For OLCO3 with Momentum SGD , we have E‖m ( k ) t ‖22 ≤ 3 ( σ2 + κ2 +G2 ) ( 1− µ ) 2 . ( 62 ) Proof . E‖m ( k ) t ‖22 = E‖ t−1∑ t′=0 µt−1−t ′ ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ‖ 2 2 = ( t−1∑ t′=0 µt−1−t ′ ) 2E‖ t−1∑ t′=0 µt−1−t ′∑t−1 t′=0 µ t−1−t′ ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ‖ 2 2 ≤ ( t−1∑ t′=0 µt−1−t ′ ) 2E‖∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ‖ 2 2 ≤ 3 ( σ2 + κ2 +G2 ) ( 1− µ ) 2 . ( 63 ) Lemma 10 . For OLCO3 with Momentum SGD , we have E‖pt‖2 ≤ 3µ2η2 ( σ2 + κ2 +G2 ) ( 1− µ ) 4 . ( 64 ) Proof . E‖pt‖2 = µ2 ( 1− µ ) 2 E‖x̃t − x̃t−1‖2 = µ2η2 ( 1− µ ) 2 E‖ 1 K K∑ k=1 m ( k ) t ‖2 ≤ µ2η2 ( 1− µ ) 2K K∑ k=1 E‖m ( k ) t ‖22 ≤ 3µ 2η2 ( σ2 + κ2 +G2 ) ( 1− µ ) 4 . ( 65 ) Lemma 11 . 
For OLCO3-VQ with Momentum SGD and under Assumptions 2 , 3 , 4 , and 5 , the local error satisfies E‖e ( k ) t ‖22 ≤ 12 ( 1− δ ) ( 1− µ ) 2δ2 p2η2 ( σ2 + κ2 +G2 ) . ( 66 ) Proof . Let St = b tpc , E‖e ( k ) t ‖22 = E‖e ( k ) Stp ‖22 = E‖C ( ∆ ( k ) Stp ) −∆ ( k ) Stp‖ 2 2 ≤ ( 1− δ ) E‖∆ ( k ) Stp ‖22 = ( 1− δ ) E‖ ST p−1∑ t′= ( St−1 ) p ηm ( k ) t′ + e ( k ) ( St−s−1 ) p‖ 2 2 ≤ ( 1− δ ) ( 1 + ρ ) E‖e ( k ) ( St−s−1 ) p‖ 2 2 + ( 1− δ ) ( 1 + 1 ρ ) 3η2p2 ( σ2 + κ2 +G2 ) ( 1− µ ) 2 . ( 67 ) Therefore , E‖e ( k ) t ‖22 ≤ 3 ( 1− δ ) ( 1 + 1 ρ ) p2η2 ( σ2 + κ2 +G2 ) ( 1− µ ) 2 bSts c−1∑ i=0 [ ( 1− δ ) ( 1 + ρ ) ] i ≤ 3 ( 1− δ ) ( 1 + 1ρ ) 1− ( 1− δ ) ( 1 + ρ ) p2η2 ( σ2 + κ2 +G2 ) ( 1− µ ) 2 . ( 68 ) Let ρ = δ2 ( 1−δ ) such that 1 + 1 ρ = 2−δ δ ≤ 2 δ , then E‖e ( k ) t ‖22 ≤ 12 ( 1−δ ) ( 1−µ ) 2δ2 p 2η2 ( σ2 + κ2 +G2 ) . Lemma 12 . For OLCO3-VQ with Momentum SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ 1−µ√ 72L ( s+1 ) p , we have 1 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 ≤ 4η2σ2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + ( s+ 1 ) p+ 12 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) + 12η2κ2 ( 1− µ ) 2 ( 1 ( 1− µ ) 2 + ( s+ 1 ) 2p2 + 4 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) + 12η2G2 ( 1− µ ) 2 ( 1 ( 1− µ ) 2 + 4 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) . ( 69 ) Proof . Let St = b tpc , 1 K K∑ k=1 E‖zt − x ( k ) t ‖22 = 1 K K∑ k=1 E‖ 1 K K∑ k′=1 ( − s−1∑ i=0 ∆ ( k′ ) ( St−i ) p − ( x ( k′ ) Stp − x ( k ′ ) t ) ) − ( − s−1∑ i=0 ∆ ( k ) ( St−i ) p − ( x ( k ) Stp − x ( k ) t ) − 1 K K∑ k′=1 e ( k ′ ) t−sp‖22 = 1 K K∑ k=1 E‖ 1 K K∑ k′=1 ( ηm ( k ′ ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1 + t−1∑ t′= ( St−s ) p η∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) t−1−t′∑ τ=0 µτ ) − ( ηm ( k ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1 + t−1∑ t′= ( St−s ) p η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) t−1−t′∑ τ=0 µτ ) + 1 K K∑ k′=1 e ( k ′ ) t−sp + 1 K K∑ k′=1 s−1∑ i=0 e ( k ′ ) ( St−i−s−1 ) p − s−1∑ i=0 e ( k ) ( St−i−s−1 ) p‖ 2 2 ≤ 3η 2 K K∑ k=1 E‖ 1 K K∑ k′=1 m ( k ′ ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1 −m ( k ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1‖22 + 3η2 K K∑ k=1 E‖ 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) t−1−t′∑ τ=0 µτ − t−1∑ t′= ( St−s ) p ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) t−1−t′∑ τ=0 µτ‖22 + 3 K K∑ k=1 E‖ 1 K K∑ k′=1 e ( k ′ ) t−sp + 1 K K∑ k′=1 s−1∑ i=0 e ( k ′ ) ( St−i−s−1 ) p + s−1∑ i=0 e ( k ) ( St−i−s−1 ) p‖ 2 2 . ( 70 ) The first term 3η2 K K∑ k=1 E‖ 1 K K∑ k′=1 m ( k ′ ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1 −m ( k ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1‖22 ≤ 3η 2 K K∑ k=1 E‖m ( k ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1‖22 ≤ 3η2 ( 1− µ ) 2K K∑ k=1 E‖m ( k ) ( St−s ) p‖ 2 2 ≤ 9η2 ( σ2 + κ2 +G2 ) ( 1− µ ) 4 , ( 71 ) where the last inequality follows Lemma 9 . Following Eq . 
( 29 ) , the second term can be bounded by 3η2 K K∑ k=1 E‖ 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) t−1−t′∑ τ=0 µτ − t−1∑ t′= ( St−s ) p ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) t−1−t′∑ τ=0 µτ‖22 = 3η2 K K∑ k=1 E‖ t−1∑ t′= ( St−s ) p [ 1 K K∑ k′=1 ( ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) −∇fk′ ( x ( k′ ) t′ ) ) − ( ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) −∇fk ( x ( k ) t′ ) ) ] t−1−t′∑ τ=0 µτ‖22 + 3η2 K K∑ k=1 E‖ t−1∑ t′= ( St−s ) p ( 1 K K∑ k′=1 ∇fk′ ( x ( k ′ ) t′ ) −∇fk ( x ( k ) t′ ) ) t−1−t′∑ τ=0 µτ‖22 ≤ 3η 2 ( 1− µ ) 2K K∑ k=1 t−1∑ t′= ( St−s ) p E‖∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) −∇fk ( x ( k ) t′ ) ‖ 2 2 + 3η2 ( 1− µ ) 2K ( t− ( St − s ) p ) K∑ k=1 t−1∑ t′= ( St−s ) p E‖ 1 K K∑ k′=1 ∇fk′ ( x ( k ′ ) t′ ) −∇fk ( x ( k ) t′ ) ‖ 2 2 ≤ 3η 2 ( s+ 1 ) pσ2 ( 1− µ ) 2 + 3η2 ( s+ 1 ) p ( 1− µ ) 2 t−1∑ t′=t− ( s+1 ) p ( 6L2 K K∑ k=1 E‖zt′ − x ( k ) t′ ‖ 2 2 + 3κ 2 ) , ( 72 ) where the last inequality follows Eq . ( 30 ) . Combine the bounds of the first and second term with Lemma 11 and Eq . ( 31 ) , 1 K K∑ k=1 E‖zt − x ( k ) t ‖22 ≤ 9η 2 ( σ2 + κ2 +G2 ) ( 1− µ ) 4 + 3η2 ( s+ 1 ) pσ2 ( 1− µ ) 2 + 3η2 ( s+ 1 ) p ( 1− µ ) 2 t−1∑ t′=t− ( s+1 ) p ( 6L2 K K∑ k=1 E‖zt′ − x ( k ) t′ ‖ 2 2 + 3κ 2 ) + 3 ( s+ 1 ) K K∑ k=1 s∑ i=0 E‖e ( k ) ( St−i−s ) p‖ 2 2 ≤ 9η 2 ( σ2 + κ2 +G2 ) ( 1− µ ) 4 + 3η2 ( s+ 1 ) pσ2 ( 1− µ ) 2 + 9η2 ( s+ 1 ) 2p2κ2 ( 1− µ ) 2 + 36 ( 1− δ ) η2 ( s+ 1 ) 2p2 ( σ2 + κ2 +G2 ) ( 1− µ ) 2δ2 + 18η2L2 ( s+ 1 ) p ( 1− µ ) 2 t−1∑ t′=t− ( s+1 ) p 1 K K∑ k=1 E‖zt′ − x ( k ) t′ ‖ 2 2 . ( 73 ) Sum the above inequality from t = 0 to t = T − 1 and divide it by T , 1 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 ≤ 3η2σ2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + ( s+ 1 ) p+ 12 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) + 9η2κ2 ( 1− µ ) 2 ( 1 ( 1− µ ) 2 + ( s+ 1 ) 2p2 + 4 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) + 9η2G2 ( 1− µ ) 2 ( 1 ( 1− µ ) 2 + 4 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) + 18η2L2 ( s+ 1 ) 2p2 ( 1− µ ) 2 1 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 . ( 74 ) If we choose η ≤ 1−µ√ 72L ( s+1 ) p , 1 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 ≤ 4η2σ2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + ( s+ 1 ) p+ 12 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) + 12η2κ2 ( 1− µ ) 2 ( 1 ( 1− µ ) 2 + ( s+ 1 ) 2p2 + 4 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) + 12η2G2 ( 1− µ ) 2 ( 1 ( 1− µ ) 2 + 4 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) . ( 75 ) Theorem 3 . For OLCO3-VQ with Momentum SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ min { 1−µ√ 72L ( s+1 ) p , 1−µ9L } and let g ( µ , δ , s , p ) = 15 ( 1−µ ) 2 + 60 ( 1−δ ) ( s+1 ) 2p2 δ2 , then 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 ( 1− µ ) ( f ( x0 ) − f∗ ) ηT + 9Lησ2 ( 1− µ ) K + 4η2L2 ( 1− µ ) 2 [ ( 4 ( s+ 1 ) p+ g ( µ , δ , s , p ) ) σ2 + ( 12 ( s+ 1 ) 2p2 + g ( µ , δ , s , p ) ) κ2 + g ( µ , δ , s , p ) G2 ] . ( 76 ) Proof . Following the proof of Theorem 1 and the update rule Eq . ( 61 ) , we have a similar inequality as Eq . ( 41 ) by choosing η ≤ 1−µ9L : Etf ( zt+1 ) − f ( zt ) ≤ η 1− µ ( −1 2 ‖∇f ( zt ) ‖22 + L2 2K K∑ k=1 ‖zt − x ( k ) t ‖22 ) + Lη2 2 ( 1− µ ) 2 ( 3σ2 K + 3L2 K K∑ k=1 ‖zt − x ( k ) t ‖22 + 3‖∇f ( zt ) ‖22 ) = − η 2 ( 1− µ ) ( 1− 3Lη 1− µ ) ‖∇f ( zt ) ‖22 + L2η 2 ( 1− µ ) K ( 1 + 3Lη 1− µ ) K∑ k=1 ‖zt − x ( k ) t ‖22 + 3Lη2σ2 2 ( 1− µ ) 2K ≤ − η 3 ( 1− µ ) ‖∇f ( zt ) ‖22 + 2ηL2 3 ( 1− µ ) K K∑ k=1 ‖zt − x ( k ) t ‖22 + 3Lη2σ2 2 ( 1− µ ) 2K . 
( 77 ) Then for the averaged parameters 1K ∑K k=1 x ( k ) t , ‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 2‖∇f ( 1 K K∑ k=1 x ( k ) t ) −∇f ( zt ) ‖22 + 2‖∇f ( zt ) ‖22 ≤ 2L2‖ 1 K K∑ k=1 e ( k ) t−sp − pt‖22 + 2‖∇f ( zt ) ‖22 ≤ 4L2‖ 1 K K∑ k=1 e ( k ) t−sp‖22 + 4L2‖pt‖22 + 2‖∇f ( zt ) ‖22 . ( 78 ) Therefore 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 ( 1− µ ) [ f ( z0 ) − f ( zT ) ] ηT + 9Lησ2 ( 1− µ ) K + 4L2 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 + 4L2 T T−1∑ t=0 E‖ 1 K K∑ k=1 e ( k ) t−sp‖22 + 4L2 T T−1∑ t=0 E‖pt‖22 ≤ 6 ( 1− µ ) ( f ( x0 ) − f∗ ) ηT + 9Lησ2 ( 1− µ ) K + 4η2L2 ( 1− µ ) 2 [ ( 4 ( s+ 1 ) p+ g ( µ , δ , s , p ) ) σ2 + ( 12 ( s+ 1 ) 2p2 + g ( µ , δ , s , p ) ) κ2 + g ( µ , δ , s , p ) G2 ] . ( 79 ) where the last inequality follows Lemmas 10 , 11 and 12 and g ( µ , δ , s , p ) = 15 ( 1−µ ) 2 + 60 ( 1−δ ) ( s+1 ) 2p2 δ2 . G PROOF OF THEOREM 4 Lemma 13 . For OLCO3-TC with Momentum SGD and under Assumptions 2 , 3 , 4 , and 5 , the local error satisfies E‖e ( k ) t ‖22 ≤ 12 ( 1− δ ) ( 1− µ ) 2δ2 p2η2 ( σ2 + κ2 +G2 ) . ( 80 ) Proof . Same as the proof of Lemma 11 , except that e ( k ) ( St−s−1 ) p is replaced with e ( k ) ( St−1 ) p. Lemma 14 . For OLCO3-TC with vanilla SGD and under Assumptions 2 , 3 , 4 , and 5 , the server error satisfies E‖et‖22 ≤ 96 ( 2− δ ) ( 1− δ ) ( 1− µ ) 2δ4 p2η2 ( σ2 + κ2 +G2 ) . ( 81 ) Proof . Let St = b tpc . Following the proof of Lemma 11 we have E‖∆ ( k ) Stp ‖22 ≤ 3 ( 1+ 1ρ ) 1− ( 1−δ ) ( 1+ρ ) p2η2 ( σ2+κ2+G2 ) ( 1−µ ) 2 . Therefore , E‖et‖22 ≤ 2 ( 2− δ ) ( 1− δ ) ( 1 + 1 ρ ) 1 K K∑ k=1 E‖∆ ( k ) Stp‖ 2 2 + ( 1− δ ) ( 1 + ρ ) E‖e ( St−1 ) p‖ 2 2 ≤ 2 ( 2− δ ) ( 1− δ ) ( 1 + 1 ρ ) 3 ( 1 + 1ρ ) 1− ( 1− δ ) ( 1 + ρ ) p2η2 ( σ2 + κ2 +G2 ) ( 1− µ ) 2 + ( 1− δ ) ( 1 + ρ ) E‖e ( St−1 ) p‖ 2 2 ≤ 6 ( 2− δ ) ( 1− δ ) ( 1 + 1ρ ) 2 [ 1− ( 1− δ ) ( 1 + ρ ) ] 2 ( 1− µ ) 2 p2η2 ( σ2 + κ2 +G2 ) , ( 82 ) where the first inequality follows the proof of Lemma 7 . Let ρ = δ2 ( 1−δ ) such that 1+ 1 ρ = 2−δ δ ≤ 2 δ , then E‖e ( k ) t ‖22 ≤ E‖e ( k ) t ‖22 ≤ 96 ( 2−δ ) ( 1−δ ) ( 1−µ ) 2δ4 p 2η2 ( σ2 + κ2 +G2 ) . Lemma 15 . For OLCO3-TC with Momentum SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ 1−µ√ 72L ( s+1 ) p , we have 1 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 ≤ 3η2σ2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + ( s+ 1 ) p+ 72h ( δ ) p2 ) + 3η2κ2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + 3 ( s+ 1 ) 2p2 + 72h ( δ ) p2 ) + 3η2G2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + 72h ( δ ) p2 ) , ( 83 ) where h ( δ ) = 1−δδ2 ( 1 + 4 ( 2−δ ) δ2 ) . Proof . Let St = b tpc , 1 K K∑ k=1 E‖zt − x ( k ) t ‖22 = 1 K K∑ k=1 E‖ 1 K K∑ k′=1 ( − s−1∑ i=0 C ( ∆ ( k ′ ) ( St−i ) p ) − ( x ( k′ ) Stp − x ( k ′ ) t ) ) − ( − s−1∑ i=0 C ( ∆ ( k ) ( St−i ) p ) − ( x ( k′ ) Stp − x ( k ) t ) − 1 K K∑ k′=1 e ( k ) t − et−sp‖22 , ( 84 ) where s−1∑ i=0 C ( ∆ ( k ) ( St−i ) p ) + ( x ( k ) Stp − x ( k ) t ) = s−1∑ i=0 [ ∆ ( k ) ( St−i ) p − e ( k ) ( St−i ) p ] + ( x ( k ) Stp − x ( k ) t ) = s−1∑ i=0 [ m ( k ) ( St−i−1 ) p p−1∑ τ=0 µτ+1 + ( St−i ) p−1∑ t′= ( St−i−1 ) p η∇F ( x ( k ) t′ ; ξ ( k ) t′ ) ( St−i ) p−1−t′∑ τ=0 µτ + e ( k ) ( St−i−1 ) p − e ( k ) ( St−i ) p ] + m ( k ) Stp t−1−Stp∑ τ=0 µτ+1 + t−1∑ t′=Stp η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) t−1−t′∑ τ=0 µτ = m ( k ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1 + t−1∑ t′= ( St−s ) p η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) t−1−t′∑ τ=0 µτ + e ( k ) ( St−s ) p − e ( k ) Stp . 
( 85 ) Therefore , 1 K K∑ k=1 E‖zt − x ( k ) t ‖22 = 1 K K∑ k=1 E‖ 1 K K∑ k′=1 ηm ( k ′ ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1 − ηm ( k ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1 + 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p η∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) t−1−t′∑ τ=0 µτ − t−1∑ t′= ( St−s ) p η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) t−1−t′∑ τ=0 µτ + 1 K K∑ k′=1 e ( k ′ ) ( St−s ) p − e ( k ) ( St−s ) p + e ( k ) Stp + et−sp‖22 = 3η2 ( 1− µ ) 2K K∑ k=1 E‖m ( k ) ( St−s ) p‖ 2 2 + 3 K K∑ k=1 E‖ 1 K K∑ k′=1 e ( k ′ ) ( St−s ) p − e ( k ) ( St−s ) p + e ( k ) Stp + et−sp‖22 + 3η2 K K∑ k=1 E‖ 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) t−1−t′∑ τ=0 µτ − t−1∑ t′= ( St−s ) p ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) t−1−t′∑ τ=0 µτ‖22 , ( 86 ) where the first term is bounded following Lemma 9 and the third term is bounded following Eq . ( 72 ) . The second term 3 K K∑ k=1 E‖ 1 K K∑ k′=1 e ( k ′ ) ( St−s ) p − e ( k ) ( St−s ) p + e ( k ) Stp + et−sp‖22 ≤ 9 K K∑ k=1 E‖e ( k ) ( St−s ) p‖ 2 2 + 9 K K∑ k=1 E‖e ( k ) Stp‖ 2 2 + 9 K K∑ k=1 E‖e ( St−s ) p‖ 2 2 ≤ 1− δ ( 1− µ ) 2δ2 ( 1 + 4 ( 2− δ ) δ2 ) · 216p2η2 ( σ2 + κ2 +G2 ) . ( 87 ) Combine these bounds , 1 K K∑ k=1 E‖zt − x ( k ) t ‖22 ≤ 9η2 ( σ2 + κ2 +G2 ) ( 1− µ ) 4 + 1− δ ( 1− µ ) 2δ2 ( 1 + 4 ( 2− δ ) δ2 ) · 216p2η2 ( σ2 + κ2 +G2 ) + 3η2 ( s+ 1 ) pσ2 ( 1− µ ) 2 + 3η2 ( s+ 1 ) p ( 1− µ ) 2 t−1∑ t′=t− ( s+1 ) p ( 6L2 K K∑ k=1 E‖zt′ − x ( k ) t′ ‖ 2 2 + 3κ 2 ) = 18η2L2 ( s+ 1 ) p ( 1− µ ) 2 t−1∑ t′=t− ( s+1 ) p 1 K K∑ k=1 E‖zt′ − x ( k ) t′ ‖ 2 2 ( 88 ) Sum the above inequality from t = 0 to t = T − 1 , divide it by T , and choose η ≤ 1−µ√ 72L ( s+1 ) p , 1 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 ≤ 3η2σ2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + ( s+ 1 ) p+ 72h ( δ ) p2 ) + 3η2κ2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + 3 ( s+ 1 ) 2p2 + 72h ( δ ) p2 ) + 3η2G2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + 72h ( δ ) p2 ) . ( 89 ) Theorem 4 . For OLCO3-TC with Momentum SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ min { 1−µ√ 72L ( s+1 ) p , 1−µ9L } and let h ( δ ) = 1−δ δ2 ( 1 + 4 ( 2−δ ) δ2 ) , then 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ ≤ 6 ( 1− µ ) ( f ( x0 ) − f∗ ) ηT + 9Lησ2 ( 1− µ ) K + 6η2L2 ( 1− µ ) 2 [ σ2 ( 9 ( 1− µ ) 2 + 2 ( s+ 1 ) p+ 168h ( δ ) p2 ) + κ2 ( 9 ( 1− µ ) 2 + 6 ( s+ 1 ) 2p2 + 168h ( δ ) p2 ) +G2 ( 9 ( 1− µ ) 2 + 168h ( δ ) p2 ) ] . ( 90 ) Proof . Following the proof of Theorem 3 , Etf ( zt+1 ) −f ( zt ) ≤ − η 3 ( 1− µ ) ‖∇f ( zt ) ‖22 + 2ηL2 3 ( 1− µ ) K K∑ k=1 ‖zt−x ( k ) t ‖22 + 3Lη2σ2 2 ( 1− µ ) 2K , ( 91 ) ‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 2‖∇f ( 1 K K∑ k=1 x ( k ) t ) −∇f ( zt ) ‖22 + 2‖∇f ( zt ) ‖22 ≤ 2L2‖ 1 K K∑ k=1 e ( k ) t + et−sp − pt‖22 + 2‖∇f ( zt ) ‖22 ≤ 6L2‖ 1 K K∑ k=1 e ( k ) t−sp‖22 + 6L2‖et−sp‖22 + 6L2‖pt‖22 + 2‖∇f ( zt ) ‖22 . ( 92 ) Therefore 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 ( 1− µ ) [ f ( z0 ) − f ( zT ) ] ηT + 9Lησ2 ( 1− µ ) K + 4L2 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 + 6L2 T T−1∑ t=0 E‖ 1 K K∑ k=1 e ( k ) t−sp‖22 + 6L2 T T−1∑ t=0 E‖et−sp‖22 + 6L2 T T−1∑ t=0 E‖pt‖22 ≤ 6 ( 1− µ ) ( f ( x0 ) − f∗ ) ηT + 9Lησ2 ( 1− µ ) K + 6η2L2 ( 1− µ ) 2 [ σ2 ( 9 ( 1− µ ) 2 + 2 ( s+ 1 ) p+ 168h ( δ ) p2 ) + κ2 ( 9 ( 1− µ ) 2 + 6 ( s+ 1 ) 2p2 + 168h ( δ ) p2 ) +G2 ( 9 ( 1− µ ) 2 + 168h ( δ ) p2 ) ] , ( 93 ) where the last inequality follows Lemmas 10 , 13 , 14 and 15 .
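As a sanity check on Assumption 5, which the lemmas above invoke repeatedly, the scaled sign compressor used by the OLCO3-OC and OLCO3-TC variants satisfies the δ-approximate property with δ = ‖v‖_1^2 / (d ‖v‖_2^2) (Karimireddy et al., 2019). The short NumPy snippet below is an illustrative sketch added for exposition rather than part of the original derivations; it verifies ‖C(v) − v‖_2^2 ≤ (1 − δ)‖v‖_2^2 on random vectors standing in for gradients, and all names in it are assumptions of this sketch.

```python
# Sketch (assumption: NumPy, with random test vectors standing in for real gradients).
import numpy as np

def sign_compress(v):
    """Scaled sign compressor C(v) = (||v||_1 / d) * sign(v)."""
    return (np.abs(v).sum() / v.size) * np.sign(v)

rng = np.random.default_rng(1)
for _ in range(5):
    v = rng.normal(size=1000)
    err = np.linalg.norm(sign_compress(v) - v) ** 2          # ||C(v) - v||^2
    delta = np.abs(v).sum() ** 2 / (v.size * np.linalg.norm(v) ** 2)
    bound = (1 - delta) * np.linalg.norm(v) ** 2              # (1 - delta) ||v||^2
    assert err <= bound + 1e-8                                # Assumption 5 holds
    print(f"delta = {delta:.3f}, error = {err:.2f}, bound = {bound:.2f}")
```

For the scaled sign compressor the inequality in fact holds with equality, so the printed error matches the printed bound.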
This paper proposes OLCO3, a new delay-tolerant SGD communication scheme and training framework for distributed deep neural network training. OLCO3 combines the existing ideas of stale synchronous parallelism, batching the communication of multiple local iterations, and gradient compression to achieve greater communication efficiency. OLCO3 also uses staleness compensation and compression compensation techniques to improve model convergence under high staleness. Theoretical analysis shows that OLCO3 converges under both SGD and momentum SGD. The evaluation was done with ResNet models on the CIFAR-10 and ImageNet datasets. Under a high delay tolerance of 56, OLCO3 achieves better convergence and lower communication traffic than the baseline methods.
SP:7ca5ba13170227684a45a4fef71675925b752f87
Delay-Tolerant Local SGD for Efficient Distributed Training
1 INTRODUCTION Data-parallel synchronous SGD is currently the workhorse algorithm for large-scale distributed deep learning tasks with many workers (e.g., GPUs), where each worker calculates the stochastic gradient on local data and synchronizes with the other workers in one training iteration (Goyal et al., 2017; You et al., 2017; Huo et al., 2020). However, high communication overheads make it inefficient to train large deep neural networks (DNNs) with a large number of workers. Generally speaking, the communication overheads come in two forms: 1) high communication delay due to an unstable network or a large number of communication hops, and 2) a large communication budget caused by the large size of DNN models under limited network bandwidth. Although communication delay is not a prominent problem in the data center environment, it can severely degrade training efficiency in practical scenarios, e.g., when the workers are geo-distributed or placed under different networks (Ethernet, cellular networks, Wi-Fi, etc.) in federated learning (Konečnỳ et al., 2016). Existing works addressing the communication inefficiency of synchronous SGD can be roughly classified into three categories: 1) pipelining (Pipe-SGD (Li et al., 2018)); 2) gradient compression (Aji & Heafield, 2017; Stich et al., 2018; Alistarh et al., 2018; Yu et al., 2018; Vogels et al., 2019); and 3) periodic averaging (also known as Local SGD) (Stich, 2019; Lin et al., 2018a). In pipelining, the model update uses stale information so that the next iteration does not wait for the synchronization of the current iteration before updating the model. As the synchronization barrier is removed, pipelining can overlap computation with communication to achieve delay tolerance. Gradient compression reduces the amount of data transferred in each iteration by condensing the gradient with a compressor C(·). Representative methods include scalar quantization (Alistarh et al., 2017; Wen et al., 2017; Bernstein et al., 2018), gradient sparsification (Aji & Heafield, 2017; Stich et al., 2018; Alistarh et al., 2018), and vector quantization (Yu et al., 2018; Vogels et al., 2019). Periodic averaging reduces the frequency of communication by synchronizing the workers every p (larger than 1) iterations. Periodic averaging has also been shown to be effective for federated learning (McMahan et al., 2017). In summary, existing works handle high communication delay with pipelining, and use gradient compression and periodic averaging to reduce the communication budget. However, no existing method addresses both issues at once, and it is also unclear how the three communication-efficient techniques introduced above can be used jointly without hurting the convergence of SGD. In this paper, we propose a novel framework, Overlap Local Computation with Compressed Communication (i.e., OLCO3), to make distributed training both delay-tolerant AND communication-efficient by enabling and improving the combination of the above three communication-efficient techniques. In Table 1, we compare OLCO3 with the aforementioned works and two succeeding state-of-the-art delay-tolerant methods, CoCoD-SGD (Shen et al., 2019) and OverlapLocalSGD (Wang et al., 2020).
Under the periodic averaging framework , we use p to denote the number of local SGD iterations per communication round , and s to denote the number of communication rounds that the information used in the model update has been outdated for . Let the computation time of one SGD iteration be Tcomput , then we can pipeline the communication and the computation when the communication delay time is less than sp · Tcomput . For simplicity , we define the delay tolerance of a method as T = sp . Local SGD has to use up-to-date information for the model update ( s = 0 , p ≥ 1 , T = sp = 0 ) . CoCoD-SGD and OverlapLocalSGD combine pipelining and periodic averaging by using stale results from last communication round ( s = 1 , p ≥ 1 , T = sp = p ) , while our OLCO3 supports various staleness ( s ≥ 1 , p ≥ 1 , T = sp ) and all other features in Table 1 . The main contributions of this paper are summarized as follows : • We propose the novel OLCO3 method , which achieves extreme communication efficiency by addressing both the high communication delay and large communication budget issues . • OLCO3 introduces novel staleness compensation and compression compensation techniques . Convergence analysis shows that OLCO3 achieves the same convergence rate as SGD . • Extensive experiments on deep learning tasks show that OLCO3 significantly outperforms ex- isting delay-tolerant methods in both the communication efficiency and model accuracy . 2 BACKGROUNDS & RELATED WORKS SGD and Pipelining . In distributed training , we minimize the global loss function f ( · ) = 1 K ∑K k=1 fk ( · ) , where fk ( · ) is the local loss function at worker k ∈ [ K ] . At iteration t , vanilla synchronous SGD updates the model xt ∈ Rd with learning rate ηt via xt+1 = xt − ηt K ∑K k=1∇Fk ( xt ; ξ ( k ) t ) , where ξ ( k ) t is the stochastic sampling variable and ∇Fk ( xt ; ξ ( k ) t ) is the corresponding stochastic gradient at worker k. Throughout this paper , we assume that the stochastic gradient is an unbiased estimator by default , i.e. , E ξ ( k ) t ∇Fk ( xt ; ξ ( k ) t ) = ∇fk ( xt ) . Pipe-SGD ( Li et al. , 2018 ) parallelizes the communication and computation of SGD via pipelining . At iteration t , worker k computes stochastic gradient ∇Fk ( xt ; ξ ( k ) t ) at current model xt and communicates to get the averaged stochastic gradient 1K ∑K k=1∇Fk ( xt ; ξ ( k ) t ) . Instead of waiting the communication to finish , Pipe-SGD concurrently updates the current model with stale averaged stochastic gradient via xt+1 = xt − ηtK ∑K k=1∇Fk ( xt−s ; ξ ( k ) t−s ) . Note that Pipe-SGD is different from asynchronous SGD ( Ho et al. , 2013 ; Lian et al. , 2015 ) which computes stochastic gradient using stale model and does not parallelize the computation and communication of a worker . A problem of Pipe-SGD is that its performance deteriorates severely under high communication delay ( large s ) . Pipelining with Periodic Averaging . CoCoD-SGD ( Shen et al. , 2019 ) utilizes periodic averaging to reduce the number of communication rounds and parallelizes the local model update and global model averaging by concurrently conducting xt = 1 K K∑ k=1 x ( k ) t and x ( k ) t+p = x ( k ) t − t+p−1∑ τ=t ητ∇Fk ( x ( k ) τ ; ξ ( k ) τ ) . ( 1 ) in which x ( k ) t denotes the local model at worker k as the local models on different workers are no longer consistent in non-communicating iterations . When the operations in Eq . ( 1 ) finishes , the local model is updated via x ( k ) t+p ← xt + x ( k ) t+p− x ( k ) t and t← t+ p. 
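To make the overlapped update in Eq. (1) concrete, below is a minimal, self-contained Python sketch of CoCoD-SGD-style training on a toy least-squares problem; it is an illustration under simplifying assumptions (workers simulated in one process and a sleep standing in for network delay), not the implementation used in this paper or in Shen et al. (2019). The averaging of the models snapshotted at the round boundary runs in a background thread while each worker takes its next p local SGD steps, after which the local models are folded back in via x_{t+p}^{(k)} ← x_t + x_{t+p}^{(k)} − x_t^{(k)}. All variable and function names are illustrative.

```python
# Minimal sketch: CoCoD-SGD-style overlap of averaging with the next p local steps.
import threading, time
import numpy as np

rng = np.random.default_rng(0)
K, d, p, eta = 4, 10, 8, 0.05                        # workers, dimension, period, learning rate
A = [rng.normal(size=(50, d)) for _ in range(K)]     # per-worker data (illustrative)
b = [a @ rng.normal(size=d) for a in A]
x_local = [np.zeros(d) for _ in range(K)]            # x_t^(k)

def local_grad(k, x):
    i = rng.integers(0, 50)                          # one stochastic sample
    return (A[k][i] @ x - b[k][i]) * A[k][i]

def average(models, out):
    time.sleep(0.01)                                 # stand-in for slow communication
    out["avg"] = sum(models) / len(models)           # x_t = (1/K) * sum_k x_t^(k)

for round_idx in range(20):
    snapshot = [x.copy() for x in x_local]           # x_t^(k) at the round boundary
    result = {}
    comm = threading.Thread(target=average, args=(snapshot, result))
    comm.start()                                     # communication overlaps with ...
    for k in range(K):                               # ... the next p local steps (workers simulated sequentially)
        for _ in range(p):
            x_local[k] -= eta * local_grad(k, x_local[k])
    comm.join()                                      # Eq. (1): fold the average back in
    x_local = [result["avg"] + (x_local[k] - snapshot[k]) for k in range(K)]

x_bar = sum(x_local) / K
print("final loss:", np.mean([np.mean((A[k] @ x_bar - b[k]) ** 2) for k in range(K)]))
```

Because only the snapshot taken at the round boundary is averaged, the communicated information may lag the local computation by up to p steps, which is exactly the s = 1 delay tolerance discussed above.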
CoCoD-SGD can tolerate delay of up to p SGD iterations (i.e., one communication round in periodic averaging). OverlapLocalSGD (Wang et al., 2020) improves CoCoD-SGD by heuristically pulling x_{t+p}^{(k)} back towards x_t after the operations in Eq. (1), via x_{t+p}^{(k)} ← (1 − α) x_{t+p}^{(k)} + α x_t, where 0 ≤ α < 1. The motivation is to reduce the inconsistency in the local models across workers. OverlapLocalSGD also develops a momentum variant, which maintains a slow momentum buffer for x_t following SlowMo (Wang et al., 2019). As both CoCoD-SGD and OverlapLocalSGD communicate the non-compressed local model update, they suffer from a large communication budget in each communication round. Gradient Compression. The gradient vector v ∈ R^d can be sent with a much smaller communication budget by applying a compressor C(·). Specifically, scalar quantization rounds 32-bit floating-point gradient components to low-precision values of only several bits. One important such algorithm is scaled SignSGD (called SignSGD in this paper) (Bernstein et al., 2018; Karimireddy et al., 2019), which uses C(v) = (‖v‖_1 / d) sign(v) to compress v to 1 bit per component. Gradient sparsification only communicates large gradient components. Vector quantization uses a codebook where each code is a vector and quantizes the gradient vector as a linear combination of the vector codes. With the local error feedback technique (Seide et al., 2014; Lin et al., 2018b; Wu et al., 2018; Karimireddy et al., 2019; Zheng et al., 2019), which adds the previous compression error (i.e., v − C(v)) to the current gradient before compression, gradient compression can achieve performance comparable to full-precision training. Local error feedback also works for both one-way compression (compressing the communication from worker to server) (Karimireddy et al., 2019) and two-way compression (compressing the communication between worker and server) (Zheng et al., 2019). Challenges. Simultaneously achieving communication compression with pipelining and periodic averaging requires careful algorithm design because 1) pipelining introduces staleness, and 2) state-of-the-art vector quantization methods usually require an additional round of communication to solve the compressor C(·), which is unfavorable in high-communication-delay scenarios. 3 THE PROPOSED FRAMEWORK: OLCO3 In this section, we introduce our new delay-tolerant and communication-efficient training framework OLCO3. We discuss two variants of OLCO3: OLCO3-TC for two-way compression in the master-slave communication mode, and OLCO3-VQ adopting commutative vector quantization for both the master-slave and ring all-reduce communication modes. Note that one-way compression is just a special case of OLCO3-TC, and we omit it for conciseness. We use “line x” to refer to the x-th line of Algorithm 1. The key differences between OLCO3-TC and OLCO3-VQ are marked in red color. 3.1 OLCO3-TC FOR TWO-WAY COMPRESSION Motivation. OLCO3-TC is presented in the green part of Algorithm 1 for efficient master-slave distributed training. Naively pipelining local computation with compressed communication will break the update rule of momentum SGD for the averaged model x_t = (1/K) Σ_{k=1}^K x_t^{(k)}, leading to non-convergence. Therefore, we consider an auxiliary variable x̃_t := (1/K) Σ_{k=1}^K x_t^{(k)} − (1/K) Σ_{k=1}^K e_t^{(k)} − e_t, where e_t^{(k)} is the local compression error at worker k and e_t is the compression error at the server.
If x̃t can follow the update rule of momentum SGD , then the real trained model xt will gradually approach x̃t as the training converges because the gradient and errors e ( k ) t , et → 0 . Pipelining . For non-communicating iterations , we perform the local update following Local SGD ( line 4 ) . A communicating iteration takes place every p iterations . To pipeline the communication Algorithm 1 Overlap Local Computation with Compressed Communication ( OLCO3 ) on worker k ∈ [ K ] . Green part : OLCO3-TC ; Yellow part : OLCO3-VQ . Best view in color . 1 : Input : period p ≥ 1 , staleness s ≥ 0 , number of iterations T , number of workers K , learning rate { ηt } T−1t=0 , and compression scheme C ( · ) . 2 : Initialize : Local model x ( k ) 0 = x0 , local error e ( k ) 0 = 0 , server error e0 = 0 , local momentum buffer m ( k ) 0 = 0 , and momentum constant 0 < µ < 1 . Variables with negative subscripts are 0 . 3 : for t = 0 , 1 , · · · , T − 1 do 4 : m ( k ) t+1 = µm ( k ) t +∇Fk ( x ( k ) t ; ξ ( k ) t ) , x ( k ) t+1 = x ( k ) t − ηtm ( k ) t+1 // Momentum Local SGD . 5 : if ( t+ 1 ) mod p = 0 then 6 : Maintain or reset the momentum buffer . 7 : ∆ ( k ) t+1 = x ( k ) t+1−p − x ( k ) t+1 + e ( k ) t // Compression compensation . 8 : e ( k ) t+1 = e ( k ) t+2 = · · · = e ( k ) t+p = ∆ ( k ) t+1 − C ( ∆ ( k ) t+1 ) // Compression . 9 : Invoke the communication thread in parallel which does : 10 : ( 1 ) Send C ( ∆ ( k ) t+1 ) to and receive C ( ∆t+1 ) from the server node . 11 : ( 2 ) Server : ∆t+1 = 1K ∑K k=1 C ( ∆ ( k ) t+1 ) + et ; et+1 = et+2 = · · · = et+p = ∆t+1 − C ( ∆t+1 ) . 12 : Block until C ( ∆t+1−sp ) is ready . 13 : xt+1 = xt+1−p − C ( ∆t+1−sp ) 14 : x ( k ) t+1 ← xt+1 − ∑s−1 i=0 C ( ∆ ( k ) t+1−ip ) // Staleness compensation . 15 : ∆ ( k ) t+1 = x ( k ) t+1−p − x ( k ) t+1 + e ( k ) t−sp // Compression compensation . 16 : Invoke the communication thread in parallel which does : 17 : ( 1 ) e ( k ) t+1 = e ( k ) t+2 = · · · = e ( k ) t+p = ∆ ( k ) t+1 − C ( ∆ ( k ) t+1 ) // Compression . 18 : ( 2 ) Average 1K ∑K k=1 C ( ∆ ( k ) t+1 ) by ring all-reduce or master-slave communication . 19 : Block until 1K ∑K k=1 C ( ∆ ( k ) t+1−sp ) and e ( k ) t+1−sp is ready . 20 : xt+1 = xt+1−p − 1K ∑K k=1 C ( ∆ ( k ) t+1−sp ) 21 : x ( k ) t+1 ← xt+1 − ∑s−1 i=0 ∆ ( k ) t+1−ip // Staleness compensation . 22 : end if 23 : end for 24 : Output : averaged model xT = 1K ∑K k=1 x ( k ) T and computation , we compress the local update ∆ ( k ) t+1 ( line 7 ) for efficient communication , and at the same time , try to update the model with a stale compressed global update C ( ∆t+1−sp ) ( line 13 ) that has been outdated for s communication rounds ( i.e. , the staleness is s ) . The momentum buffer can be maintained or reset to zero every p iteration ( line 6 ) . If the delay tolerance T = sp is larger than the actual communication delay , the blocking in line 12 becomes a no-op and there will be no synchronization barrier . The server compresses the sum of the compressed local updates from all workers ( line 11 ) and sends it back , making OLCO3-TC an efficient two-way compression method . Compensation . 
To make the update of the auxiliary variable x̃t follow momentum SGD , we propose to 1 ) compensate staleness with all compressed local updates with staleness ∈ [ 0 , s − 1 ] ( line 14 ) , which requires no communication and allows less stale local update to affect the local model , and 2 ) maintain a local error ( line 8 ) and add it to the next local update before compression ( line 7 ) to compensate the compression error . With the two compensation techniques in OLCO3-TC , Lemma 1 shows that the update rule of x̃t follows momentum SGD with averaged momentum 1K ∑K k=1 m ( k ) t . Lemma 1 . For OLCO3-TC , let x̃t : = 1K ∑K k=1 x ( k ) t − 1K ∑K k=1 e ( k ) t − et−sp , then we have x̃t = x̃t−1 − ηt−1K ∑K k=1 m ( k ) t . Note that there is a “ gradient mismatch ” problem as the local momentum m ( k ) t is computed at the local model x ( k ) t but used in the update rule of the auxiliary variable x̃t ( Karimireddy et al. , 2019 ; Xu et al. , 2020 ) . However , our analysis shows that it does not affect the convergence rate . We have also considered OLCO3 for one-way compression ( i.e. , OLCO3-OC ) as a special case of OLCO3-TC . In OLCO3-OC , the compressor at the server side is identity function and the server error et is 0 . For OLCO3-OC , the auxiliary variable x̃t also follows momentum SGD as stated in Lemma 2 . Lemma 2 . For OLCO3-OC , let x̃t : = 1K ∑K k=1 x ( k ) t − 1K ∑K k=1 e ( k ) t , then we have x̃t = x̃t−1 − ηt−1 K ∑K k=1 m ( k ) t . We can see that the delay tolerance of both OLCO3-TC and OLCO3-OC are T = sp ( s ≥ 1 , p ≥ 1 ) . They have a memory overhead of O ( sd ) for storing information with staleness ∈ [ 0 , s − 1 ] . For most compression schemes such as SignSGD , the computation complexity of C ( · ) is O ( d ) . 3.2 OLCO3-VQ FOR COMMUTATIVE VECTOR QUANTIZATION OLCO3-TC and OLCO3-OC work for compressed communication in the master-slave communication paradigm . In contrast , OLCO3-VQ ( the yellow part of Algorithm 1 ) works for both the master-slave and ring all-reduce communication paradigms . Ring all-reduce minimizes communication congestion by shifting from centralized aggregation in master-slave communication ( Yu et al. , 2018 ) . OLCO3-VQ relies on a state-of-the-art vector quantization scheme , PowerSGD ( Vogels et al. , 2019 ) , which satisfies commutability for compression , i.e. , C ( v1 ) + C ( v2 ) = C ( v1 + v2 ) . However , directly using PowerSGD breaks the delay tolerance of OLCO3 as its compressor C ( · ) needs communication and introduces synchronization barriers . Specifically , PowerSGD invokes communication across all workers to compute a transformation matrix , which is used to project the local updates to the compressed form . Pipelining with Communication-Dependent Compressor . To make OLCO3-VQ delay-tolerant , we further propose a novel compression compensation technique with the stale local error ( line 15 ) . This is in contrast to OLCO3-TC and OLCO3-OC , which use immediate compressed results to calculate the up-to-date local error . As this technique removes the dependency on immediate compressed results , we can move the whole compression and averaging process to the communication thread ( lines 17 and 18 ) . For staleness compensation , OLCO3-VQ uses all uncompressed local updates with staleness ∈ [ 0 , s−1 ] instead of compressed local updates in OLCO3-TC . With the two compensation techniques , Lemma 3 shows that for OLCO3-VQ , the auxiliary variable x̃t associated with the stale local error also follows the momentum SGD update rule . 
4 THEORETICAL RESULTS

In this section, we provide the convergence results of the OLCO3 variants for both SGD and momentum SGD with the momentum buffer maintained (line 6 of Algorithm 1), under common assumptions. As OLCO3-OC is a special case of OLCO3-TC, we only analyze OLCO3-TC and OLCO3-VQ. The detailed proofs of Theorems 1, 2, 3, and 4 can be found in Appendices D, E, F, and G, respectively. The detailed proofs of Lemmas 1, 2, and 3 can be found in Appendix C. We use f* to denote the optimal loss.

Assumption 1 (L-Lipschitz Smoothness). Both the local (f_k(·)) and global (f(·) = (1/K) ∑_{k=1}^K f_k(·)) loss functions are L-smooth, i.e., ‖∇f(x) − ∇f(y)‖_2 ≤ L‖x − y‖_2 for all x, y ∈ R^d, and ‖∇f_k(x) − ∇f_k(y)‖_2 ≤ L‖x − y‖_2 for all k ∈ [K] and all x, y ∈ R^d.

Assumption 2 (Local Bounded Variance). The local stochastic gradient ∇F_k(x; ξ) has bounded variance, i.e., E_{ξ∼D_k}‖∇F_k(x; ξ) − ∇f_k(x)‖_2^2 ≤ σ^2 for all k ∈ [K] and all x ∈ R^d. Note that E_{ξ∼D_k}∇F_k(x; ξ) = ∇f_k(x).

Assumption 3 (Bounded Variance across Workers). The ℓ_2 norm of the difference between the local and global full gradients is bounded, i.e., ‖∇f_k(x) − ∇f(x)‖_2^2 ≤ κ^2 for all k ∈ [K] and all x ∈ R^d. κ = 0 corresponds to i.i.d. data distributions across workers.

Assumption 4 (Bounded Full Gradient). The second moment of the global full gradient is bounded, i.e., ‖∇f(x)‖_2^2 ≤ G^2 for all x ∈ R^d.

Assumption 5 (Karimireddy et al., 2019). The compression function C(·): R^d → R^d is a δ-approximate compressor for 0 < δ ≤ 1 if, for all v ∈ R^d, ‖C(v) − v‖_2^2 ≤ (1 − δ)‖v‖_2^2.

4.1 SGD

Theorem 1. For OLCO3-VQ with vanilla SGD and under Assumptions 1, 2, 3, 4, and 5, if the learning rate satisfies η ≤ min{1/(6L(s+1)p), 1/(9L)}, then
(1/T) ∑_{t=0}^{T−1} E‖∇f((1/K) ∑_{k=1}^K x_t^(k))‖_2^2 ≤ 6(f(x_0) − f*)/(ηT) + 9ηLσ^2/K + 12η^2 L^2 (s+1) p σ^2 [1 + 14(1−δ)(s+1)p/δ^2] + 36η^2 L^2 (s+1)^2 p^2 κ^2 (1 + 5(1−δ)/δ^2) + 168(1−δ) η^2 L^2 (s+1)^2 p^2 G^2/δ^2.   (4)

If we set the learning rate η = O(K^(1/2) T^(−1/2)) and the communication interval p = O(K^(−3/4) T^(1/4) (s+1)^(−1)), the convergence rate is O(K^(−1/2) T^(−1/2)). This O(K^(−1/2) T^(−1/2)) rate is the same as that of synchronous SGD and Local SGD, and achieves a linear speedup with respect to the number of workers K.

Theorem 2. For OLCO3-TC with vanilla SGD and under Assumptions 1, 2, 3, 4, and 5, if the learning rate satisfies η ≤ min{1/(6L(s+1)p), 1/(9L)}, and letting h(δ) = (1−δ)/δ^2 · (1 + 4(2−δ)/δ^2), then
(1/T) ∑_{t=0}^{T−1} E‖∇f((1/K) ∑_{k=1}^K x_t^(k))‖_2^2 ≤ 6(f(x_0) − f*)/(ηT) + 9ηLσ^2/K + 12η^2 p σ^2 (s+1 + 80h(δ)p) + 12η^2 p^2 κ^2 (3(s+1)^2 + 80h(δ)) + 960η^2 p^2 G^2 h(δ).   (5)

If we set the learning rate η = O(K^(1/2) T^(−1/2)) and the communication interval p = O(K^(−3/4) T^(1/4) (s+1)^(−1)), the convergence rate is O(K^(−1/2) T^(−1/2)). When the data distributions across workers are i.i.d. (i.e., κ = 0), if we instead choose the learning rate η = O(K^(1/2) T^(−1/2)) and the communication interval p = min{O(K^(−3/2) T^(1/2) (s+1)^(−1)), O(K^(−3/4) T^(1/4))} (which is p = O(K^(−3/4) T^(1/4)) for a large enough T), the convergence rate is still O(K^(−1/2) T^(−1/2)). Therefore, OLCO3-TC can tolerate a larger communication interval p (of order O(K^(−3/4) T^(1/4))) than OLCO3-VQ (of order O(K^(−3/4) T^(1/4) (s+1)^(−1))) in the i.i.d. setting, but the two are the same in the non-i.i.d. setting.
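Purely as an illustration of how these orders could be turned into concrete hyperparameters, the hypothetical helper below follows the scalings of Theorems 1 and 2; the constants c_eta and c_p are arbitrary, since the theory only fixes the dependence on K, T, and s.

```python
def schedule(K, T, s, c_eta=1.0, c_p=1.0):
    """Step size eta = O(K^(1/2) T^(-1/2)) and period p = O(K^(-3/4) T^(1/4) (s+1)^(-1))
    following Theorems 1-2; the two constants are illustrative choices only."""
    eta = c_eta * K ** 0.5 * T ** -0.5
    p = max(1, round(c_p * K ** -0.75 * T ** 0.25 / (s + 1)))
    return eta, p

print(schedule(K=8, T=20_000, s=1))   # -> (0.02, 1) with these illustrative constants
```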
4.2 MOMENTUM SGD

Theorem 3. For OLCO3-VQ with momentum SGD and under Assumptions 1, 2, 3, 4, and 5, if the learning rate satisfies η ≤ min{(1−µ)/(√72 L(s+1)p), (1−µ)/(9L)}, and letting g(µ, δ, s, p) = 15/(1−µ)^2 + 60(1−δ)(s+1)^2 p^2/δ^2, then
(1/T) ∑_{t=0}^{T−1} E‖∇f((1/K) ∑_{k=1}^K x_t^(k))‖_2^2 ≤ 6(1−µ)(f(x_0) − f*)/(ηT) + 9Lησ^2/((1−µ)K) + 4η^2 L^2/(1−µ)^2 · [(4(s+1)p + g(µ, δ, s, p))σ^2 + (12(s+1)^2 p^2 + g(µ, δ, s, p))κ^2 + g(µ, δ, s, p) G^2].   (6)

Theorem 4. For OLCO3-TC with momentum SGD and under Assumptions 1, 2, 3, 4, and 5, if the learning rate satisfies η ≤ min{(1−µ)/(√72 L(s+1)p), (1−µ)/(9L)}, and with h(δ) = (1−δ)/δ^2 · (1 + 4(2−δ)/δ^2), then
(1/T) ∑_{t=0}^{T−1} E‖∇f((1/K) ∑_{k=1}^K x_t^(k))‖_2^2 ≤ 6(1−µ)(f(x_0) − f*)/(ηT) + 9Lησ^2/((1−µ)K) + 6η^2 L^2/(1−µ)^2 · [σ^2 (9/(1−µ)^2 + 2(s+1)p + 168h(δ)p^2) + κ^2 (9/(1−µ)^2 + 6(s+1)^2 p^2 + 168h(δ)p^2) + G^2 (9/(1−µ)^2 + 168h(δ)p^2)].   (7)

The same convergence rate and communication interval p are achieved as in Section 4.1.

5 EXPERIMENTS

We compare the following methods: 1) Local SGD (the baseline, with no delay tolerance, T = 0); 2) Pipe-SGD; 3) CoCoD-SGD; 4) OverlapLocalSGD with hyperparameters following Wang et al. (2020); 5) OLCO3-OC with SignSGD compression; 6) OLCO3-VQ with PowerSGD compression; 7) OLCO3-TC with SignSGD compression. The momentum buffer is maintained (line 6 of Algorithm 1) by default. We do not report the results of Pipe-SGD, as it does not converge for the large delay tolerance T we experimented with. We train ResNet-110 (He et al., 2016) with 8 workers on the CIFAR-10 (Krizhevsky et al., 2009) image classification task, and report the mean and standard deviation of the test accuracy over 3 runs in both the i.i.d. and non-i.i.d. settings. We also train ResNet-50 with 16 workers on the ImageNet (Russakovsky et al., 2015) image classification task. More detailed descriptions of the experiment configurations can be found in Appendix A.1.

Delay Tolerance with a Lower Communication Budget. The training curves of ResNet-110 on CIFAR-10 and ResNet-50 on ImageNet are shown in Figure 1. We use s = 1 because CoCoD-SGD and OverlapLocalSGD do not support s ≥ 2. Compared with other delay-tolerant methods, the communication budget of the OLCO3 variants is significantly smaller due to compressed communication. OLCO3 is also robust to communication delay with a large T = sp. Therefore, OLCO3 features extreme communication efficiency through compressed communication, delay tolerance, and a low communication frequency due to periodic averaging.

Better Model Performance. The two plots in the first row of Figure 1 show that OLCO3-OC and OLCO3-TC outperform other delay-tolerant methods and are comparable to Local SGD in model accuracy. The performance of OLCO3-VQ is similar to CoCoD-SGD but inferior to OverlapLocalSGD. However, in the non-i.i.d. results reported in Table 2, all OLCO3 variants outperform the existing delay-tolerant methods in accuracy. This is in line with the theoretical results in Theorems 1, 2, 3, and 4, which show that OLCO3-TC can tolerate a larger p than OLCO3-VQ in the i.i.d. setting, while the two methods are similar in the non-i.i.d. setting. In the non-i.i.d. setting, all OLCO3 variants perform very close to Local SGD. On average, OLCO3-OC and OLCO3-TC improve the test accuracy of CoCoD-SGD and OverlapLocalSGD by 2.0% and 0.8%, respectively. OLCO3-VQ improves over CoCoD-SGD and OverlapLocalSGD by 1.6% and 0.4%. These results empirically confirm that the staleness compensation and compression compensation techniques in OLCO3 are effective.
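For intuition about the communication-budget gap, the following back-of-the-envelope sketch (ours; the numbers are illustrative rather than the paper's measured budgets, and the parameter count is only roughly that of ResNet-50) compares the per-round payload of full-precision updates, 1-bit sign compression with one scale per tensor, and rank-r PowerSGD factors when the whole update is treated as a single matrix.

```python
def payload_mb(num_params=25_557_032, scheme="full", rank=4, factor_shape=None):
    """Approximate megabytes sent per worker per communication round.
    25,557,032 is roughly the parameter count of ResNet-50 (illustrative only)."""
    if scheme == "full":
        bits = 32 * num_params
    elif scheme == "sign":
        bits = num_params + 32                 # 1 bit per entry plus one float scale
    elif scheme == "powersgd":
        n, m = factor_shape                    # update reshaped to an n x m matrix
        bits = 32 * rank * (n + m)             # two low-rank factors
    return bits / 8 / 1e6

print(payload_mb())                                                        # ~102 MB, full precision
print(payload_mb(scheme="sign"))                                           # ~3.2 MB
print(payload_mb(scheme="powersgd", rank=4, factor_shape=(25_557, 1000)))  # ~0.4 MB
```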
Varying Delay Tolerance. We vary the delay tolerance T with the staleness fixed at s = 1 in the left plot of Figure 2. The goal is to check the robustness of OLCO3 to different periods p. The results show that OLCO3-OC and OLCO3-TC always outperform other delay-tolerant methods and remain the closest to Local SGD. Note that both OLCO3-OC and OLCO3-TC also provide a significantly smaller communication budget according to Figure 1. OLCO3-VQ likewise outperforms CoCoD-SGD with a much smaller communication budget.

Varying Staleness. We vary the staleness s of OLCO3 in the right plot of Figure 2 under a fixed delay tolerance T. Local SGD only supports s = 0 with no delay tolerance, and CoCoD-SGD and OverlapLocalSGD only support s = 1, so there is only one result for each of them in the figure. When increasing the staleness beyond 2 for OLCO3, the deterioration of the model performance is very small, especially for OLCO3-VQ. This suggests that the staleness compensation techniques in OLCO3 are effective. The performance peaks at s = 2, possibly because an appropriate amount of staleness introduces noise that helps generalization. In comparison, the staleness s cannot be tuned for better performance in CoCoD-SGD and OverlapLocalSGD.

6 CONCLUSION

In this work, we proposed a new OLCO3 framework to achieve extreme communication efficiency with high delay tolerance and a low communication budget in distributed training. OLCO3 uses novel staleness compensation and compression compensation techniques, and the theoretical results show that it converges as fast as vanilla synchronous SGD. Experimental results show that OLCO3 significantly outperforms existing delay-tolerant methods in terms of both the communication budget and the model performance.

REFERENCES

Alham Fikri Aji and Kenneth Heafield. Sparse communication for distributed gradient descent. arXiv preprint arXiv:1704.05021, 2017.

Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: Communication-efficient SGD via gradient quantization and encoding. In Advances in Neural Information Processing Systems, pp. 1709–1720, 2017.

Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov, Sarit Khirirat, and Cédric Renggli. The convergence of sparsified gradient methods. In Advances in Neural Information Processing Systems, pp. 5973–5983, 2018.

Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, and Animashree Anandkumar. signSGD: Compressed optimisation for non-convex problems. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 560–569, Stockholmsmässan, Stockholm, Sweden, 10–15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/bernstein18a.html.

Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
Qirong Ho , James Cipar , Henggang Cui , Seunghak Lee , Jin Kyu Kim , Phillip B Gibbons , Garth A Gibson , Greg Ganger , and Eric P Xing . More effective distributed ml via a stale synchronous parallel parameter server . In Advances in neural information processing systems , pp . 1223–1231 , 2013 . Zhouyuan Huo , Bin Gu , and Heng Huang . Large batch training does not need warmup . arXiv preprint arXiv:2002.01576 , 2020 . Sai Praneeth Karimireddy , Quentin Rebjock , Sebastian Stich , and Martin Jaggi . Error feedback fixes signsgd and other gradient compression schemes . In International Conference on Machine Learning , pp . 3252–3261 , 2019 . Jakub Konečnỳ , H Brendan McMahan , Felix X Yu , Peter Richtárik , Ananda Theertha Suresh , and Dave Bacon . Federated learning : Strategies for improving communication efficiency . arXiv preprint arXiv:1610.05492 , 2016 . Alex Krizhevsky , Geoffrey Hinton , et al . Learning multiple layers of features from tiny images . 2009 . Youjie Li , Mingchao Yu , Songze Li , Salman Avestimehr , Nam Sung Kim , and Alexander Schwing . Pipe-sgd : A decentralized pipelined sgd framework for distributed deep net training . In Advances in Neural Information Processing Systems , pp . 8045–8056 , 2018 . Xiangru Lian , Yijun Huang , Yuncheng Li , and Ji Liu . Asynchronous parallel stochastic gradient for nonconvex optimization . In Advances in Neural Information Processing Systems , pp . 2737–2745 , 2015 . Tao Lin , Sebastian U Stich , Kumar Kshitij Patel , and Martin Jaggi . Don ’ t use large mini-batches , use local sgd . arXiv preprint arXiv:1808.07217 , 2018a . Yujun Lin , Song Han , Huizi Mao , Yu Wang , and Bill Dally . Deep gradient compression : Reducing the communication bandwidth for distributed training . In International Conference on Learning Representations , 2018b . URL https : //openreview.net/forum ? id=SkhQHMW0W . Ilya Loshchilov and Frank Hutter . Sgdr : Stochastic gradient descent with warm restarts . arXiv preprint arXiv:1608.03983 , 2016 . Brendan McMahan , Eider Moore , Daniel Ramage , Seth Hampson , and Blaise Aguera y Arcas . Communication-efficient learning of deep networks from decentralized data . In Artificial Intelligence and Statistics , pp . 1273–1282 . PMLR , 2017 . Adam Paszke , Sam Gross , Francisco Massa , Adam Lerer , James Bradbury , Gregory Chanan , Trevor Killeen , Zeming Lin , Natalia Gimelshein , Luca Antiga , et al . Pytorch : An imperative style , highperformance deep learning library . In Advances in Neural Information Processing Systems , pp . 8024–8035 , 2019 . Olga Russakovsky , Jia Deng , Hao Su , Jonathan Krause , Sanjeev Satheesh , Sean Ma , Zhiheng Huang , Andrej Karpathy , Aditya Khosla , Michael Bernstein , Alexander C. Berg , and Li Fei-Fei . ImageNet Large Scale Visual Recognition Challenge . International Journal of Computer Vision ( IJCV ) , 115 ( 3 ) :211–252 , 2015. doi : 10.1007/s11263-015-0816-y . Frank Seide , Hao Fu , Jasha Droppo , Gang Li , and Dong Yu . 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech dnns . In Fifteenth Annual Conference of the International Speech Communication Association , 2014 . Shuheng Shen , Linli Xu , Jingchang Liu , Xianfeng Liang , and Yifei Cheng . Faster distributed deep net training : Computation and communication decoupled stochastic gradient descent . arXiv preprint arXiv:1906.12043 , 2019 . Sebastian U. Stich . Local SGD converges fast and communicates little . 
In International Conference on Learning Representations , 2019 . URL https : //openreview.net/forum ? id= S1g2JnRcFX . Sebastian U Stich , Jean-Baptiste Cordonnier , and Martin Jaggi . Sparsified sgd with memory . In Advances in Neural Information Processing Systems , pp . 4447–4458 , 2018 . Thijs Vogels , Sai Praneeth Karimireddy , and Martin Jaggi . Powersgd : Practical low-rank gradient compression for distributed optimization . In Advances in Neural Information Processing Systems , pp . 14259–14268 , 2019 . Jianyu Wang , Vinayak Tantia , Nicolas Ballas , and Michael Rabbat . Slowmo : Improving communication-efficient distributed sgd with slow momentum . arXiv preprint arXiv:1910.00643 , 2019 . Jianyu Wang , Hao Liang , and Gauri Joshi . Overlap local-sgd : An algorithmic approach to hide communication delays in distributed sgd . In ICASSP 2020-2020 IEEE International Conference on Acoustics , Speech and Signal Processing ( ICASSP ) , pp . 8871–8875 . IEEE , 2020 . Wei Wen , Cong Xu , Feng Yan , Chunpeng Wu , Yandan Wang , Yiran Chen , and Hai Li . Terngrad : Ternary gradients to reduce communication in distributed deep learning . In Advances in neural information processing systems , pp . 1509–1519 , 2017 . Jiaxiang Wu , Weidong Huang , Junzhou Huang , and Tong Zhang . Error compensated quantized sgd and its applications to large-scale distributed optimization . arXiv preprint arXiv:1806.08054 , 2018 . An Xu , Zhouyuan Huo , and Heng Huang . Training faster with compressed gradient . arXiv preprint arXiv:2008.05823 , 2020 . Yang You , Igor Gitman , and Boris Ginsburg . Scaling sgd batch size to 32k for imagenet training . arXiv preprint arXiv:1708.03888 , 6 , 2017 . Mingchao Yu , Zhifeng Lin , Krishna Narra , Songze Li , Youjie Li , Nam Sung Kim , Alexander Schwing , Murali Annavaram , and Salman Avestimehr . Gradiveq : Vector quantization for bandwidth-efficient gradient aggregation in distributed cnn training . In Advances in Neural Information Processing Systems , pp . 5123–5133 , 2018 . Shuai Zheng , Ziyue Huang , and James Kwok . Communication-efficient distributed blockwise momentum sgd with error-feedback . In Advances in Neural Information Processing Systems , pp . 11446–11456 , 2019 . A ADDITIONAL EXPERIMENTAL RESULTS A.1 EXPERIMENTAL SETTING All experiments are implemented with PyTorch ( Paszke et al. , 2019 ) and run on a cluster of Nvidia Tesla P40 GPUs . Each node is connected by 40Gbps Ethernet and equipped with 4 GPUs . CIFAR . We train the ResNet-110 ( He et al. , 2016 ) model with 8 workers on CIFAR-10 ( Krizhevsky et al. , 2009 ) image classification task . We report the mean and standard deviation metrics over 3 runs . The base learning rate is 0.4 and the total batch size is 512 . The momentum constant is 0.9 and the weight decay is 1× 10−4 . The model is trained for 200 epochs with a learning rate decay of 0.1 at epoch 100 and 150 . We linearly warm up the learning rate from 0.05 to 0.4 in the beginning 5 epochs . For OLCO3 with staleness s ∈ { 2 , 4 , 8 } , we set the base learning rate to 0.2 due to increased staleness . The rank of PowerSGD is 4 . Random cropping , random flipping , and standardization are applied as data augmentation techniques . We also train ResNet-56 to explore more combinations of s and p in Appendix A.4 with the same other settings . ImageNet . We train the ResNet-50 model with 16 workers on ImageNet ( Russakovsky et al. , 2015 ) image classification tasks . 
The model is trained for 120 epochs with a cosine learning rate scheduling ( Loshchilov & Hutter , 2016 ) . The base learning rate is 0.4 and the total batch size is 2048 . The momentum constant is 0.9 and the weight decay is 1 × 10−4 . We linearly warm up the learning rate from 0.025 to 0.4 in the beginning 5 epochs . The rank of PowerSGD is 50 . Random cropping , random flipping , and standardization are applied as data augmentation techniques . The Non-i.i.d . Setting . Similar to ( Wang et al. , 2020 ) , we randomly choose fraction α of the whole data , sort the data by the class , and evenly assign them to all workers in order . For the rest fraction ( 1− α ) of the whole data , we randomly and evenly distribute them to all workers ( Figure 3 ) . When 0 < α ≤ 1 is large , the data distribution across workers is non-i.i.d and highly skewed . When α = 0 , it becomes i.i.d . data distribution across workers . In our non-i.i.d . experiments , we choose α = 0.8 . A.2 TRAINING CURVE A.3 TEST ACCURACY Again , Figure 6 empirically confirms the theoretical results in Theorems 1 , 2 , 3 , and 4 that OLCO3TC can handle a larger period p than OLCO3 and that this gap increases with the staleness s in the i.i.d . setting . Note that in the right plot of Figure 2 , the gap between OLCO3-TC and OLCO3-VQ does not increase with s because the period p is decreasing ( the delay tolerance T = sp is fixed ) . B ASSUMPTIONS Assumption 1 . ( L-Lipschitz Smoothness ) Both the local ( fk ( · ) ) and global ( f ( · ) = 1K ∑K k=1 fk ( · ) ) loss functions are L-smooth , i.e. , ‖∇f ( x ) −∇f ( y ) ‖2 ≤ L‖x− y‖2 , ∀x , y ∈ Rd , ( 8 ) ‖∇fk ( x ) −∇fk ( y ) ‖2 ≤ L‖x− y‖2 , ∀k ∈ [ K ] , ∀x , y ∈ Rd . ( 9 ) Assumption 2 . ( Local Bounded Variance ) The local stochastic gradient ∇Fk ( x ; ξ ) has a bounded variance , i.e. , Eξ∼Dk‖∇Fk ( x ; ξ ) −∇fk ( x ) ‖22 ≤ σ2 , ∀k ∈ [ K ] , ∀x ∈ Rd . ( 10 ) Note that Eξ∼Dk∇Fk ( x ; ξ ) = ∇fk ( x ) . Assumption 3 . ( Bounded Variance across Workers ) The L2 norm of the difference of the local and global full gradient is bounded , i.e. , ‖∇fk ( x ) −∇f ( x ) ‖22 ≤ κ2 , ∀k ∈ [ K ] , ∀x ∈ Rd , ( 11 ) where κ = 0 leads to i.i.d . data distributions across workers . Assumption 4 . ( Bounded Full Gradient ) The second moment of the global full gradient is bounded , i.e. , ‖∇f ( x ) ‖22 ≤ G2 , ∀x ∈ Rd . ( 12 ) Assumption 5 . ( δ-approximate compressor ) The compression function C ( · ) : Rd → R is a δapproximate compressor for 0 < δ ≤ 1 if for all v ∈ Rd , ‖C ( v ) − v‖22 ≤ ( 1− δ ) ‖v‖22 . ( 13 ) C BASIC LEMMAS Lemma 1 . For OLCO3-TC , let x̃t : = 1K ∑K k=1 x ( k ) t − 1K ∑K k=1 e ( k ) t − et−sp , then we have x̃t = x̃t−1 − ηt−1 K K∑ k=1 m ( k ) t . ( 14 ) Proof . 
For t = np where n is some integer , x̃np = 1 K K∑ k=1 x ( k ) np − 1 K K∑ k=1 e ( k ) np − e ( n−s ) p = xnp − 1 K K∑ k=1 s−1∑ i=0 C ( ∆ ( k ) ( n−i ) p ) − 1 K K∑ k=1 e ( k ) np − e ( n−s ) p = x ( n−1 ) p − C ( ∆ ( n−s ) p ) − 1 K K∑ k=1 s−1∑ i=0 C ( ∆ ( k ) ( n−i ) p ) − 1 K K∑ k=1 e ( k ) np − e ( n−s ) p = 1 K K∑ k=1 ( x ( k ) ( n−1 ) p + s−1∑ i=0 C ( ∆ ( k ) ( n−1−i ) p ) ) − C ( ∆ ( n−s ) p ) − 1 K K∑ k=1 s−1∑ i=0 C ( ∆ ( k ) ( n−i ) p ) − 1 K K∑ k=1 e ( k ) np − e ( n−s ) p = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 ∆ ( k ) np + 1 K K∑ k=1 C ( ∆ ( k ) ( n−s ) p ) − C ( ∆ ( n−s ) p ) − e ( n−s ) p = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 ∆ ( k ) np − e ( n−s ) p−1 = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 np−1∑ τ= ( n−1 ) p ητm ( k ) τ+1 − 1 K K∑ k=1 e ( k ) np−1 − e ( n−s ) p−1 = x̃np−1 − 1 K K∑ k=1 ηnp−1m ( k ) np . ( 15 ) For t 6= np , x̃t = 1 K K∑ k=1 x ( k ) t − 1 K K∑ k=1 e ( k ) t − et−sp = 1 K K∑ k=1 x ( k ) t − 1 K K∑ k=1 e ( k ) t−1 − et−sp−1 = x̃t−1 − ηt−1 K K∑ k=1 m ( k ) t . ( 16 ) Lemma 2 . For OLCO3-OC , let x̃t : = 1K ∑K k=1 x ( k ) t − 1K ∑K k=1 e ( k ) t , then we have x̃t = x̃t−1 − ηt−1 K K∑ k=1 m ( k ) t . ( 17 ) Proof . For t = np where n is some integer , x̃np = 1 K K∑ k=1 x ( k ) np − 1 K K∑ k=1 e ( k ) np = xnp − 1 K K∑ k=1 s−1∑ i=0 C ( ∆ ( k ) ( n−i ) p ) − 1 K K∑ k=1 e ( k ) np = x ( n−1 ) p − 1 K K∑ k=1 C ( ∆ ( k ) ( n−s ) p ) − 1 K K∑ k=1 s−1∑ i=0 C ( ∆ ( k ) ( n−i ) p ) − 1 K K∑ k=1 e ( k ) np = 1 K K∑ k=1 ( x ( k ) ( n−1 ) p + s−1∑ i=0 C ( ∆ ( k ) ( n−1−i ) p ) ) − 1 K K∑ k=1 C ( ∆ ( k ) ( n−s ) p ) − 1 K K∑ k=1 s−1∑ i=0 C ( ∆ ( k ) ( n−i ) p ) − 1 K K∑ k=1 e ( k ) np = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 C ( ∆ ( k ) np ) − 1 K K∑ k=1 e ( k ) np = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 ∆ ( k ) np = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 np−1∑ τ= ( n−1 ) p ητm ( k ) τ+1 − 1 K K∑ k=1 e ( k ) np−1 = x̃np−1 − 1 K K∑ k=1 ηnp−1m ( k ) np . ( 18 ) For t 6= np , x̃t = 1 K K∑ k=1 x ( k ) t − 1 K K∑ k=1 e ( k ) t = 1 K K∑ k=1 x ( k ) t − 1 K K∑ k=1 e ( k ) t−1 = x̃t−1 − ηt−1 K K∑ k=1 m ( k ) t . ( 19 ) Lemma 3 . For OLCO3-VQ , let x̃t : = 1K ∑K k=1 x ( k ) t − 1K ∑K k=1 e ( k ) t−sp , then we have x̃t = x̃t−1 − ηt−1 K K∑ k=1 m ( k ) t . ( 20 ) Proof . For t = np where n is some integer , x̃np = 1 K K∑ k=1 x ( k ) np − 1 K K∑ k=1 e ( k ) ( n−s ) p = xnp − 1 K K∑ k=1 s−1∑ i=0 ∆ ( k ) ( n−i ) p − 1 K K∑ k=1 e ( k ) ( n−s ) p = x ( n−1 ) p − 1 K K∑ k=1 C ( ∆ ( k ) ( n−s ) p ) − 1 K K∑ k=1 s−1∑ i=0 ∆ ( k ) ( n−i ) p − 1 K K∑ k=1 e ( k ) ( n−s ) p = 1 K K∑ k=1 ( x ( k ) ( n−1 ) p + s−1∑ i=0 ∆ ( k ) ( n−1−i ) p ) − 1 K K∑ k=1 C ( ∆ ( k ) ( n−s ) p ) − 1 K K∑ k=1 s−1∑ i=0 ∆ ( k ) ( n−i ) p − 1 K K∑ k=1 e ( k ) ( n−s ) p = 1 K K∑ k=1 ( x ( k ) ( n−1 ) p + s−1∑ i=0 ∆ ( k ) ( n−1−i ) p ) − 1 K K∑ k=1 ∆ ( k ) ( n−s ) p − 1 K K∑ k=1 s−1∑ i=0 ∆ ( k ) ( n−i ) p = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 ∆ ( k ) np = 1 K K∑ k=1 x ( k ) ( n−1 ) p − 1 K K∑ k=1 np−1∑ τ= ( n−1 ) p ητm ( k ) τ+1 − 1 K K∑ k=1 e ( k ) np−1−sp = x̃np−1 − 1 K K∑ k=1 ηnp−1m ( k ) np . ( 21 ) For t 6= np , x̃t = 1 K K∑ k=1 x ( k ) t − 1 K K∑ k=1 e ( k ) t−sp = 1 K K∑ k=1 x ( k ) t − 1 K K∑ k=1 e ( k ) t−1−sp = x̃t−1 − ηt−1 K K∑ k=1 m ( k ) t . ( 22 ) D PROOF OF THEOREM 1 Lemma 4 . For OLCO3-VQ with vanilla SGD and under Assumptions 2 , 3 , 4 , and 5 , the local error satisfies E‖e ( k ) t ‖22 ≤ 12 ( 1− δ ) δ2 p2η2 ( σ2 + κ2 +G2 ) . ( 23 ) Proof . 
First we have E‖∇F ( x ( k ) t ; ξ ( k ) t ) ‖22 ≤ 3E‖∇F ( x ( k ) t ; ξ ( k ) t ) −∇fk ( x ( k ) t ) ‖2 + 3E‖∇fk ( x ( k ) t ) −∇f ( x ( k ) t ) ‖22 + 3E‖∇f ( x ( k ) t ) ‖22 ≤ 3σ2 + 3κ2 + 3G2 . ( 24 ) Let St = b tpc , E‖e ( k ) t ‖22 = E‖e ( k ) Stp ‖22 = E‖C ( ∆ ( k ) Stp ) −∆ ( k ) Stp‖ 2 2 ≤ ( 1− δ ) E‖∆ ( k ) Stp ‖22 = ( 1− δ ) E‖ Stp−1∑ t′= ( St−1 ) p η∇F ( x ( k ) t′ ; ξ ( k ) t′ ) + e ( k ) ( St−s−1 ) p‖ 2 2 ≤ ( 1− δ ) ( 1 + ρ ) E‖e ( k ) ( St−s−1 ) p‖ 2 2 + ( 1 + δ ) ( 1 + 1 ρ ) E‖ Stp−1∑ t′= ( St−1 ) p η∇F ( x ( k ) t′ ; ξ ( k ) t′ ) ‖ 2 2 ≤ ( 1− δ ) ( 1 + ρ ) E‖e ( k ) ( St−s−1 ) p‖ 2 2 + 3 ( 1 + δ ) ( 1 + 1 ρ ) p2η2 ( σ2 + κ2 +G2 ) . ( 25 ) Therefore , E‖e ( k ) t ‖22 ≤ 3 ( 1− δ ) ( 1 + 1 ρ ) p2η2 ( σ2 + κ2 +G2 ) bSts c−1∑ i=0 [ ( 1− δ ) ( 1 + ρ ) ] i ≤ 3 ( 1− δ ) ( 1 + 1ρ ) 1− ( 1− δ ) ( 1 + ρ ) p2η2 ( σ2 + κ2 +G2 ) . ( 26 ) Let ρ = δ2 ( 1−δ ) such that 1 + 1 ρ = 2−δ δ ≤ 2 δ , then E‖e ( k ) t ‖22 ≤ 12 ( 1−δ ) δ2 p 2η2 ( σ2 + κ2 +G2 ) . Lemma 5 . For OLCO3-VQ with vanilla SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ 16L ( s+1 ) p , we have 1 KT T−1∑ t=0 K−1∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 3η2 ( s+ 1 ) pσ2 ( 1 + 12 ( 1− δ ) δ2 ( s+ 1 ) p ) + 9η2 ( s+ 1 ) 2p2κ2 ( 1 + 4 ( 1− δ ) δ2 ) + 36 ( 1− δ ) δ2 η2 ( s+ 1 ) 2p2G2 . ( 27 ) Proof . Let St = b tpc , 1 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 = 1 K K∑ k=1 E‖ 1 K K∑ k′=1 ( − s−1∑ i=0 ∆ ( k′ ) ( St−i ) p − t−1∑ t′=Stp η∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) ) − ( − s−1∑ i=0 ∆ ( k ) ( St−i ) p − t−1∑ t′=Stp η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ) − 1 K K∑ k′=1 e ( k ′ ) t−sp‖22 = 1 K K∑ k=1 E‖ − 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p η∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) + t−1∑ t′= ( St−s ) p η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) − 1 K K∑ k′=1 e ( k ′ ) t−sp − 1 K K∑ k′=1 s−1∑ i=0 e ( k ′ ) ( St−i−s−1 ) p + s−1∑ i=0 e ( k ) ( St−i−s−1 ) p‖ 2 2 ≤ 2η 2 K K∑ k=1 E‖ − 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) + t−1∑ t′= ( St−s ) p ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ‖ 2 2 + 2 K K∑ k=1 E‖ − 1 K K∑ k′=1 e ( k ′ ) t−sp − 1 K K∑ k′=1 s−1∑ i=0 e ( k ′ ) ( St−i−s−1 ) p + s−1∑ i=0 e ( k ) ( St−i−s−1 ) p‖ 2 2 . 
( 28 ) The first term is bounded by 2η2 K K∑ k=1 E‖ − 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) + t−1∑ t′= ( St−s ) p ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ‖ 2 2 ≤ 2η 2 K K∑ k=1 E ∥∥∥∥∥∥ t−1∑ t′= ( St−s ) p ( − 1 K K∑ k′=1 ( ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) −∇fk′ ( x ( k′ ) t′ ) ) + ( ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) −∇fk ( x ( k ) t′ ) ) ) ∥∥∥∥∥∥ 2 2 + 2η2 K K∑ k=1 E‖ t−1∑ t′= ( St−s ) p ( − 1 K K∑ k′=1 ∇fk′ ( x ( k ′ ) t′ ) +∇fk ( x ( k ) t′ ) ) ‖ 2 2 = 2η2 K K∑ k=1 t−1∑ t′= ( St−s ) p E‖ − 1 K K∑ k′=1 ( ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) −∇fk′ ( x ( k′ ) t′ ) ) + ( ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) −∇fk ( x ( k ) t′ ) ) ‖ 2 2 + 2η2 K K∑ k=1 E‖ t−1∑ t′= ( St−s ) p ( − 1 K K∑ k′=1 ∇fk′ ( x ( k ′ ) t′ ) +∇fk ( x ( k ) t′ ) ) ‖ 2 2 ≤ 2η 2 K K∑ k=1 t−1∑ t′= ( St−s ) p E‖∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) −∇fk ( x ( k ) t′ ) ‖ 2 2 + 2η2 K K∑ k=1 t−1∑ t′= ( St−s ) p ( t− ( St − s ) p ) E‖ − 1 K K∑ k′=1 ∇fk′ ( x ( k ′ ) t′ ) +∇fk ( x ( k ) t′ ) ‖ 2 2 ≤ 2η2 ( s+ 1 ) pσ2 + 2η 2 ( s+ 1 ) p K t−1∑ t′=t− ( s+1 ) p K∑ k=1 E‖ − 1 K K∑ k′=1 ∇fk′ ( x ( k ′ ) t′ ) +∇fk ( x ( k ) t′ ) ‖ 2 2 , ( 29 ) where the third inequality follows 1K ∑K k=1 ‖ 1 K ∑K k′=1 ak′ − ak‖22 = 1 K ∑K k=1 ‖ak‖22 − ‖ 1K ∑K k=1 ak‖22 ≤ 1 K ∑K k=1 ‖ak‖22 , and 1 K K∑ k=1 E‖ − 1 K K∑ k′=1 ∇fk′ ( xk ′ t ) +∇fk ( x ( k ) t ) ‖22 = 3 K K∑ k=1 E [ ‖∇fk ( x ( k ) t ) −∇fk ( x̃t ) ‖22 + ‖∇fk ( x̃t ) −∇f ( x̃t ) ‖22 + ‖∇f ( x̃t ) − 1 K K∑ k′=1 ∇fk′ ( xk ′ t ) ‖22 ] ≤ 3 K K∑ k=1 E [ L2‖x̃t − x ( k ) t ‖22 + κ2 + 1 K K∑ k′=1 ‖∇fk′ ( x̃t ) −∇fk′ ( xk ′ t ) ‖22 ] ≤ 6L 2 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 + 3κ2 . ( 30 ) The second term is bounded by 2 K K∑ k=1 E‖ − 1 K K∑ k′=1 e ( k ′ ) t−sp − 1 K K∑ k′=1 s−1∑ i=0 e ( k ′ ) ( St−i−s−1 ) p + s−1∑ i=0 e ( k ) ( St−i−s−1 ) p‖ 2 2 ≤ 2 ( 1 + s ) K K∑ k=1 E‖ 1 K K∑ k′=1 e ( k ′ ) ( St−s ) p‖ 2 2 + 2 ( 1 + 1s ) K K∑ k=1 E‖ − 1 K K∑ k′=1 s−1∑ i=0 e ( k ′ ) ( St−i−s−1 ) p + s−1∑ i=0 e ( k ) ( St−i−s−1 ) p‖ 2 2 ≤ 2 ( 1 + s ) K K∑ k=1 E‖e ( k ) ( St−s ) p‖ 2 2 + 2 ( 1 + 1s ) K K∑ k=1 E‖ s−1∑ i=0 e ( k ) ( St−i−s−1 ) p‖ 2 2 ≤ 2 ( 1 + s ) K K∑ k=1 E‖e ( k ) ( St−s ) p‖ 2 2 + 2 ( 1 + s ) K K∑ k=1 s−1∑ i=0 E‖e ( k ) ( St−i−s−1 ) p‖ 2 2 = 2 ( s+ 1 ) K K∑ k=1 s∑ i=0 E‖e ( k ) ( St−i−s ) p‖ 2 2 ≤ 24 ( 1− δ ) δ2 ( s+ 1 ) 2p2η2 ( σ2 + κ2 +G2 ) , ( 31 ) where the last inequality follows Lemma 4 . Combine the bounds of the first term and the second term , we have 1 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 2η2 ( s+ 1 ) pσ2 + 2η2 ( s+ 1 ) p t−1∑ t′=t− ( s+1 ) p ( 6L2 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 + 3κ2 ) + 24 ( 1− δ ) δ2 ( s+ 1 ) 2p2η2 ( σ2 + κ2 +G2 ) ≤ 2η2 ( s+ 1 ) pσ2 ( 1 + 12 ( 1− δ ) δ2 ( s+ 1 ) p ) + 6η2 ( s+ 1 ) 2p2κ2 ( 1 + 4 ( 1− δ ) δ2 ) + 24 ( 1− δ ) δ2 η2 ( s+ 1 ) 2p2G2 + 12η2L2 ( s+ 1 ) p t−1∑ t′=t− ( s+1 ) p 1 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 . ( 32 ) Sum the above inequality from t = 0 to t = T − 1 and divide it by T , 1 KT T−1∑ t=0 K−1∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 2η2 ( s+ 1 ) pσ2 ( 1 + 12 ( 1− δ ) δ2 ( s+ 1 ) p ) + 6η2 ( s+ 1 ) 2p2κ2 ( 1 + 4 ( 1− δ ) δ2 ) + 24 ( 1− δ ) δ2 η2 ( s+ 1 ) 2p2G2 + 12η2L2 ( s+ 1 ) 2p2 · 1 KT T−1∑ t=0 K−1∑ k=1 E‖x̃t − x ( k ) t ‖22 . ( 33 ) Therefore , 1 KT T−1∑ t=0 K−1∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 2η2 ( s+ 1 ) pσ2 ( 1 + 12 ( 1−δ ) δ2 ( s+ 1 ) p ) + 6η 2 ( s+ 1 ) 2p2κ2 ( 1 + 4 ( 1−δ ) δ2 ) + 24 ( 1−δ ) δ2 η 2 ( s+ 1 ) 2p2G2 1− 12η2L2 ( s+ 1 ) 2p2 . ( 34 ) If we choose η ≤ 16L ( s+1 ) p , 1 KT T−1∑ t=0 K−1∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 3η2 ( s+ 1 ) pσ2 ( 1 + 12 ( 1− δ ) δ2 ( s+ 1 ) p ) + 9η2 ( s+ 1 ) 2p2κ2 ( 1 + 4 ( 1− δ ) δ2 ) + 36 ( 1− δ ) δ2 η2 ( s+ 1 ) 2p2G2 . 
( 35 ) Theorem 1 . For OLCO3-VQ with vanilla SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ min { 16L ( s+1 ) p , 1 9L } , then 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 ( f ( x0 ) − f∗ ) ηT + 9ηLσ2 K + 12η2L2 ( s+ 1 ) pσ2 ( 1 + 14 ( 1− δ ) δ2 ( s+ 1 ) p ) + 36η2L2 ( s+ 1 ) 2p2κ2 ( 1 + 5 ( 1− δ ) δ2 ) + 168 ( 1− δ ) δ2 η2L2 ( s+ 1 ) 2p2G2 . ( 36 ) Proof . According to Assumption 1 , Etf ( x̃t+1 ) − f ( x̃t ) ≤ Et〈∇f ( x̃t ) , x̃t+1 − x̃t〉+ L 2 Et‖x̃t+1 − x̃t‖22 = −η〈∇f ( x̃t ) , 1 K K∑ k=1 ∇fk ( x ( k ) t ) 〉+ Lη2 2 Et‖ 1 K K∑ k=1 ∇Fk ( x ( k ) t ; ξ ( k ) t ) ‖22 . ( 37 ) For the first term , −〈∇f ( x̃t ) , 1 K K∑ k=1 ∇fk ( x ( k ) t ) 〉 = −‖∇f ( x̃t ) ‖22 − 〈∇f ( x̃t ) , 1 K K∑ k=1 ( ∇fk ( x ( k ) t ) −∇fk ( x̃t ) ) 〉 ≤ −1 2 ‖∇f ( x̃t ) ‖22 + 1 2 ‖ 1 K K∑ k=1 ( ∇fk ( x ( k ) t ) −∇fk ( x̃t ) ) ‖22 ≤ −1 2 ‖∇f ( x̃t ) ‖22 + L2 2K K∑ k=1 ‖x̃t − x ( k ) t ‖22 , ( 38 ) where the first equality follows that∇f ( x̃t ) = 1K ∑K k=1∇fk ( x̃t ) . For the second term , Et‖ 1 K K∑ k=1 ∇Fk ( x ( k ) t ; ξ ( k ) t ) ‖22 = Et‖ 1 K K∑ k=1 ( ∇Fk ( x ( k ) t ; ξ ( k ) t ) −∇fk ( x ( k ) t ) ) + 1 K K∑ k=1 ( ∇fk ( x ( k ) t ) −∇fk ( x̃t ) ) +∇f ( x̃t ) ‖22 ≤ 3Et‖ 1 K K∑ k=1 ( ∇Fk ( x ( k ) t ; ξ ( k ) t ) −∇fk ( x ( k ) t ) ) ‖22 + 3‖ 1 K K∑ k=1 ( ∇fk ( x ( k ) t ) −∇fk ( x̃t ) ) ‖22 + 3‖∇f ( x̃t ) ‖22 ≤ 3σ 2 K + 3L2 K K∑ k=1 ‖x̃t − x ( k ) t ‖22 + 3‖∇f ( x̃t ) ‖22 . ( 39 ) Combine them and we have Et ( x̃t+1 ) − f ( x̃t ) ≤ − η 2 ( 1− 3ηL ) ‖∇f ( x̃t ) ‖22 + ηL2 2 ( 1 + 3ηL ) 1 K K∑ k=1 ‖x̃t − x ( k ) t ‖22 + 3η2Lσ2 2K . ( 40 ) If we choose η ≤ 19L , Et ( x̃t+1 ) − f ( x̃t ) ≤ − η 3 ‖∇f ( x̃t ) ‖22 + 2ηL2 3 1 K K∑ k=1 ‖x̃t − x ( k ) t ‖22 + 3η2Lσ2 2K . ( 41 ) Then for the averaged parameters 1K ∑K k=1 x ( k ) t , ‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 2‖∇f ( 1 K K∑ k=1 x ( k ) t ) −∇f ( x̃t ) ‖22 + 2‖∇f ( x̃t ) ‖22 ≤ 2L2‖ 1 K K∑ k=1 e ( k ) t−sp‖22 + 2‖∇f ( x̃t ) ‖22 ≤ 2L2 K K∑ k=1 ‖e ( k ) t−sp‖22 + 2‖∇f ( x̃t ) ‖22 ≤ 6 [ f ( x̃t ) − Etf ( x̃t+1 ) ] η + 9ηLσ2 K + 4L2 K K∑ k=1 ‖x̃t − x ( k ) t ‖22 + 2L2 K K∑ k=1 ‖e ( k ) t−sp‖22 . ( 42 ) Take total expectation , sum from t = 0 to t = T − 1 , and rearrange , 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 [ f ( x̃0 ) − Ef ( x̃T ) ] ηT + 9ηLσ2 K + 2L2 KT T−1∑ t=0 K∑ k=1 E‖e ( k ) t−sp‖22 + 4L2 KT T−1∑ t=0 K∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 6 [ f ( x̃0 ) − Ef ( x̃T ) ] ηT + 9ηLσ2 K + 24 ( 1− δ ) δ2 p2η2L2 ( σ2 + κ2 +G2 ) + 12η2L2 ( s+ 1 ) pσ2 ( 1 + 12 ( 1− δ ) δ2 ( s+ 1 ) p ) + 36η2L2 ( s+ 1 ) 2p2κ2 ( 1 + 4 ( 1− δ ) δ2 ) + 144 ( 1− δ ) δ2 η2L2 ( s+ 1 ) 2p2G2 ≤ 6 ( f ( x0 ) − f∗ ) ηT + 9ηLσ2 K + 12η2L2 ( s+ 1 ) pσ2 ( 1 + 14 ( 1− δ ) δ2 ( s+ 1 ) p ) + 36η2L2 ( s+ 1 ) 2p2κ2 ( 1 + 5 ( 1− δ ) δ2 ) + 168 ( 1− δ ) δ2 η2L2 ( s+ 1 ) 2p2G2 , ( 43 ) where the second inequality follows Lemma 4 and 5 . E PROOF OF THEOREM 2 Lemma 6 . For OLCO3-TC with vanilla SGD and under Assumptions 2 , 3 , 4 , and 5 , the local error satisfies E‖e ( k ) t ‖22 ≤ 12 ( 1− δ ) δ2 p2η2 ( σ2 + κ2 +G2 ) . ( 44 ) Proof . Same as the proof of Lemma 4 , except that e ( k ) ( St−s−1 ) p is replaced with e ( k ) ( St−1 ) p. Lemma 7 . For OLCO3-TC with vanilla SGD and under Assumptions 2 , 3 , 4 , and 5 , the server error satisfies E‖et‖22 ≤ 96 ( 2− δ ) ( 1− δ ) δ4 p2η2 ( σ2 + κ2 +G2 ) . ( 45 ) Proof . 
Let St = b tpc , E‖ 1 K K∑ k=1 C ( ∆ ( k ) Stp ) ‖ 2 2 ≤ 2E‖ 1 K K∑ k=1 C ( ∆ ( k ) Stp ) − 1 K K∑ k=1 ∆ ( k ) Stp ‖22 + 2E‖ 1 K K∑ k=1 ∆ ( k ) Stp ‖22 ≤ 2 K K∑ k=1 E‖C ( ∆ ( k ) Stp ) −∆ ( k ) Stp ‖22 + 2 K K∑ k=1 E‖∆ ( k ) Stp‖ 2 2 ≤ 2 ( 2− δ ) K K∑ k=1 E‖∆ ( k ) Stp‖ 2 2 . ( 46 ) Following the proof of Lemma 4 we have E‖∆ ( k ) Stp‖ 2 2 ≤ 3 ( 1+ 1ρ ) 1− ( 1−δ ) ( 1+ρ ) p 2η2 ( σ2+κ2+G2 ) . Therefore , E‖et‖22 = E‖eStp‖22 ≤ ( 1− δ ) E‖ 1 K K∑ k=1 C ( ∆ ( k ) Stp ) + e ( St−1 ) p‖ 2 ≤ ( 1− δ ) ( 1 + 1 ρ ) E‖ 1 K K∑ k=1 C ( ∆ ( k ) Stp ) ‖ 2 2 + ( 1− δ ) ( 1 + ρ ) E‖e ( St−1 ) p‖ 2 2 ≤ 2 ( 2− δ ) ( 1− δ ) ( 1 + 1 ρ ) 1 K K∑ k=1 E‖∆ ( k ) Stp‖ 2 2 + ( 1− δ ) ( 1 + ρ ) E‖e ( St−1 ) p‖ 2 2 ≤ 2 ( 2− δ ) ( 1− δ ) ( 1 + 1 ρ ) 3 ( 1 + 1ρ ) 1− ( 1− δ ) ( 1 + ρ ) p2η2 ( σ2 + κ2 +G2 ) + ( 1− δ ) ( 1 + ρ ) E‖e ( St−1 ) p‖ 2 2 ≤ 6 ( 2− δ ) ( 1− δ ) ( 1 + 1ρ ) 2 [ 1− ( 1− δ ) ( 1 + ρ ) ] 2 p2η2 ( σ2 + κ2 +G2 ) . ( 47 ) Let ρ = δ2 ( 1−δ ) such that 1+ 1 ρ = 2−δ δ ≤ 2 δ , then E‖e ( k ) t ‖22 ≤ 96 ( 2−δ ) ( 1−δ ) δ4 p 2η2 ( σ2 +κ2 +G2 ) . Lemma 8 . For OLCO3-TC with vanilla SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ 16L ( s+1 ) p and let h ( δ ) = 1−δ δ2 ( 1 + 4 ( 2−δ ) δ2 ) , we have 1 KT T−1∑ t=0 K−1∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 3η2pσ2 ( s+ 1 + 72h ( δ ) p ) + 9η2p2κ2 ( ( s+ 1 ) 2 + 24h ( δ ) ) + 216h ( δ ) η2p2G2 . ( 48 ) Proof . Let St = b tpc , 1 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 = 1 K K∑ k=1 E‖ 1 K K∑ k′=1 ( − s−1∑ i=0 C ( ∆ ( k ′ ) ( St−i ) p ) − t−1∑ t′=Stp η∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) ) − ( − s−1∑ i=0 C ( ∆ ( k ) ( St−i ) p ) − t−1∑ t′=Stp η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ) − 1 K K∑ k′=1 e ( k ′ ) t − et−sp‖22 , ( 49 ) where s−1∑ i=0 C ( ∆ ( k ) ( St−i ) p ) = s−1∑ i=0 [ ∆ ( k ) ( St−i ) p − e ( k ) ( St−i ) p ] = s−1∑ i=0 [ ( St−i ) p−1∑ t′= ( St−i−1 ) p η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) + e ( k ) ( St−i−1 ) p − e ( k ) ( St−i ) p ] = s−1∑ i=0 ( St−i ) p−1∑ t′= ( St−i−1 ) p ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) + e ( k ) ( St−s ) p − e ( k ) Stp . ( 50 ) Therefore 1 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 = 1 K K∑ k=1 E‖ − 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p η∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) + t−1∑ t′= ( St−s ) p η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) − 1 K K∑ k′=1 ( e ( k ′ ) ( St−s ) p − e ( k′ ) Stp ) + ( e ( k ) ( St−s ) p − e ( k ) Stp ) − 1 K K∑ k′=1 e ( k ′ ) t − et−sp‖22 ≤ 2η 2 K K∑ k=1 E‖ − 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p η∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) + t−1∑ t′= ( St−s ) p η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ‖ 2 2 + 2 K K∑ k=1 E‖ − 1 K K∑ k′=1 e ( k ′ ) ( St−s ) p + e ( k ) ( St−s ) p − e ( k ) Stp − e ( St−s ) p‖ 2 2 , ( 51 ) where the first term can be bounded following Eqs . ( 29,30 ) . The second term satisfies 2 K K∑ k=1 E‖ − 1 K K∑ k′=1 e ( k ′ ) ( St−s ) p + e ( k ) ( St−s ) p − e ( k ) Stp − e ( St−s ) p‖ 2 2 ≤ 6 K K∑ k=1 E‖ − 1 K K∑ k′=1 e ( k ′ ) ( St−s ) p + e ( k ) ( St−s ) p‖ 2 2 + 6 K K∑ k=1 E‖e ( k ) Stp‖ 2 2 + 6 K K∑ k=1 E‖e ( St−s ) p‖ 2 2 ≤ 6 K K∑ k=1 E‖e ( k ) ( St−s ) p‖ 2 2 + 6 K K∑ k=1 E‖e ( k ) Stp‖ 2 2 + 6 K K∑ k=1 E‖e ( St−s ) p‖ 2 2 ≤ 1− δ δ2 ( 1 + 4 ( 2− δ ) δ2 ) · 144p2η2 ( σ2 + κ2 +G2 ) , ( 52 ) where the last inequality follows Lemmas 6 and 7 . Let h ( δ ) = 1−δδ2 ( 1+ 4 ( 2−δ ) δ2 ) . 
Combine the above two inequalities and we have 1 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 2η2 ( s+ 1 ) pσ2 + 2η2 ( s+ 1 ) p t−1∑ t′=t− ( s+1 ) p ( 6L2 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 + 3κ2 ) + 144h ( δ ) p2η2 ( σ2 + κ2 +G2 ) ≤ 2η2pσ2 ( s+ 1 + 72h ( δ ) p ) + 6η2p2κ2 ( ( s+ 1 ) 2 + 24h ( δ ) ) + 144h ( δ ) p2η2G2 + 12η2L2 ( s+ 1 ) p t−1∑ t′=t− ( s+1 ) p 1 K K∑ k=1 E‖x̃t − x ( k ) t ‖22 . ( 53 ) Following Eqs . ( 33,34,35 ) , 1 KT T−1∑ t=0 K−1∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 3η2pσ2 ( s+ 1 + 72h ( δ ) p ) + 9η2p2κ2 ( ( s+ 1 ) 2 + 24h ( δ ) ) + 216h ( δ ) η2p2G2 . ( 54 ) Theorem 2 . For OLCO3-TC with vanilla SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ min { 16L ( s+1 ) p , 1 9L } and let h ( δ ) = 1−δ δ2 ( 1 + 4 ( 2−δ ) δ2 ) , then 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 ( f ( x0 ) − f∗ ) ηT + 9ηLσ2 K + 12η2pσ2 ( s+ 1 + 80h ( δ ) p ) + 12η2p2κ2 ( 3 ( s+ 1 ) 2 + 80h ( δ ) ) + 960η2p2G2h ( δ ) . ( 55 ) Proof . Following the proof of Theorem 1 , we have the same inequality as Eq . ( 41 ) : Etf ( x̃t+1 ) − f ( x̃t ) ≤ − η 3 ‖∇f ( x̃t ) ‖22 + 2ηL2 3 1 K K∑ k=1 ‖x̃t − x ( k ) t ‖22 + 3η2Lσ2 2K . ( 56 ) Then for the averaged parameters 1K ∑K k=1 x ( k ) t , ‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 2‖∇f ( 1 K K∑ k=1 x ( k ) t ) −∇f ( x̃t ) ‖22 + 2‖∇f ( x̃t ) ‖22 ≤ 2L2‖ 1 K K∑ k=1 e ( k ) t + et−sp‖22 + 2‖∇f ( x̃t ) ‖22 ≤ 4L 2 K K∑ k=1 ‖e ( k ) t ‖22 + 4L2 K ‖et−sp‖22 + 2‖∇f ( x̃t ) ‖22 ≤ 6 [ f ( x̃t ) − Etf ( x̃t+1 ) ] η + 9ηLσ2 K + 4L2 K K∑ k=1 ‖x̃t − x ( k ) t ‖22 + 4L2 K K∑ k=1 ‖e ( k ) t ‖22 + 4L2 K ‖et−sp‖22 . ( 57 ) Take total expectation , sum from t = 0 to t = T − 1 , and rearrange , 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 [ f ( x̃0 ) − Ef ( x̃T ) ] ηT + 9ηLσ2 K + 4L2 KT T−1∑ t=0 K∑ k=1 E‖e ( k ) t−sp‖22 + 4L2 KT T−1∑ t=0 E‖et−sp‖22 + 4L2 KT T−1∑ t=0 K∑ k=1 E‖x̃t − x ( k ) t ‖22 ≤ 6 [ f ( x̃0 ) − Ef ( x̃T ) ] ηT + 9ηLσ2 K + 12η2pσ2 ( s+ 1 + 72h ( δ ) p+ 4 ( 1− δ ) δ2 p+ 32 ( 2− δ ) ( 1− δ ) δ4 p ) + 12η2p2κ2 ( 3 ( s+ 1 ) 2 + 72h ( δ ) + 4 ( 1− δ ) δ2 + 32 ( 2− δ ) ( 1− δ ) δ4 ) + 12η2p2G2 ( 72h ( δ ) + 4 ( 1− δ ) δ2 + 32 ( 2− δ ) ( 1− δ ) δ4 ) ≤ 6 ( f ( x̃0 ) − f∗ ) ηT + 9ηLσ2 K + 12η2pσ2 ( s+ 1 + 80h ( δ ) p ) + 12η2p2κ2 ( 3 ( s+ 1 ) 2 + 80h ( δ ) ) + 960η2p2G2h ( δ ) , ( 58 ) where the second inequality follows Lemmas 6 , 7 and 8 . F PROOF OF THEOREM 3 We first define two virtual variables zt and pt satisfying pt = { µ 1−µ ( x̃t − x̃t−1 ) , t ≥ 1 0 , t = 0 ( 59 ) and zt = x̃t + pt . ( 60 ) Then the update rule of zt satisfies zt+1 − zt = ( x̃t+1 − x̃t ) + µ 1− µ ( x̃t+1 − x̃t ) − µ 1− µ ( x̃t − x̃t−1 ) = − η K K∑ k=1 m ( k ) t+1 − µ 1− µ η K K∑ k=1 m ( k ) t+1 + µ 1− µ η K K∑ k=1 m ( k ) t = − η ( 1− µ ) K K∑ k=1 ( m ( k ) t+1 − µm ( k ) t ) = − η ( 1− µ ) K K∑ k=1 ∇fk ( x ( k ) t ; ξ ( k ) t ) , ( 61 ) which exists for OLCO3-OC , OLCO3-VQ and OLCO3-TC . Lemma 9 . For OLCO3 with Momentum SGD , we have E‖m ( k ) t ‖22 ≤ 3 ( σ2 + κ2 +G2 ) ( 1− µ ) 2 . ( 62 ) Proof . E‖m ( k ) t ‖22 = E‖ t−1∑ t′=0 µt−1−t ′ ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ‖ 2 2 = ( t−1∑ t′=0 µt−1−t ′ ) 2E‖ t−1∑ t′=0 µt−1−t ′∑t−1 t′=0 µ t−1−t′ ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ‖ 2 2 ≤ ( t−1∑ t′=0 µt−1−t ′ ) 2E‖∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) ‖ 2 2 ≤ 3 ( σ2 + κ2 +G2 ) ( 1− µ ) 2 . ( 63 ) Lemma 10 . For OLCO3 with Momentum SGD , we have E‖pt‖2 ≤ 3µ2η2 ( σ2 + κ2 +G2 ) ( 1− µ ) 4 . ( 64 ) Proof . E‖pt‖2 = µ2 ( 1− µ ) 2 E‖x̃t − x̃t−1‖2 = µ2η2 ( 1− µ ) 2 E‖ 1 K K∑ k=1 m ( k ) t ‖2 ≤ µ2η2 ( 1− µ ) 2K K∑ k=1 E‖m ( k ) t ‖22 ≤ 3µ 2η2 ( σ2 + κ2 +G2 ) ( 1− µ ) 4 . ( 65 ) Lemma 11 . 
For OLCO3-VQ with Momentum SGD and under Assumptions 2 , 3 , 4 , and 5 , the local error satisfies E‖e ( k ) t ‖22 ≤ 12 ( 1− δ ) ( 1− µ ) 2δ2 p2η2 ( σ2 + κ2 +G2 ) . ( 66 ) Proof . Let St = b tpc , E‖e ( k ) t ‖22 = E‖e ( k ) Stp ‖22 = E‖C ( ∆ ( k ) Stp ) −∆ ( k ) Stp‖ 2 2 ≤ ( 1− δ ) E‖∆ ( k ) Stp ‖22 = ( 1− δ ) E‖ ST p−1∑ t′= ( St−1 ) p ηm ( k ) t′ + e ( k ) ( St−s−1 ) p‖ 2 2 ≤ ( 1− δ ) ( 1 + ρ ) E‖e ( k ) ( St−s−1 ) p‖ 2 2 + ( 1− δ ) ( 1 + 1 ρ ) 3η2p2 ( σ2 + κ2 +G2 ) ( 1− µ ) 2 . ( 67 ) Therefore , E‖e ( k ) t ‖22 ≤ 3 ( 1− δ ) ( 1 + 1 ρ ) p2η2 ( σ2 + κ2 +G2 ) ( 1− µ ) 2 bSts c−1∑ i=0 [ ( 1− δ ) ( 1 + ρ ) ] i ≤ 3 ( 1− δ ) ( 1 + 1ρ ) 1− ( 1− δ ) ( 1 + ρ ) p2η2 ( σ2 + κ2 +G2 ) ( 1− µ ) 2 . ( 68 ) Let ρ = δ2 ( 1−δ ) such that 1 + 1 ρ = 2−δ δ ≤ 2 δ , then E‖e ( k ) t ‖22 ≤ 12 ( 1−δ ) ( 1−µ ) 2δ2 p 2η2 ( σ2 + κ2 +G2 ) . Lemma 12 . For OLCO3-VQ with Momentum SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ 1−µ√ 72L ( s+1 ) p , we have 1 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 ≤ 4η2σ2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + ( s+ 1 ) p+ 12 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) + 12η2κ2 ( 1− µ ) 2 ( 1 ( 1− µ ) 2 + ( s+ 1 ) 2p2 + 4 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) + 12η2G2 ( 1− µ ) 2 ( 1 ( 1− µ ) 2 + 4 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) . ( 69 ) Proof . Let St = b tpc , 1 K K∑ k=1 E‖zt − x ( k ) t ‖22 = 1 K K∑ k=1 E‖ 1 K K∑ k′=1 ( − s−1∑ i=0 ∆ ( k′ ) ( St−i ) p − ( x ( k′ ) Stp − x ( k ′ ) t ) ) − ( − s−1∑ i=0 ∆ ( k ) ( St−i ) p − ( x ( k ) Stp − x ( k ) t ) − 1 K K∑ k′=1 e ( k ′ ) t−sp‖22 = 1 K K∑ k=1 E‖ 1 K K∑ k′=1 ( ηm ( k ′ ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1 + t−1∑ t′= ( St−s ) p η∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) t−1−t′∑ τ=0 µτ ) − ( ηm ( k ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1 + t−1∑ t′= ( St−s ) p η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) t−1−t′∑ τ=0 µτ ) + 1 K K∑ k′=1 e ( k ′ ) t−sp + 1 K K∑ k′=1 s−1∑ i=0 e ( k ′ ) ( St−i−s−1 ) p − s−1∑ i=0 e ( k ) ( St−i−s−1 ) p‖ 2 2 ≤ 3η 2 K K∑ k=1 E‖ 1 K K∑ k′=1 m ( k ′ ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1 −m ( k ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1‖22 + 3η2 K K∑ k=1 E‖ 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) t−1−t′∑ τ=0 µτ − t−1∑ t′= ( St−s ) p ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) t−1−t′∑ τ=0 µτ‖22 + 3 K K∑ k=1 E‖ 1 K K∑ k′=1 e ( k ′ ) t−sp + 1 K K∑ k′=1 s−1∑ i=0 e ( k ′ ) ( St−i−s−1 ) p + s−1∑ i=0 e ( k ) ( St−i−s−1 ) p‖ 2 2 . ( 70 ) The first term 3η2 K K∑ k=1 E‖ 1 K K∑ k′=1 m ( k ′ ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1 −m ( k ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1‖22 ≤ 3η 2 K K∑ k=1 E‖m ( k ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1‖22 ≤ 3η2 ( 1− µ ) 2K K∑ k=1 E‖m ( k ) ( St−s ) p‖ 2 2 ≤ 9η2 ( σ2 + κ2 +G2 ) ( 1− µ ) 4 , ( 71 ) where the last inequality follows Lemma 9 . Following Eq . 
( 29 ) , the second term can be bounded by 3η2 K K∑ k=1 E‖ 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) t−1−t′∑ τ=0 µτ − t−1∑ t′= ( St−s ) p ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) t−1−t′∑ τ=0 µτ‖22 = 3η2 K K∑ k=1 E‖ t−1∑ t′= ( St−s ) p [ 1 K K∑ k′=1 ( ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) −∇fk′ ( x ( k′ ) t′ ) ) − ( ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) −∇fk ( x ( k ) t′ ) ) ] t−1−t′∑ τ=0 µτ‖22 + 3η2 K K∑ k=1 E‖ t−1∑ t′= ( St−s ) p ( 1 K K∑ k′=1 ∇fk′ ( x ( k ′ ) t′ ) −∇fk ( x ( k ) t′ ) ) t−1−t′∑ τ=0 µτ‖22 ≤ 3η 2 ( 1− µ ) 2K K∑ k=1 t−1∑ t′= ( St−s ) p E‖∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) −∇fk ( x ( k ) t′ ) ‖ 2 2 + 3η2 ( 1− µ ) 2K ( t− ( St − s ) p ) K∑ k=1 t−1∑ t′= ( St−s ) p E‖ 1 K K∑ k′=1 ∇fk′ ( x ( k ′ ) t′ ) −∇fk ( x ( k ) t′ ) ‖ 2 2 ≤ 3η 2 ( s+ 1 ) pσ2 ( 1− µ ) 2 + 3η2 ( s+ 1 ) p ( 1− µ ) 2 t−1∑ t′=t− ( s+1 ) p ( 6L2 K K∑ k=1 E‖zt′ − x ( k ) t′ ‖ 2 2 + 3κ 2 ) , ( 72 ) where the last inequality follows Eq . ( 30 ) . Combine the bounds of the first and second term with Lemma 11 and Eq . ( 31 ) , 1 K K∑ k=1 E‖zt − x ( k ) t ‖22 ≤ 9η 2 ( σ2 + κ2 +G2 ) ( 1− µ ) 4 + 3η2 ( s+ 1 ) pσ2 ( 1− µ ) 2 + 3η2 ( s+ 1 ) p ( 1− µ ) 2 t−1∑ t′=t− ( s+1 ) p ( 6L2 K K∑ k=1 E‖zt′ − x ( k ) t′ ‖ 2 2 + 3κ 2 ) + 3 ( s+ 1 ) K K∑ k=1 s∑ i=0 E‖e ( k ) ( St−i−s ) p‖ 2 2 ≤ 9η 2 ( σ2 + κ2 +G2 ) ( 1− µ ) 4 + 3η2 ( s+ 1 ) pσ2 ( 1− µ ) 2 + 9η2 ( s+ 1 ) 2p2κ2 ( 1− µ ) 2 + 36 ( 1− δ ) η2 ( s+ 1 ) 2p2 ( σ2 + κ2 +G2 ) ( 1− µ ) 2δ2 + 18η2L2 ( s+ 1 ) p ( 1− µ ) 2 t−1∑ t′=t− ( s+1 ) p 1 K K∑ k=1 E‖zt′ − x ( k ) t′ ‖ 2 2 . ( 73 ) Sum the above inequality from t = 0 to t = T − 1 and divide it by T , 1 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 ≤ 3η2σ2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + ( s+ 1 ) p+ 12 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) + 9η2κ2 ( 1− µ ) 2 ( 1 ( 1− µ ) 2 + ( s+ 1 ) 2p2 + 4 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) + 9η2G2 ( 1− µ ) 2 ( 1 ( 1− µ ) 2 + 4 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) + 18η2L2 ( s+ 1 ) 2p2 ( 1− µ ) 2 1 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 . ( 74 ) If we choose η ≤ 1−µ√ 72L ( s+1 ) p , 1 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 ≤ 4η2σ2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + ( s+ 1 ) p+ 12 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) + 12η2κ2 ( 1− µ ) 2 ( 1 ( 1− µ ) 2 + ( s+ 1 ) 2p2 + 4 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) + 12η2G2 ( 1− µ ) 2 ( 1 ( 1− µ ) 2 + 4 ( 1− δ ) ( s+ 1 ) 2p2 δ2 ) . ( 75 ) Theorem 3 . For OLCO3-VQ with Momentum SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ min { 1−µ√ 72L ( s+1 ) p , 1−µ9L } and let g ( µ , δ , s , p ) = 15 ( 1−µ ) 2 + 60 ( 1−δ ) ( s+1 ) 2p2 δ2 , then 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 ( 1− µ ) ( f ( x0 ) − f∗ ) ηT + 9Lησ2 ( 1− µ ) K + 4η2L2 ( 1− µ ) 2 [ ( 4 ( s+ 1 ) p+ g ( µ , δ , s , p ) ) σ2 + ( 12 ( s+ 1 ) 2p2 + g ( µ , δ , s , p ) ) κ2 + g ( µ , δ , s , p ) G2 ] . ( 76 ) Proof . Following the proof of Theorem 1 and the update rule Eq . ( 61 ) , we have a similar inequality as Eq . ( 41 ) by choosing η ≤ 1−µ9L : Etf ( zt+1 ) − f ( zt ) ≤ η 1− µ ( −1 2 ‖∇f ( zt ) ‖22 + L2 2K K∑ k=1 ‖zt − x ( k ) t ‖22 ) + Lη2 2 ( 1− µ ) 2 ( 3σ2 K + 3L2 K K∑ k=1 ‖zt − x ( k ) t ‖22 + 3‖∇f ( zt ) ‖22 ) = − η 2 ( 1− µ ) ( 1− 3Lη 1− µ ) ‖∇f ( zt ) ‖22 + L2η 2 ( 1− µ ) K ( 1 + 3Lη 1− µ ) K∑ k=1 ‖zt − x ( k ) t ‖22 + 3Lη2σ2 2 ( 1− µ ) 2K ≤ − η 3 ( 1− µ ) ‖∇f ( zt ) ‖22 + 2ηL2 3 ( 1− µ ) K K∑ k=1 ‖zt − x ( k ) t ‖22 + 3Lη2σ2 2 ( 1− µ ) 2K . 
( 77 ) Then for the averaged parameters 1K ∑K k=1 x ( k ) t , ‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 2‖∇f ( 1 K K∑ k=1 x ( k ) t ) −∇f ( zt ) ‖22 + 2‖∇f ( zt ) ‖22 ≤ 2L2‖ 1 K K∑ k=1 e ( k ) t−sp − pt‖22 + 2‖∇f ( zt ) ‖22 ≤ 4L2‖ 1 K K∑ k=1 e ( k ) t−sp‖22 + 4L2‖pt‖22 + 2‖∇f ( zt ) ‖22 . ( 78 ) Therefore 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 ( 1− µ ) [ f ( z0 ) − f ( zT ) ] ηT + 9Lησ2 ( 1− µ ) K + 4L2 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 + 4L2 T T−1∑ t=0 E‖ 1 K K∑ k=1 e ( k ) t−sp‖22 + 4L2 T T−1∑ t=0 E‖pt‖22 ≤ 6 ( 1− µ ) ( f ( x0 ) − f∗ ) ηT + 9Lησ2 ( 1− µ ) K + 4η2L2 ( 1− µ ) 2 [ ( 4 ( s+ 1 ) p+ g ( µ , δ , s , p ) ) σ2 + ( 12 ( s+ 1 ) 2p2 + g ( µ , δ , s , p ) ) κ2 + g ( µ , δ , s , p ) G2 ] . ( 79 ) where the last inequality follows Lemmas 10 , 11 and 12 and g ( µ , δ , s , p ) = 15 ( 1−µ ) 2 + 60 ( 1−δ ) ( s+1 ) 2p2 δ2 . G PROOF OF THEOREM 4 Lemma 13 . For OLCO3-TC with Momentum SGD and under Assumptions 2 , 3 , 4 , and 5 , the local error satisfies E‖e ( k ) t ‖22 ≤ 12 ( 1− δ ) ( 1− µ ) 2δ2 p2η2 ( σ2 + κ2 +G2 ) . ( 80 ) Proof . Same as the proof of Lemma 11 , except that e ( k ) ( St−s−1 ) p is replaced with e ( k ) ( St−1 ) p. Lemma 14 . For OLCO3-TC with vanilla SGD and under Assumptions 2 , 3 , 4 , and 5 , the server error satisfies E‖et‖22 ≤ 96 ( 2− δ ) ( 1− δ ) ( 1− µ ) 2δ4 p2η2 ( σ2 + κ2 +G2 ) . ( 81 ) Proof . Let St = b tpc . Following the proof of Lemma 11 we have E‖∆ ( k ) Stp ‖22 ≤ 3 ( 1+ 1ρ ) 1− ( 1−δ ) ( 1+ρ ) p2η2 ( σ2+κ2+G2 ) ( 1−µ ) 2 . Therefore , E‖et‖22 ≤ 2 ( 2− δ ) ( 1− δ ) ( 1 + 1 ρ ) 1 K K∑ k=1 E‖∆ ( k ) Stp‖ 2 2 + ( 1− δ ) ( 1 + ρ ) E‖e ( St−1 ) p‖ 2 2 ≤ 2 ( 2− δ ) ( 1− δ ) ( 1 + 1 ρ ) 3 ( 1 + 1ρ ) 1− ( 1− δ ) ( 1 + ρ ) p2η2 ( σ2 + κ2 +G2 ) ( 1− µ ) 2 + ( 1− δ ) ( 1 + ρ ) E‖e ( St−1 ) p‖ 2 2 ≤ 6 ( 2− δ ) ( 1− δ ) ( 1 + 1ρ ) 2 [ 1− ( 1− δ ) ( 1 + ρ ) ] 2 ( 1− µ ) 2 p2η2 ( σ2 + κ2 +G2 ) , ( 82 ) where the first inequality follows the proof of Lemma 7 . Let ρ = δ2 ( 1−δ ) such that 1+ 1 ρ = 2−δ δ ≤ 2 δ , then E‖e ( k ) t ‖22 ≤ E‖e ( k ) t ‖22 ≤ 96 ( 2−δ ) ( 1−δ ) ( 1−µ ) 2δ4 p 2η2 ( σ2 + κ2 +G2 ) . Lemma 15 . For OLCO3-TC with Momentum SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ 1−µ√ 72L ( s+1 ) p , we have 1 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 ≤ 3η2σ2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + ( s+ 1 ) p+ 72h ( δ ) p2 ) + 3η2κ2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + 3 ( s+ 1 ) 2p2 + 72h ( δ ) p2 ) + 3η2G2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + 72h ( δ ) p2 ) , ( 83 ) where h ( δ ) = 1−δδ2 ( 1 + 4 ( 2−δ ) δ2 ) . Proof . Let St = b tpc , 1 K K∑ k=1 E‖zt − x ( k ) t ‖22 = 1 K K∑ k=1 E‖ 1 K K∑ k′=1 ( − s−1∑ i=0 C ( ∆ ( k ′ ) ( St−i ) p ) − ( x ( k′ ) Stp − x ( k ′ ) t ) ) − ( − s−1∑ i=0 C ( ∆ ( k ) ( St−i ) p ) − ( x ( k′ ) Stp − x ( k ) t ) − 1 K K∑ k′=1 e ( k ) t − et−sp‖22 , ( 84 ) where s−1∑ i=0 C ( ∆ ( k ) ( St−i ) p ) + ( x ( k ) Stp − x ( k ) t ) = s−1∑ i=0 [ ∆ ( k ) ( St−i ) p − e ( k ) ( St−i ) p ] + ( x ( k ) Stp − x ( k ) t ) = s−1∑ i=0 [ m ( k ) ( St−i−1 ) p p−1∑ τ=0 µτ+1 + ( St−i ) p−1∑ t′= ( St−i−1 ) p η∇F ( x ( k ) t′ ; ξ ( k ) t′ ) ( St−i ) p−1−t′∑ τ=0 µτ + e ( k ) ( St−i−1 ) p − e ( k ) ( St−i ) p ] + m ( k ) Stp t−1−Stp∑ τ=0 µτ+1 + t−1∑ t′=Stp η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) t−1−t′∑ τ=0 µτ = m ( k ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1 + t−1∑ t′= ( St−s ) p η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) t−1−t′∑ τ=0 µτ + e ( k ) ( St−s ) p − e ( k ) Stp . 
( 85 ) Therefore , 1 K K∑ k=1 E‖zt − x ( k ) t ‖22 = 1 K K∑ k=1 E‖ 1 K K∑ k′=1 ηm ( k ′ ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1 − ηm ( k ) ( St−s ) p t−1− ( St−s ) p∑ τ=0 µτ+1 + 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p η∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) t−1−t′∑ τ=0 µτ − t−1∑ t′= ( St−s ) p η∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) t−1−t′∑ τ=0 µτ + 1 K K∑ k′=1 e ( k ′ ) ( St−s ) p − e ( k ) ( St−s ) p + e ( k ) Stp + et−sp‖22 = 3η2 ( 1− µ ) 2K K∑ k=1 E‖m ( k ) ( St−s ) p‖ 2 2 + 3 K K∑ k=1 E‖ 1 K K∑ k′=1 e ( k ′ ) ( St−s ) p − e ( k ) ( St−s ) p + e ( k ) Stp + et−sp‖22 + 3η2 K K∑ k=1 E‖ 1 K K∑ k′=1 t−1∑ t′= ( St−s ) p ∇Fk′ ( x ( k ′ ) t′ ; ξ ( k′ ) t′ ) t−1−t′∑ τ=0 µτ − t−1∑ t′= ( St−s ) p ∇Fk ( x ( k ) t′ ; ξ ( k ) t′ ) t−1−t′∑ τ=0 µτ‖22 , ( 86 ) where the first term is bounded following Lemma 9 and the third term is bounded following Eq . ( 72 ) . The second term 3 K K∑ k=1 E‖ 1 K K∑ k′=1 e ( k ′ ) ( St−s ) p − e ( k ) ( St−s ) p + e ( k ) Stp + et−sp‖22 ≤ 9 K K∑ k=1 E‖e ( k ) ( St−s ) p‖ 2 2 + 9 K K∑ k=1 E‖e ( k ) Stp‖ 2 2 + 9 K K∑ k=1 E‖e ( St−s ) p‖ 2 2 ≤ 1− δ ( 1− µ ) 2δ2 ( 1 + 4 ( 2− δ ) δ2 ) · 216p2η2 ( σ2 + κ2 +G2 ) . ( 87 ) Combine these bounds , 1 K K∑ k=1 E‖zt − x ( k ) t ‖22 ≤ 9η2 ( σ2 + κ2 +G2 ) ( 1− µ ) 4 + 1− δ ( 1− µ ) 2δ2 ( 1 + 4 ( 2− δ ) δ2 ) · 216p2η2 ( σ2 + κ2 +G2 ) + 3η2 ( s+ 1 ) pσ2 ( 1− µ ) 2 + 3η2 ( s+ 1 ) p ( 1− µ ) 2 t−1∑ t′=t− ( s+1 ) p ( 6L2 K K∑ k=1 E‖zt′ − x ( k ) t′ ‖ 2 2 + 3κ 2 ) = 18η2L2 ( s+ 1 ) p ( 1− µ ) 2 t−1∑ t′=t− ( s+1 ) p 1 K K∑ k=1 E‖zt′ − x ( k ) t′ ‖ 2 2 ( 88 ) Sum the above inequality from t = 0 to t = T − 1 , divide it by T , and choose η ≤ 1−µ√ 72L ( s+1 ) p , 1 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 ≤ 3η2σ2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + ( s+ 1 ) p+ 72h ( δ ) p2 ) + 3η2κ2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + 3 ( s+ 1 ) 2p2 + 72h ( δ ) p2 ) + 3η2G2 ( 1− µ ) 2 ( 3 ( 1− µ ) 2 + 72h ( δ ) p2 ) . ( 89 ) Theorem 4 . For OLCO3-TC with Momentum SGD and under Assumptions 1 , 2 , 3 , 4 , and 5 , if the learning rate η ≤ min { 1−µ√ 72L ( s+1 ) p , 1−µ9L } and let h ( δ ) = 1−δ δ2 ( 1 + 4 ( 2−δ ) δ2 ) , then 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ ≤ 6 ( 1− µ ) ( f ( x0 ) − f∗ ) ηT + 9Lησ2 ( 1− µ ) K + 6η2L2 ( 1− µ ) 2 [ σ2 ( 9 ( 1− µ ) 2 + 2 ( s+ 1 ) p+ 168h ( δ ) p2 ) + κ2 ( 9 ( 1− µ ) 2 + 6 ( s+ 1 ) 2p2 + 168h ( δ ) p2 ) +G2 ( 9 ( 1− µ ) 2 + 168h ( δ ) p2 ) ] . ( 90 ) Proof . Following the proof of Theorem 3 , Etf ( zt+1 ) −f ( zt ) ≤ − η 3 ( 1− µ ) ‖∇f ( zt ) ‖22 + 2ηL2 3 ( 1− µ ) K K∑ k=1 ‖zt−x ( k ) t ‖22 + 3Lη2σ2 2 ( 1− µ ) 2K , ( 91 ) ‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 2‖∇f ( 1 K K∑ k=1 x ( k ) t ) −∇f ( zt ) ‖22 + 2‖∇f ( zt ) ‖22 ≤ 2L2‖ 1 K K∑ k=1 e ( k ) t + et−sp − pt‖22 + 2‖∇f ( zt ) ‖22 ≤ 6L2‖ 1 K K∑ k=1 e ( k ) t−sp‖22 + 6L2‖et−sp‖22 + 6L2‖pt‖22 + 2‖∇f ( zt ) ‖22 . ( 92 ) Therefore 1 T T−1∑ t=0 E‖∇f ( 1 K K∑ k=1 x ( k ) t ) ‖22 ≤ 6 ( 1− µ ) [ f ( z0 ) − f ( zT ) ] ηT + 9Lησ2 ( 1− µ ) K + 4L2 KT T−1∑ t=0 K∑ k=1 E‖zt − x ( k ) t ‖22 + 6L2 T T−1∑ t=0 E‖ 1 K K∑ k=1 e ( k ) t−sp‖22 + 6L2 T T−1∑ t=0 E‖et−sp‖22 + 6L2 T T−1∑ t=0 E‖pt‖22 ≤ 6 ( 1− µ ) ( f ( x0 ) − f∗ ) ηT + 9Lησ2 ( 1− µ ) K + 6η2L2 ( 1− µ ) 2 [ σ2 ( 9 ( 1− µ ) 2 + 2 ( s+ 1 ) p+ 168h ( δ ) p2 ) + κ2 ( 9 ( 1− µ ) 2 + 6 ( s+ 1 ) 2p2 + 168h ( δ ) p2 ) +G2 ( 9 ( 1− µ ) 2 + 168h ( δ ) p2 ) ] , ( 93 ) where the last inequality follows Lemmas 10 , 13 , 14 and 15 .
Review: This paper studies distributed training of neural networks. The major obstacles in distributed training are communication costs and communication delays. In the literature there exist different methods that attempt to overcome these two issues but, as the authors claim, none of the existing algorithms deals with both aspects at the same time. The authors propose a novel distributed method, OLCO_3, designed to address both communication costs and communication delays. In particular, the authors propose two variants of the method: OLCO_3 TC, which combines pipelining with compensation, and OLCO_3 VQ, which combines pipelining with a communication-dependent compressor. Both variants of OLCO_3 are analyzed from a theoretical perspective: under some assumptions, the authors establish the convergence of the proposed schemes. Finally, the method in its two variants is benchmarked and compared with state-of-the-art distributed algorithms.
Tradeoffs in Data Augmentation: An Empirical Study
1 INTRODUCTION.

Models that achieve state-of-the-art accuracy in image classification often use heavy data augmentation strategies. The best techniques use various transforms applied sequentially and stochastically. Though the effectiveness of this is well-established, the mechanism through which these transformations work is not well-understood. Since early uses of data augmentation, it has been assumed that augmentation works because it simulates realistic samples from the true data distribution: "[augmentation strategies are] reasonable since the transformed reference data is now extremely close to the original data. In this way, the amount of training data is effectively increased" (Bellegarda et al., 1992). Because of this, augmentations have often been designed with the heuristic of incurring minimal distribution shift from the training data. This rationale does not explain why unrealistic distortions such as cutout (DeVries & Taylor, 2017), SpecAugment (Park et al., 2019), and mixup (Zhang et al., 2017) significantly improve generalization performance. Furthermore, methods do not always transfer across datasets: Cutout, for example, is useful on CIFAR-10 but not on ImageNet (Lopes et al., 2019). Additionally, many augmentation policies heavily modify images by stochastically applying multiple transforms to a single image. Based on this observation, some have proposed that augmentation strategies are effective because they increase the diversity of images seen by the model. In this complex landscape, claims about diversity and distributional similarity remain unverified heuristics. Without a more precise science of data augmentation, finding state-of-the-art strategies requires brute force that can cost thousands of GPU hours (Cubuk et al., 2018; Zhang et al., 2019). This highlights a need to specify and measure the relationship between the original training data and the augmented dataset, as relevant to a given model's performance. In this paper, we quantify these heuristics. Seeking to understand the mechanisms of augmentation, we focus on single transforms as a foundation. We present an empirical study of 204 different augmentations on CIFAR-10 and 225 on ImageNet, varying both broad transform families and finer transform parameters. To better understand current state-of-the-art augmentation policies, we additionally measure 58 composite augmentations on ImageNet and three state-of-the-art augmentations on CIFAR-10. Our contributions are:
1. We introduce Affinity and Diversity: interpretable, easy-to-compute metrics for parametrizing augmentation performance. Affinity quantifies how much an augmentation shifts the training data distribution. Diversity quantifies the complexity of the augmented data with respect to the model and learning procedure.
2. We find that performance is dependent on both metrics. In the Affinity-Diversity plane, the best augmentation strategies jointly optimize the two (see Fig. 1).
3. We connect augmentation to other familiar forms of regularization, such as ℓ2 regularization and learning rate scheduling, observing common features of the dynamics: performance can be improved and training accelerated by turning off regularization at an appropriate time.
4. We find that performance is only improved when a transform increases the total number of unique training examples.
The utility of these new training examples is informed by the augmentation ’ s Affinity and Diversity . 2 RELATED WORK . Since early uses of data augmentation in training neural networks , there has been an assumption that effective transforms for data augmentation are those that produce images from an “ overlapping but different '' distribution ( Bengio et al. , 2011 ; Bellegarda et al. , 1992 ) . Indeed , elastic distortions as well as distortions in the scale , position , and orientation of training images have been used on MNIST ( Ciregan et al. , 2012 ; Sato et al. , 2015 ; Simard et al. , 2003 ; Wan et al. , 2013 ) , while horizontal flips , random crops , and random distortions to color channels have been used on CIFAR-10 and ImageNet ( Krizhevsky et al. , 2012 ; Zagoruyko & Komodakis , 2016 ; Zoph et al. , 2017 ) . For object detection and image segmentation , one can also use object-centric cropping ( Liu et al. , 2016 ) or cut-and-paste new objects ( Dwibedi et al. , 2017 ; Fang et al. , 2019 ; Ngiam et al. , 2019 ) . In contrast , researchers have also successfully used less domain-specific transformations , such as Gaussian noise ( Ford et al. , 2019 ; Lopes et al. , 2019 ) , input dropout ( Srivastava et al. , 2014 ) , erasing random patches of the training samples ( DeVries & Taylor , 2017 ; Park et al. , 2019 ; Zhong et al. , 2017 ) , and adversarial noise ( Szegedy et al. , 2013 ) . Mixup ( Zhang et al. , 2017 ) and Sample Pairing ( Inoue , 2018 ) are two augmentation methods that use convex combinations of training samples . It is also possible to improve generalization by combining individual transformations . For example , reinforcement learning has been used to choose more optimal combinations of data augmentation transformations ( Ratner et al. , 2017 ; Cubuk et al. , 2018 ) . Follow-up research has lowered the computation cost of such optimization , by using population based training ( Ho et al. , 2019 ) , density matching ( Lim et al. , 2019 ) , adversarial policy-design that evolves throughout training ( Zhang et al. , 2019 ) , or a reduced search space ( Cubuk et al. , 2019 ) . Despite producing unrealistic outputs , such combinations of augmentations can be highly effective in different tasks ( Berthelot et al. , 2019 ; Tan & Le , 2019 ; Tan et al. , 2019 ; Xie et al. , 2019a ; b ) . Across these different examples , the role of distribution shift in training remains unclear . Lim et al . ( 2019 ) ; Hataya et al . ( 2020 ) have found augmentation policies by minimizing the distance between the distributions of augmented data and clean data . Recent work found that after training with augmented data , fine-tuning on clean training data can be beneficial ( He et al. , 2019 ) , while Touvron et al . ( 2019 ) found it beneficial to fine-tune with a test-set resolution that aligns with the training-set resolution . The true input-space distribution from which a training dataset is drawn remains elusive . To better understand the effect of distribution shift on performance , many works attempt to estimate it . Often these techniques require training secondary models , such as those based on variational methods ( Goodfellow et al. , 2014 ; Kingma & Welling , 2014 ; Nowozin et al. , 2016 ; Blei et al. , 2017 ) . Others have augmented the training set by modelling the data distribution directly ( Tran et al. , 2017 ) . Recent work has suggested that even unrealistic distribution modelling can be beneficial ( Dai et al. , 2017 ) . 
These methods try to specify the distribution separately from the model they are trying to optimize . As a result , they are insensitive to any interaction between the model and data distribution . Instead , we are interested in a measure of how much the data shifts along directions that are most relevant to the model ' s performance . 3 METHODS . We performed extensive experiments with various augmentations on CIFAR-10 and ImageNet . Experiments on CIFAR-10 used the WRN-28-2 model ( Zagoruyko & Komodakis , 2016 ) , trained for 78k steps with cosine learning rate decay . Results are the mean over 10 initializations and reported errors ( often too small to show on figures ) are the standard error on the mean . Details on the error analysis are in Sec . C. Experiments on ImageNet used the ResNet-50 model ( He et al. , 2016 ) , trained for 112.6k steps with a weight decay rate of 1e-4 , and a learning rate of 0.2 , which is decayed by a factor of 10 at epochs 30 , 60 , and 80 . Images were pre-processed by dividing each pixel value by 255 and normalizing by the dataset statistics . Random crop was also applied on all ImageNet models . These pre-processed data without further augmentation are “ clean data ” and a model trained on them is the “ clean baseline ” . We followed the same implementation details as Cubuk et al . ( 2018 ) ( implementation available at bit.ly/2v2FojN ) , including for most augmentation operations . Further implementation details are in Sec . A . Unless specified otherwise , data augmentation was applied following standard practice : each time an image is drawn , the given augmentation is applied with a given probability . We call this mode dynamic augmentation . Due to whatever stochasticity is in the transform itself ( such as randomly selecting the location for a crop ) or in the policy ( such as applying a flip only with 50 % probability ) , the augmented image could be different each time . Thus , most of the tested augmentations increase the number of possible distinct images that can be shown during training . We also performed select experiments by training with static augmentation . In static augmentation , the augmentation policy ( one or more transforms ) is applied once to the entire clean training set . Static augmentation does not change the number of unique images in the dataset . 3.1 AFFINITY : A SIMPLE METRIC FOR DISTRIBUTION SHIFT . Thus far , heuristics of distribution shift have motivated the design of augmentation policies . Inspired by this focus , we introduce a simple metric to quantify how augmentation shifts data with respect to the decision boundary of the clean baseline model . We start by noting that a trained model is often sensitive to the distribution of the training data . That is , model performance varies greatly between new samples from the true data distribution and samples from a shifted distribution . Importantly , the model ' s sensitivity to distribution shift is not purely a function of the input data distribution , since training dynamics and the model ' s implicit biases affect performance . Because the goal of augmentation is improving model performance , measuring shifts with respect to the distribution captured by the model is more meaningful than measuring shifts in the distribution of the input data alone . We thus define Affinity to be the ratio between the validation accuracy of a model trained on clean data and tested on an augmented validation set , and the accuracy of the same model tested on clean data .
Here , the augmentation is applied to the validation dataset in one pass , as a static augmentation . More formally we define : Definition 1 . Let Dtrain and Dval be training and validation datasets drawn IID from the same clean data distribution , and let D′val be derived from Dval by applying a stochastic augmentation strategy , a , once to each image in Dval , D′val = { ( a ( x ) , y ) : ∀ ( x , y ) ∈ Dval } . Further let m be a model trained on Dtrain and A ( m , D ) denote the model ' s accuracy when evaluated on dataset D. The Affinity , T [ a ; m ; Dval ] , is given by T [ a ; m ; Dval ] = A ( m , D′val ) / A ( m , Dval ) . ( 1 ) With this definition , an Affinity of one represents no shift and a smaller number suggests that the augmented data is out-of-distribution for the model . In Fig . 2 we illustrate Affinity with a two-class classification task on a mixture of two Gaussians . Augmentation in this example comprises a shift of the means of the Gaussians of the validation data compared to those used for training . Under this shift , we calculate both Affinity and KL divergence of the shifted data with respect to the original data . Affinity changes only when the shift in the data is with respect to the model ' s decision boundary , whereas the KL divergence changes even when data is shifted in a direction that is irrelevant to the classification task . In this way , Affinity captures what is relevant to a model : shifts that impact predictions . This same metric has been used as a measure of a model ' s robustness to image corruptions that do not change images ' semantic content ( Azulay & Weiss , 2018 ; Dodge & Karam , 2017 ; Ford et al. , 2019 ; Hendrycks & Dietterich , 2019 ; Rosenfeld et al. , 2018 ; Yin et al. , 2019 ) . Here we turn this around and use it to quantify the shift of augmented data compared to clean data . Affinity has the following advantages as a metric : 1 . It is easy to measure . It requires only clean training of the model in question . 2 . It is independent of any confounding interaction between the data augmentation and the training process , since augmentation is only used on the validation set and applied statically . 3 . It is a measure of distance sensitive to properties of both the data distribution and the model . We gain confidence in this metric by comparing it to other potential model-dependent measures of distribution shift . We consider the mean log likelihood of augmented test images ( Grathwohl et al. , 2019 ) , and the Watanabe–Akaike information criterion ( WAIC ) ( Watanabe , 2010 ) . These other metrics have high correlation with Affinity . Details can be found in Sec . F .
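To make Definition 1 concrete, the following is a minimal sketch of how Affinity could be computed for an already-trained classifier. The helper names (predict_fn, augment_fn, and the toy model at the bottom) are placeholders introduced for illustration and are not part of the paper's implementation; any framework-specific evaluation loop could stand in for them.

```python
import numpy as np

def accuracy(predict_fn, images, labels):
    """Fraction of examples whose predicted class matches the label."""
    preds = predict_fn(images)  # expected shape: (n,) of class ids
    return float(np.mean(preds == labels))

def affinity(predict_fn, val_images, val_labels, augment_fn, seed=0):
    """Affinity = A(m, D'_val) / A(m, D_val) from Definition 1.

    The augmentation is applied exactly once to each validation image
    (a static augmentation), and the same clean-trained model is used
    for both evaluations.
    """
    rng = np.random.default_rng(seed)
    augmented = np.stack([augment_fn(x, rng) for x in val_images])
    clean_acc = accuracy(predict_fn, val_images, val_labels)
    aug_acc = accuracy(predict_fn, augmented, val_labels)
    return aug_acc / clean_acc

# Toy example with a dummy "model" and a random horizontal flip (illustrative only).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    images = rng.normal(size=(100, 32, 32, 3))
    labels = rng.integers(0, 10, size=100)
    predict_fn = lambda xs: (xs.mean(axis=(1, 2, 3)) > 0).astype(int)  # stand-in classifier
    flip = lambda x, r: x[:, ::-1, :] if r.random() < 0.5 else x       # random horizontal flip
    print("Affinity:", affinity(predict_fn, images, labels, flip))
```

In practice the same trained clean-baseline model would be evaluated twice, once on the clean validation set and once on its statically augmented copy, so the metric requires no additional training.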
This paper empirically investigates two crucial factors of useful data augmentation strategies: affinity and diversity. Through extensive experiments on existing image augmentation methods, it demonstrates that a good augmentation strategy should yield both high affinity and high diversity. Specifically, affinity is measured as the ratio of a clean model's accuracy on augmented validation data to its accuracy on clean validation data, and diversity is measured by the final training loss when the augmentation is used in training.
SP:2112a176f4ade9d808d78e1795701da15a5c146a
Tradeoffs in Data Augmentation: An Empirical Study
This paper studies data augmentation, which obtains new training examples by modifying existing ones. Data augmentation is popular in machine learning and artificial intelligence since it increases the number of training examples. However, its effect on model performance is hard to predict in practice: an augmentation operator (e.g., image rotation) can be either helpful or harmful. This paper introduces two novel metrics, named affinity and diversity, to quantify the effect of any given augmentation operator. The authors find that an operator with a high affinity score and a high diversity score leads to the best performance improvement.
SP:2112a176f4ade9d808d78e1795701da15a5c146a
Graph Structural Aggregation for Explainable Learning
1 INTRODUCTION . Convolution neural networks ( LeCun et al. , 1995 ) have proven to be very efficient at learning meaningful patterns for many artificial intelligence tasks . They convey the ability to learn hierarchical information in data with Euclidean grid-like structures such as images and text . Convolutional Neural Networks ( CNNs ) have rapidly become state-of-the-art methods in the fields of computer vision ( Russakovsky et al. , 2015 ) and natural language processing ( Devlin et al. , 2018 ) . However , in many scientific fields , studied data have an underlying graph or manifold structure such as communication networks ( whether social or technical ) or knowledge graphs . Recently there have been many attempts to extend convolutions to those non-Euclidean structured data ( Hammond et al. , 2011 ; Kipf & Welling , 2016 ; Defferrard et al. , 2016 ) . In these new approaches , the authors propose to compute node embeddings in a semi-supervised fashion in order to perform node classification . Those node embeddings can also be used for link prediction by computing distances between each node of the graph ( Hammond et al. , 2011 ; Kipf & Welling , 2016 ) . Graph classification is studied in many fields , whether for predicting the chemical activity of a molecule or for clustering authors from different scientific domains based on their ego-networks ( Freeman , 1982 ) . However , when trying to generalize neural network approaches to the task of graph classification , there are several aspects that differ widely from image classification . When performing graph classification , we may have to deal with graphs of different sizes . To compare them we first need to obtain a graph representation that is independent of the size of the graph . Moreover , for a fixed graph , nodes are not ordered . The graph representation obtained with neural network algorithms must be independent of the order of nodes and thus be invariant under node permutation . Aggregation functions are functions that operate on node embeddings to produce a graph representation . When tackling a graph classification task , the aggregation function used is usually just a mean or a max of node embeddings , as illustrated in figure 1b . But when working with graphs of large sizes , the mean over all nodes does not allow us to extract significant patterns with a good discriminating power . In order to identify patterns in graphs , some methods try to identify structural roles for nodes . Donnat et al . ( 2018 ) define structural role discovery as the process of identifying nodes which have topologically similar network neighborhoods while residing in potentially distant areas of the network , as illustrated in figure 1a . Those structural roles represent local patterns in graphs . Identifying them and comparing them among graphs could improve the discriminative power of graph embeddings obtained with graph neural networks . In this work , we build an aggregation process based on the identification of structural roles , called StructAgg . The main contributions of this work are summarized below : 1 . Learned aggregation process . A differentiable aggregation process that learns how to aggregate node embeddings in order to produce a graph representation for a graph classification task . 2 . Identification of structural roles . Based on the definition of structural roles from Donnat et al . ( 2018 ) , our algorithm learns structural roles during the aggregation process .
This is innovative because most algorithms that learn structural roles in graphs are not based on graph neural networks . 3 . Explainability of selected features for a graph classification task . The identification of structural roles enables us to understand and explain what features are selected during training . Graph neural networks often lack explainability and there are only a few works that tackle this issue . One contribution of this work is the explainability of the approach . We show how our end-to-end model provides interpretability to a graph classification task based on graph neural networks . 4 . Experimental results . Our method achieves state-of-the-art results on benchmark datasets . We compare it with kernel methods and state-of-the-art message passing algorithms that use pooling layers as aggregation processes . 2 RELATED WORK . The identification of nodes that have similar structural roles is usually done by an explicit featurization of nodes or by algorithms that rely on random walks to explore nodes ' neighborhoods . A well known algorithm in this line of research is RolX ( Gilpin et al. , 2013 ; Henderson et al. , 2012 ) , a matrix factorization that focuses on computing a soft assignment matrix based on a listing of topological properties set as inputs for nodes . Similarly , struct2vec builds a multilayered graph based on topological metrics on nodes and then generates random walks to capture structural information . In another line of research , many works rely on graphlets to capture nodes ' topological properties and identify nodes with similar neighborhoods ( Rossi et al. , 2017 ; Lee et al. , 2018 ; Ahmed et al. , 2018 ) . In their work , Donnat et al . ( 2018 ) compute node embeddings from wavelets in graphs to characterize nodes ' neighborhoods at different scales . In this work , we introduce an aggregation process based on the identification of structural roles in graphs that is computed in an end-to-end trainable fashion . We build a hierarchical representation of nodes by using neural network models in graphs to propagate nodes ' features at different hops . Recently there has been a rich line of research , inspired by deep models in images , that aims at redefining neural networks in graphs and in particular convolutional neural networks ( Defferrard et al. , 2016 ; Kipf & Welling , 2016 ; Veličković et al. , 2017 ; Hamilton et al. , 2017 ; Bronstein et al. , 2017 ; Bruna et al. , 2013 ; Scarselli et al. , 2009 ) . Those convolutions can be viewed as message passing algorithms that are composed of two phases . A message passing phase that runs for T steps is first defined in terms of message functions and vertex update functions . A readout phase then computes a feature vector for the whole graph using some readout function . In this work we will see how to define a readout phase that is learnable and that is representative of meaningful patterns in graphs . 3 PROPOSED METHOD . In this section we introduce the structural aggregation layer ( StructAgg ) . We show how we identify structural classes for nodes in graphs ; how those classes are used in order to develop an aggregation layer ; and how this layer allows us to compare significant structural patterns in graphs for a supervised classification task . 3.1 NOTATIONS . Let G = ( V , E , X ) be a graph , where V is the set of nodes of G , E the set of edges , and X ∈ R^{n×f} the feature matrix of G ' s nodes , where f is the dimensionality of node features .
Let n = |V| be the number of nodes of G and e = |E| the number of edges of G. Let A be the adjacency matrix of graph G and D be its degree diagonal matrix . Let v_i and v_j be the i-th and j-th nodes of G ; we have : A_{ij} = 1 if ( v_i , v_j ) ∈ E and 0 otherwise , and D_{ii} = ∑_j A_{ij} . Let S = { G_1 , ... , G_d } be a set of d graphs and { y_1 , ... , y_d } be the labels associated with these graphs . 3.2 HIERARCHICAL STRUCTURAL EMBEDDING . Graph neural networks . We build our work upon graph neural networks ( GNNs ) . Several architectures of graph neural networks have been proposed by Defferrard et al . ( 2016 ) ; Kipf & Welling ( 2016 ) ; Veličković et al . ( 2017 ) or Bruna & Li ( 2017 ) . Those graph neural network models are all based on propagation mechanisms of node features that follow a general neural message passing architecture ( Ying et al. , 2018 ; Gilmer et al. , 2017 ) : X^{(l+1)} = MP ( A , X^{(l)} ; W^{(l)} ) ( 1 ) where X^{(l)} ∈ R^{n×f_l} are the node embeddings computed after l steps of the GNN , X^{(0)} = X , and MP is the message propagation function , which depends on the adjacency matrix . W^{(l)} is a trainable weight matrix that depends on layer l. Let f_l be the dimension of the node vectors after l steps of the GNN , with f_0 = f . The aggregation process that we introduce next can be used with any neural message passing algorithm that follows the propagation rule ( 1 ) . In all that follows , we denote this algorithm by MP . For the experiments , we consider the Graph Convolutional Network ( GCN ) defined by Kipf & Welling ( 2016 ) . This model is based on an approximation of convolutions on graphs defined by Defferrard et al . ( 2016 ) that uses spectral decompositions of the Laplacian . The popularity of this model comes from its computational efficiency and the state-of-the-art results obtained on benchmark datasets . This layer propagates node features to 1-hop neighbors . Its propagation rule is the following : X^{(l+1)} = MP ( A , X^{(l)} ; W^{(l)} ) = GCN ( A , X^{(l)} ) = ρ ( D̃^{-1/2} Ã D̃^{-1/2} X^{(l)} W^{(l)} ) ( 2 ) where ρ is a non-linear function ( a ReLU in our case ) , Ã = A + I_n is the adjacency matrix with added self-loops and D̃_{ii} = ∑_j Ã_{ij} . This propagation process allows us to obtain a node representation that encodes its l-hop neighborhood after l layers of GCN . We build a hierarchical representation for nodes by concatenating their embeddings after each step of GCN . The final representation X_i^{struct} of a node i is given by : X_i^{struct} = ⊕_{l=1}^{L} X_i^{(l)} ( 3 ) where ⊕ denotes concatenation and L is the total number of GCN layers applied . Identifying structural classes . Embedding nodes with MP creates embeddings that are close for nodes that are structurally equivalent . Some use a handcrafted node embedding based on propagation processes with wavelets in graphs to identify structural clusters based on a hierarchical representation of nodes ( Donnat et al. , 2018 ) . By analogy , we learn hierarchical node embeddings and an aggregation layer that identifies structural roles for nodes in graphs . Those structural roles are consistent across graphs of a dataset , which allows us to bring interpretability to our graph classification task . Node features X^{struct} contain the information of a node ' s L-hop neighborhood , decomposed into L vectors , each representing its l-hop neighborhood for l varying between 1 and L. We will show next that nodes that have the same L-hop neighborhood are embedded into the same vector .
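As an illustration of the propagation rule (2) and the hierarchical concatenation (3), here is a small numpy sketch. It is not the authors' implementation: the adjacency matrix, node features, and weight matrices below are random placeholders, and the GCN layer is written out directly from the formula.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN step: rho( D~^{-1/2} A~ D~^{-1/2} X W ) with rho = ReLU."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)                       # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(0.0, D_inv_sqrt @ A_tilde @ D_inv_sqrt @ X @ W)

def hierarchical_embedding(A, X, weights):
    """X_i^struct: concatenate the node embeddings produced after each layer."""
    layers = []
    H = X
    for W in weights:
        H = gcn_layer(A, H, W)
        layers.append(H)
    return np.concatenate(layers, axis=1)         # shape (n, sum_l f_l)

# Toy example: 4-node path graph, 3 input features, two GCN layers.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
weights = [rng.normal(size=(3, 8)), rng.normal(size=(8, 8))]
X_struct = hierarchical_embedding(A, X, weights)
print(X_struct.shape)                              # (4, 16)
```

The concatenation keeps one block per hop, so the resulting vector separates what the node looks like at 1 hop, 2 hops, and so on, rather than mixing all hops into a single embedding.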
To identify structural roles , we thus project each node embedding onto a matrix p ∈ R^{f_struct×c} , where c is the number of structural classes and f_struct = ∑_{l=1}^{L} f_l is the dimensionality of X_i^{struct} for each node i . We obtain a soft assignment matrix : C = softmax ( X^{struct} p ) ∈ R^{n×c} ( 4 ) where the softmax function is applied in a row-wise fashion . This way , C_{ij} represents the probability that node i belongs to cluster j . Definition 1 . Let i and j be two nodes of a graph G = ( V , E , X ) . Let N_l ( i ) = { i′ ∈ V | d ( i , i′ ) ≤ l } be the l-hop neighborhood of i , which means all the nodes that are at distance lower than or equal to l from i , d being the shortest-path distance . Let X_{N_l ( i )} be the feature matrix of the l-hop neighborhood of i . Let G_{i , l} be the subgraph of G composed of the l-hop neighborhood of i . We say that i and j are l-structurally equivalent if there exists an isomorphism ψ from N_l ( j ) to N_l ( i ) such that the following two conditions are verified : • G_{i , l} = ψ ( G_{j , l} ) • ∀ j′ ∈ N_l ( j ) , X_{ψ ( j′ )} = X_{j′} Theorem 1 . Two nodes i and j that are L-structurally equivalent have the same final embedding , X_i^{struct} = X_j^{struct} .
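Below is a short numpy sketch of the soft assignment in (4), together with one plausible way the assignment matrix could be used to pool node embeddings into a permutation-invariant graph vector. The class-wise pooling step is an assumption made for illustration; the exact readout used by StructAgg is not specified in the excerpt above and may differ.

```python
import numpy as np

def softmax_rows(Z):
    Z = Z - Z.max(axis=1, keepdims=True)          # numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def structural_assignment(X_struct, p):
    """C = softmax(X_struct p), row-wise: C[i, j] = P(node i in class j)."""
    return softmax_rows(X_struct @ p)

def aggregate_by_class(X_struct, C):
    """Hypothetical readout: a soft average of node embeddings per structural
    class, flattened into a fixed-size, permutation-invariant graph vector."""
    weights = C / (C.sum(axis=0, keepdims=True) + 1e-9)   # normalize per class
    per_class = weights.T @ X_struct                       # (c, f_struct)
    return per_class.reshape(-1)

rng = np.random.default_rng(0)
X_struct = rng.normal(size=(4, 16))      # e.g. the output of the previous sketch
p = rng.normal(size=(16, 3))             # projection onto 3 structural classes
C = structural_assignment(X_struct, p)
graph_vec = aggregate_by_class(X_struct, C)
print(C.shape, graph_vec.shape)          # (4, 3) (48,)
```

Because the pooling sums over nodes weighted by their class memberships, reordering the nodes of the graph leaves the resulting vector unchanged, which is the permutation invariance required for graph classification.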
This paper proposes StructAgg, an aggregation algorithm for convolutional graph neural networks that learns structural roles for nodes in the graph embedding. In this algorithm, a structural representation of each node is constructed by concatenating its representations from the last p layers of the graph neural network, which together carry information from the p-hop neighborhood. The paper is of good quality and its presentation is clear. The underlying idea is original. However, learning embeddings by concatenating multiple layers of neural representations is not completely new, and the contribution of this paper to the community is not ground-breaking either. From the experimental results, it is hard to stress its significance, as in most cases this method does not beat the state-of-the-art algorithms.
SP:9281a2478824e6b7b300fa7f11d61a5e5d6c1679
Graph Structural Aggregation for Explainable Learning
This paper focuses on deriving explainable features for use in graph classification. To that end, the authors propose StructAgg, essentially an aggregation process based on the structural roles of nodes, which is then used in an end-to-end model. Experiments demonstrate the effectiveness of the proposed approach: it provides comparable performance while offering some explainability. This is an important unsolved problem, and this work provides one approach to obtaining more explainable and intuitive features/embeddings for graph classification.
SP:9281a2478824e6b7b300fa7f11d61a5e5d6c1679
Watching the World Go By: Representation Learning from Unlabeled Videos
1 INTRODUCTION . The world seen through our eyes is constantly changing . As we move through the world , we see much more than a single static image : objects rotate , revealing occluded regions , and deform ; the surroundings change , and we ourselves move . Our internal visual systems are constantly seeing temporally coherent images . Yet many popular computer vision models learn representations which are limited to inference on single images , lacking temporal context . Representations learned from static images are inherently limited to an understanding of the world as many unrelated static snapshots . This is especially true of recent unsupervised learning techniques ( Bachman et al. , 2019 ; Chen et al. , 2020a ; He et al. , 2020 ; Hénaff et al. , 2019 ; Hjelm et al. , 2019 ; Misra & Maaten , 2020 ; Tian et al. , 2019 ; Wu et al. , 2018 ) , all of which train on a set of highly-curated , well-balanced data : ImageNet ( Deng et al. , 2009 ) . Scaling up these techniques to larger , less-curated datasets like Instagram1B ( Mahajan et al. , 2018 ) has not provided large improvements in performance ( He et al. , 2020 ) . Only so much can be learned from a single image : no amount of artificial augmentation can show a new view of an object or what might happen next in a scene . This dichotomy can be seen in Figure 1 . In order to move beyond this limitation , we argue that video supplies significantly more semantically meaningful content than a single image . With video , we can see how the world changes , find connections between images , and more directly observe the underlying scene . Prior work using temporal cues has shown success in learning from unlabeled videos ( Misra et al. , 2016 ; Wang et al. , 2019 ; Srivastava et al. , 2015 ) , but has not been able to surpass supervised pretraining . On the other hand , single-image techniques ( Gutmann & Hyvärinen , 2010 ) have shown improvements over the state of the art by using Noise Contrastive Estimation ( NCE ) . In this work , we merge the two concepts with Video Noise Contrastive Estimation ( VINCE ) , a method for using unlabeled videos as a basis for learning visual representations . Instead of predicting whether two feature vectors come from the same underlying image , we task our network with predicting whether two images originate from the same video . Not only does this allow our method to learn how a single object might change , it also enables learning which things might be in a scene together , e.g. , cats are more likely to be in videos with dogs than with sharks . Additionally , we generalize the NCE technique to operate on multiple positive pairs from a single source . To facilitate this learning , we construct Random Related Video Views ( R2V2 ) , a set of 960,000 frames from 240,000 uncurated videos . ( Code and the Random Related Video Views dataset will be made available . ) Using our learning technique , we achieve across-the-board improvements over the recent Momentum Contrast method ( He et al. , 2020 ) as well as over a network pretrained on supervised ImageNet on diverse tasks such as scene classification , activity recognition , and object tracking . 2 RELATED WORK . 2.1 NOISE CONTRASTIVE ESTIMATION ( NCE ) . The NCE loss ( Gutmann & Hyvärinen , 2010 ) is at the center of many recent representation learning methods ( Bachman et al. , 2019 ; Chen et al. , 2020a ; He et al. , 2020 ; Hénaff et al. , 2019 ; Hjelm et al. , 2019 ; Misra & Maaten , 2020 ; Tian et al. , 2019 ; Wu et al. , 2018 ) .
Similar to the triplet loss ( Chechik et al. , 2010 ) , the basic principle behind NCE is to maximize the similarity between an anchor data point and a positive data point while minimizing similarity to all other ( negative ) points . A challenge for using NCE in an unsupervised fashion is devising a way to construct positive pairs . Pairs should be different enough that a network learns a non-trivial representation , but structured enough that the learned representation is useful for downstream tasks . A standard approach used by ( Bachman et al. , 2019 ; Chen et al. , 2020a ; He et al. , 2020 ) is to generate the pairs via artificial data augmentation techniques such as color jitter , cropping , and flipping . Contrastive Multiview Coding ( Tian et al. , 2019 ) uses multiple “ views ” of a single source image such as intensity ( L ) , color ( ab ) , depth , or segmentation , training separate encoders for each view . PIRL ( Misra & Maaten , 2020 ) uses the jigsaw technique ( Noroozi & Favaro , 2016 ) to break the image into non-overlapping regions and learns a shared representation for the full image and the shuffled image patches . Similarly , Contrastive Predictive Coding ( CPC ) ( Hénaff et al. , 2019 ) uses crops of an image as “ context ” and predicts features for the unseen portions of the image . We provide a more natural data augmentation by using multiple frames from a single video . As a video progresses , the objects in the scene , the background , and the camera itself may move , providing new views . Whereas augmentations on an image are constrained by a single snapshot in time , using different frames from a single video gives entirely new information about the scene . Additionally , rather than restricting our method to only use two frames from a video , we generalize the NCE technique to use many images from a single video , resulting in more computational reuse and a better final representation ( Bachman et al . ( 2019 ) similarly makes multiple comparisons per pair , but each anchor has only one positive ) . 2.2 UNSUPERVISED LEARNING USING VIDEO CUES . In contrast with supervised learning which requires hand-labeling , self-supervised and unsupervised learning acquire their labels for free . These techniques create datasets which are orders of magnitude larger than comparable fully-supervised datasets . Whereas self-supervised learning requires extra setup during data generation ( Godard et al. , 2017 ; Pinto et al. , 2016 ; Schmidt et al. , 2016 ) , unsupervised learning can use existing data without any specific data generation constraints . Unsupervised single image methods such as auto-encoders ( Kramer , 1991 ) , colorization ( Zhang et al. , 2016 ) , GANs ( Radford et al. , 2015 ) , jigsaw ( Noroozi & Favaro , 2016 ) , and NCE ( Wu et al. , 2018 ) rely on properties of the images themselves and can be applied to arbitrary datasets . However these datasets can not represent temporal information , nor can they show novel object views or occlusions . Video data automatically provides temporal cohesion which can be used as additional supervisory signal to learn these phenomena . There is a long history of using videos for low level ( Li et al. , 2019 ; Meister et al. , 2018 ; Srivastava et al. , 2015 ) and high-level tasks ( Misra et al. , 2016 ; Wang et al. , 2019 ) . One of the most common unsupervised setups is using the present to predict the future . 
The Natural Language Processing community has embraced language modeling as an unsupervised task , which has resulted in numerous breakthroughs ( Devlin et al. , 2019 ; Mikolov et al. , 2013 ; Radford et al. , 2019 ) . However , similar systems applied to unlabeled videos have not revolutionized computer vision . These representations still underperform supervised methods due to several issues . Primarily , neighboring video frames do not change nearly as much as neighboring words in a sentence , so a network which learns the identity function would perform well at next frame prediction . Additionally , words are reused and can thus be tokenized in an effective way , whereas images never repeat , especially between two disparate video sources . To avoid these issues , many have opted for other methods . Anand et al . ( 2019 ) use the NCE loss to discriminate between temporally near frames and temporally far frames of ATARI gameplay but do not compare across games . Han et al . ( 2019 ) also use the NCE loss and the CPC technique on a 3D-ResNet to learn spatio-temporal features . Similar to our proposal , Tschannen et al . ( 2020 ) and Purushwalkam & Gupta ( 2020 ) advocate using videos to learn invariances to deformations , color changes , occlusions , and other difficult scenarios which preserve the semantic relationships of the video subjects but drastically change the appearance . These works provide additional evidence that videos offer a richer visual backing as compared with single-image methods . Aside from the NCE approach , others have proposed alternative video training tasks . Misra et al . ( 2016 ) shuffle the frames of a video and train a network to predict whether they are correctly temporally ordered . Wang et al . ( 2019 ) and Vondrick et al . ( 2018 ) use cycle consistency and color as a form of tracking from one frame onto another . Earlier work from Wang & Gupta ( 2015 ) uses hand-crafted features to track patches of a video and learn similarities between the patches . Others have explored learning representations from multi-modal inputs provided by video such as optical flow or audio ( Korbar et al. , 2018 ; Owens & Efros , 2018 ; Piergiovanni et al. , 2020 ; Zhao et al. , 2018 ) . Our approach is inspired by these works but focuses on learning a semantic representation of the entire scene based on a single frame from a video . Many of these prior works require multiple frames for their representation , and thus cannot be used on single-image downstream tasks . However , if a network can consistently represent visually dissimilar images from the same video with similar vectors , then not only has it learned how to recognize what is in each image , but it can also represent what might happen in the past or future of that scene . 3 METHODS . In order to learn a semantically meaningful representation , we exploit the natural augmentations provided by unlabeled videos . In this section , we first outline the dataset generation process . We then describe the learning algorithm used to train our representation . 3.1 DATASET . Using ImageNet as a basis for representation learning has shown remarkable success both with supervised pretraining as well as unsupervised learning . However , even without labels , the images of ImageNet have been hand-selected and are unnaturally balanced . To improve learned representations using existing techniques may require significantly larger datasets ( He et al .
, 2020 ) , but obtaining data with similar properties automatically and at scale is not practical . Instead , we turn to unlabeled videos as a source of additional supervision . In order to train on a diverse set of realistic video frames , we collect a new dataset which we call Random Related Video Views ( R2V2 ) . We use the following fast and automated procedure to generate the images in our dataset . 1 . Use YouTube Search to find videos for a set of queries , and download the top K videos licensed under the Creative Commons . In practice we use the ImageNet 1K classes . 2 . Filter out videos with static images . 3 . Pick a random point in the video and extract T images with a gap of G frames between each image . In practice T = 4 and G = 150 . Using this procedure , we are able to construct R2V2 in under a day on a single machine . Because our dataset is constructed automatically , we can easily gather more data ( more frames per video , more videos overall ) . In this work we limit the scale to roughly that of comparable datasets ( see Table 1 ) . More details on data collection are provided in Appendix 1 .
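Section 3 states that the learning algorithm is described after the dataset. As context for how frames from R2V2 could be used, here is a rough numpy sketch of a contrastive objective in which all frames drawn from the same video act as positives for one another, following the multi-positive generalization of NCE described in the introduction. The encoder features, temperature value, and batching scheme are placeholders, and this is an illustrative assumption rather than the paper's actual VINCE formulation.

```python
import numpy as np

def multi_positive_nce(embeddings, video_ids, temperature=0.07):
    """Contrastive loss where frames from the same video are positives.

    embeddings : (N, d) array of frame features (e.g. from a CNN encoder)
    video_ids  : (N,) integer array; frames sharing an id come from one video
    """
    Z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = Z @ Z.T / temperature                        # pairwise similarities
    N = len(video_ids)
    same_video = video_ids[:, None] == video_ids[None, :]
    not_self = ~np.eye(N, dtype=bool)
    pos_mask = same_video & not_self

    # For each anchor, average -log softmax over all of its positives.
    losses = []
    for i in range(N):
        logits = sim[i][not_self[i]]                   # drop the self-similarity
        log_denominator = np.log(np.exp(logits).sum())
        pos_logits = sim[i][pos_mask[i]]
        if len(pos_logits) == 0:
            continue                                   # anchor with no positives
        losses.append(np.mean(log_denominator - pos_logits))
    return float(np.mean(losses))

# Toy batch: 3 videos x 4 frames each, random features standing in for an encoder.
rng = np.random.default_rng(0)
feats = rng.normal(size=(12, 128))
vids = np.repeat(np.arange(3), 4)
print(multi_positive_nce(feats, vids))
```

Minimizing such a loss pushes embeddings of frames from the same video together and frames from different videos apart, which matches the stated goal of predicting whether two images originate from the same video.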
The idea of learning representations from video rather than single images is an appealing one with many favorable properties: it allows a system to get a direct signal on the appearance of objects under various natural transformations (occlusion, lighting, etc.). Combining the instance-discrimination idea of a contrastive loss over unlabelled images, for which it is known whether they are similar or not, with the idea of curating the images from video is hypothesized to yield learned representations that capture properties enabling improved performance across a variety of single-image tasks. The authors create a video-based dataset with positive pairs for noise contrastive estimation, conduct fairly comprehensive experiments, and promise to make their newly constructed dataset available. The experiments show that this type of learned representation outperforms alternatives not based on videos on a variety of tasks.
SP:07bb9747aaae728b6e29880610dc5173af0f9e01
Watching the World Go By: Representation Learning from Unlabeled Videos
1 INTRODUCTION . The world seen through our eyes is constantly changing . As we move through the world , we see much more than a single static image : objects rotate revealing occluded regions , deform , the surroundings change , and we ourselves move . Our internal visual systems are constantly seeing temporally coherent images . Yet many popular computer vision models learn representations which are limited to inference on single images , lacking temporal context . Representations learned from static images are inherently limited to an understanding of the world as many unrelated static snapshots . This is especially true of recent unsupervised learning techniques ( Bachman et al. , 2019 ; Chen et al. , 2020a ; He et al. , 2020 ; Hénaff et al. , 2019 ; Hjelm et al. , 2019 ; Misra & Maaten , 2020 ; Tian et al. , 2019 ; Wu et al. , 2018 ) , all of which train on a set of highly-curated , well-balanced data : ImageNet ( Deng et al. , 2009 ) . Scaling up these techniques to larger , less-curated datasets like Instagram1B ( Mahajan et al. , 2018 ) has not provided large improvements in performance ( He et al. , 2020 ) . Only so much can be learned from a single image : no amount of artificial augmentation can show a new view of an object or what might happen next in a scene . This dichotomy can be seen in Figure 1 . In order to move beyond this limitation , we argue that video supplies significantly more semantically meaningful content than a single image . With video , we can see how the world changes , find connections between images , and more directly observe the underlying scene . Prior work using temporal cues has shown success in learning from unlabeled videos ( Misra et al. , 2016 ; Wang et al. , 2019 ; Srivastava et al. , 2015 ) , but has not been able to surpass supervised pretraining . On the other hand , single image techniques ( Gutmann & Hyvärinen , 2010 ) have shown improvements over the state of the art by using Noise Contrastive Estimation ( NCE ) . In this work , we merge the two concepts with Video Noise Contrastive Estimation ( VINCE ) , a method for using unlabeled videos as a basis for learning visual representations . Instead of predicting whether two feature vectors come from the same underlying image , we task our network with predicting whether two images originate from the same video . Not only does this allow our method to learn how a single object might change , it also enables learning which things might be in a scene together , e.g. , cats are more likely to be in videos with dogs than with sharks . Additionally , we generalize the NCE technique to operate on multiple positive pairs from a single source . To facilitate this learning , we construct Random Related Video Views ( R2V2 ) , a set of 960,000 frames from 240,000 uncurated videos ; code and the R2V2 dataset will be made available . Using our learning technique , we achieve across-the-board improvements over the recent Momentum Contrast method ( He et al. , 2020 ) as well as over a network pretrained on supervised ImageNet on diverse tasks such as scene classification , activity recognition , and object tracking . 2 RELATED WORK . 2.1 NOISE CONTRASTIVE ESTIMATION ( NCE ) . The NCE loss ( Gutmann & Hyvärinen , 2010 ) is at the center of many recent representation learning methods ( Bachman et al. , 2019 ; Chen et al. , 2020a ; He et al. , 2020 ; Hénaff et al. , 2019 ; Hjelm et al. , 2019 ; Misra & Maaten , 2020 ; Tian et al. , 2019 ; Wu et al. , 2018 ) .
Similar to the triplet loss ( Chechik et al. , 2010 ) , the basic principle behind NCE is to maximize the similarity between an anchor data point and a positive data point while minimizing similarity to all other ( negative ) points . A challenge for using NCE in an unsupervised fashion is devising a way to construct positive pairs . Pairs should be different enough that a network learns a non-trivial representation , but structured enough that the learned representation is useful for downstream tasks . A standard approach used by ( Bachman et al. , 2019 ; Chen et al. , 2020a ; He et al. , 2020 ) is to generate the pairs via artificial data augmentation techniques such as color jitter , cropping , and flipping . Contrastive Multiview Coding ( Tian et al. , 2019 ) uses multiple “ views ” of a single source image such as intensity ( L ) , color ( ab ) , depth , or segmentation , training separate encoders for each view . PIRL ( Misra & Maaten , 2020 ) uses the jigsaw technique ( Noroozi & Favaro , 2016 ) to break the image into non-overlapping regions and learns a shared representation for the full image and the shuffled image patches . Similarly , Contrastive Predictive Coding ( CPC ) ( Hénaff et al. , 2019 ) uses crops of an image as “ context ” and predicts features for the unseen portions of the image . We provide a more natural data augmentation by using multiple frames from a single video . As a video progresses , the objects in the scene , the background , and the camera itself may move , providing new views . Whereas augmentations on an image are constrained by a single snapshot in time , using different frames from a single video gives entirely new information about the scene . Additionally , rather than restricting our method to only use two frames from a video , we generalize the NCE technique to use many images from a single video , resulting in more computational reuse and a better final representation ( Bachman et al . ( 2019 ) similarly makes multiple comparisons per pair , but each anchor has only one positive ) . 2.2 UNSUPERVISED LEARNING USING VIDEO CUES . In contrast with supervised learning which requires hand-labeling , self-supervised and unsupervised learning acquire their labels for free . These techniques create datasets which are orders of magnitude larger than comparable fully-supervised datasets . Whereas self-supervised learning requires extra setup during data generation ( Godard et al. , 2017 ; Pinto et al. , 2016 ; Schmidt et al. , 2016 ) , unsupervised learning can use existing data without any specific data generation constraints . Unsupervised single image methods such as auto-encoders ( Kramer , 1991 ) , colorization ( Zhang et al. , 2016 ) , GANs ( Radford et al. , 2015 ) , jigsaw ( Noroozi & Favaro , 2016 ) , and NCE ( Wu et al. , 2018 ) rely on properties of the images themselves and can be applied to arbitrary datasets . However these datasets can not represent temporal information , nor can they show novel object views or occlusions . Video data automatically provides temporal cohesion which can be used as additional supervisory signal to learn these phenomena . There is a long history of using videos for low level ( Li et al. , 2019 ; Meister et al. , 2018 ; Srivastava et al. , 2015 ) and high-level tasks ( Misra et al. , 2016 ; Wang et al. , 2019 ) . One of the most common unsupervised setups is using the present to predict the future . 
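As a concrete illustration of the multi-positive generalization described above, the sketch below computes an InfoNCE-style loss in which every other frame from the same video in the batch is treated as a positive and frames from other videos as negatives. The exact VINCE objective, temperature, and implementation details are not given in this excerpt, so the specific form, tensor layout, and default values here are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def multi_positive_nce(embeddings, video_ids, temperature=0.07):
    """InfoNCE-style loss with multiple positives per anchor.
    embeddings: (N, D) features for a batch of frames.
    video_ids:  (N,) integer id of the source video for each frame."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                       # (N, N) pairwise similarities
    n = sim.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    pos_mask = (video_ids.unsqueeze(0) == video_ids.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float('-inf'))     # never contrast a frame with itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-probability of the positives for each anchor that has at least one.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob * pos_mask.float()).sum(dim=1) / pos_counts
    return per_anchor[pos_mask.any(dim=1)].mean()
```

Compared with the standard single-positive NCE used by augmentation-based methods, each anchor here contributes one term per additional frame sampled from its video, which is the computational reuse referred to above.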
The Natural Language Processing community has embraced language modeling as an unsupervised task which has resulted in numerous breakthroughs ( Devlin et al. , 2019 ; Mikolov et al. , 2013 ; Radford et al. , 2019 ) . However , similar systems applied to unlabeled videos have not revolutionized computer vision . These representations still underperform supervised methods due to several issues . Primarily , neighboring video frames do not change nearly as much as neighboring words in a sentence , so a network which learns the identity function would perform well at next frame prediction . Additionally , words are reused and can thus be tokenized in an effective way whereas images never repeat , especially between two disparate video sources . To avoid these issues , many have opted for other methods . Anand et al . ( 2019 ) use the NCE loss to discriminate between temporally near frames and temporally far frames of ATARI gameplay but do not compare across games . Han et al . ( 2019 ) also use the NCE loss and the CPC technique on a 3D-ResNet to learn spatio-temporal features . Similar to our proposal , Tschannen et al . ( 2020 ) and Purushwalkam & Gupta ( 2020 ) advocate using videos to learn invariances to deformations , color changes , occlusions , and other difficult scenarios which preserve the semantic relationships of the video subjects but drastically change the appearance . These works provide additional evidence that videos offer a richer visual backing as compared with single image methods . Aside from the NCE approach , others have proposed alternative video training tasks . Misra et al . ( 2016 ) shuffle the frames of a video and train a network to predict whether they are correctly temporally ordered . Wang et al . ( 2019 ) and Vondrick et al . ( 2018 ) use cycle consistency and color as a form of tracking from one frame onto another . Earlier work from Wang & Gupta ( 2015 ) uses hand-crafted features to track patches of a video and learn similarities between the patches . Others have explored learning representations from multi-modal inputs provided by video such as optical flow or audio ( Korbar et al. , 2018 ; Owens & Efros , 2018 ; Piergiovanni et al. , 2020 ; Zhao et al. , 2018 ) . Our approach is inspired by these works but focuses on learning a semantic representation of the entire scene based on a single frame from a video . Many of these prior works require multiple frames for their representation , and thus cannot be used on single-image downstream tasks . However , if a network can consistently represent visually dissimilar images from the same video with similar vectors , then not only has it learned how to recognize what is in each image , but it can also represent what might happen in the past or future of that scene . 3 METHODS . In order to learn a semantically meaningful representation , we exploit the natural augmentations provided by unlabeled videos . In this section , we first outline the dataset generation process . We then describe the learning algorithm used to train our representation . 3.1 DATASET . Using ImageNet as a basis for representation learning has shown remarkable success both with supervised pretraining as well as unsupervised learning . However , even without labels , the images of ImageNet have been hand selected and are unnaturally balanced . Improving learned representations using existing techniques may require significantly larger datasets ( He et al.
, 2020 ) , but obtaining data with similar properties automatically and at scale is not practical . Instead , we turn to unlabeled videos as a source of additional supervision . In order to train on a diverse set of realistic video frames , we collect a new dataset which we call Random Related Video Views ( R2V2 ) . We use the following fast and automated procedure to generate the images in our dataset .
1 . Use YouTube Search to find videos for a set of queries , and download the top K videos licensed under the Creative Commons . In practice we use the ImageNet 1K classes .
2 . Filter out videos with static images .
3 . Pick a random point in the video and extract T images with a gap of G frames between each image . In practice T = 4 and G = 150 .
Using this procedure , we are able to construct R2V2 in under a day on a single machine . Because our dataset is constructed automatically , we can easily gather more data ( more frames per video , more videos overall ) . In this work we limit the scale to roughly that of comparable datasets ( see Table 1 ) . More details on data collection are provided in Appendix 1 .
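As a rough illustration of step 3 above, the sketch below extracts T frames spaced G frames apart from an already-downloaded video file using OpenCV. The static-video filter shown here (mean absolute difference between sampled frames), the threshold value, and the function name are assumptions for illustration; the paper defers the actual collection details to its appendix.

```python
import random
import cv2  # OpenCV; assumed tooling, not specified by the paper
import numpy as np

T, G = 4, 150  # values reported in the paper

def extract_r2v2_frames(video_path, t=T, g=G, static_thresh=5.0):
    """Pick a random start point and grab t frames spaced g frames apart.
    Returns None if the video looks static (heuristic, assumed here)."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    span = (t - 1) * g
    if total <= span:
        return None
    start = random.randint(0, total - span - 1)
    frames = []
    for i in range(t):
        cap.set(cv2.CAP_PROP_POS_FRAMES, start + i * g)
        ok, frame = cap.read()
        if not ok:
            return None
        frames.append(frame)
    cap.release()
    # Crude static-image filter: require some change between consecutive frames.
    diffs = [np.abs(a.astype(np.float32) - b.astype(np.float32)).mean()
             for a, b in zip(frames, frames[1:])]
    if np.mean(diffs) < static_thresh:
        return None
    return frames
```

In practice the same routine would be run over the top-K Creative Commons search results for each of the ImageNet 1K query terms.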
This paper incorporates the popular contrastive approach into unsupervised learning from video. Specifically, multiple frames from the same video are used as positive pairs and frames from different videos are viewed as negative pairs. The authors also propose a simple and effective way to collect a class-balanced and diverse video-frame dataset from YouTube. They conduct extensive evaluation experiments on both video recognition and image recognition downstream tasks. Extensive ablation experiments demonstrate the effectiveness of utilizing multiple frames and of the balanced data collection procedure.
SP:07bb9747aaae728b6e29880610dc5173af0f9e01
PanRep: Universal node embeddings for heterogeneous graphs
1 INTRODUCTION . Learning node representations from heterogeneous graph data powers the success of many downstream machine learning tasks such as node classification ( Kipf & Welling , 2017 ) and link prediction ( Wang et al. , 2017 ) . Graph neural networks ( GNNs ) learn node embeddings by applying a sequence of nonlinear operations parametrized by the graph adjacency matrix and achieve state-of-the-art performance in the aforementioned downstream tasks . The era of big data provides an opportunity for machine learning methods to harness large datasets ( Wu et al. , 2013 ) . Nevertheless , typically the labels in these datasets are scarce due to either lack of information or increased labeling costs ( Bengio et al. , 2012 ) . The lack of labeled data points hinders the performance of supervised algorithms , which may not generalize well to unseen data , and motivates unsupervised learning . Unsupervised node embeddings may be used for downstream learning tasks , even though the specific tasks are typically not known a priori . For example , node representations of the Amazon book graph can be employed for recommending new books as well as classifying a book ’ s genre . This work aspires to provide universal node embeddings , which can be applied to multiple downstream tasks and achieve performance comparable to their supervised counterparts . Although unsupervised learning has numerous applications , limited labels for the downstream task may be available . Refining the unsupervised universal representations with these labels could further increase the representation power of the embeddings . This can be achieved by fine-tuning the unsupervised model . Natural language processing methods have achieved state-of-the-art performance by applying such a fine-tuning framework ( Devlin et al. , 2018 ) . Fine-tuning pretrained models is beneficial compared to end-to-end supervised learning since the former typically generalizes better , especially when labeled data are limited , and decreases the inference time , since just a few fine-tuning iterations typically suffice for the model to converge ( Erhan et al. , 2010 ) . This work introduces a framework for unsupervised learning of universal node representations on heterogeneous graphs termed PanRep ( Pan : Pangkosmios , Greek for universal , and Rep : Representation ) . It consists of a GNN encoder that maps the heterogeneous graph data to node embeddings and four decoders , each capturing different topological and node feature properties . The cluster and recover ( CR ) decoder exploits a clustering prior of the node attributes . The motif ( Mot ) decoder captures structural node properties that are encoded in the network motifs . The meta-path random walk ( MRW ) decoder promotes embedding similarity among nodes participating in an MRW and hence captures intermediate neighborhood structure . Finally , the heterogeneous information maximization ( HIM ) decoder aims at maximizing the mutual information between node local and global representations per node type . These decoders model general properties of the graph data related to node homophily ( Gleich , 2015 ; Kloster & Gleich , 2014 ) or node structural similarity ( Rossi & Ahmed , 2014 ; Donnat et al. , 2018 ) . PanRep is solely supervised by the decoders and has no knowledge of the labels of the downstream task . The universal embeddings learned by PanRep are employed as features by models such as SVM ( Suykens & Vandewalle , 1999 ) or DistMult ( Yang et al.
, 2014 ) to be trained for the downstream tasks . To further accommodate the case where limited labels are available for some downstream tasks , we propose fine-tuning PanRep ( PanRep-FT ) . In this operational setting , PanRep-FT is optimized with respect to a task-specific loss . PanRep can be considered as a pretrained model for extracting node embeddings of heterogeneous graph data . Figure 1 illustrates the two novel models . The contribution of this work is threefold . C1 . We introduce a novel problem formulation of universal unsupervised learning and design a tailored learning framework termed PanRep . We identify the following general properties of the heterogeneous graph data : ( i ) the clustering of local node features , ( ii ) structural similarity among nodes , ( iii ) the local and intermediate neighborhood structure , and ( iv ) the mutual information among same-type nodes . We develop four novel decoders to model the aforementioned properties . C2 . We adjust the unsupervised universal learning framework to account for possibly limited labels of the downstream task . PanRep-FT refines the universal embeddings and increases the model generalization capability . C3 . We compare the proposed models to state-of-the-art supervised and unsupervised methods for node classification and link prediction . PanRep outperforms all unsupervised and certain supervised methods in node classification , especially when the labeled data for the supervised methods are limited . PanRep-FT outperforms even supervised approaches in node classification and link prediction , which corroborates the merits of pretraining models . Finally , we apply our method on the drug-repurposing knowledge graph ( DRKG ) for discovering drugs for Covid-19 and identify several drugs used in clinical trials as possible drug candidates . 2 RELATED WORK . Unsupervised learning . Representation learning amounts to mapping nodes to an embedding space where the graph topological information and structure are preserved ( Hamilton et al. , 2017 ) . Typically , representation learning methods follow the encoder-decoder framework advocated by PanRep . Nevertheless , the decoder is typically attuned to a single task based on , e.g. , matrix factorization ( Tang et al. , 2015 ; Ahmed et al. , 2013 ; Cao et al. , 2015 ; Ou et al. , 2016 ) , random walks ( Grover & Leskovec , 2016 ; Perozzi et al. , 2014 ) , or kernels on graphs ( Smola & Kondor , 2003 ) . Recently , methods relying on GNNs have become increasingly popular for representation learning tasks ( Wu et al. , 2020 ) . GNNs typically rely on random walk-based objectives ( Grover & Leskovec , 2016 ; Hamilton et al. , 2017 ) or on maximizing the mutual information among node representations ( Veličković et al. , 2018b ) . Relational GNN methods extend representation learning to heterogeneous graphs ( Dong et al. , 2017 ; Shi et al. , 2018 ; Shang et al. , 2016 ) . Relative to these contemporary works , PanRep introduces multiple decoders to learn universal embeddings for heterogeneous graph data , capturing the clustering of local node features , structural similarity among nodes , the local and intermediate neighborhood structure , and the mutual information among same-type nodes . Supervised learning . Node classification is typically formulated as a semi-supervised learning ( SSL ) task over graphs , where the labels for a subset of nodes are available for training ( Belkin et al. , 2004 ) .
GNNs achieve state-of-the-art performance in SSL by utilizing regular graph convolution ( Kipf & Welling , 2017 ) or graph attention ( Veličković et al. , 2018a ) , and these models have been extended to heterogeneous graphs ( Schlichtkrull et al. , 2018 ; Fu et al. , 2020 ; Wang et al. , 2019b ) . Similarly , another prominent supervised downstream learning task is link prediction , with numerous applications in recommendation systems ( Wang et al. , 2017 ) and drug discovery ( Zhou et al. , 2020 ; Ioannidis et al. , 2020 ) . Knowledge-graph ( KG ) embedding models rely on mapping the nodes and edges of the KG to a vector space by maximizing a score function for existing KG edges ( Wang et al. , 2017 ; Yang et al. , 2014 ; Zheng et al. , 2020 ) . RGCN models ( Schlichtkrull et al. , 2018 ) have been successful in link prediction and , contrary to KG embedding models , can further utilize node features . The universal embeddings extracted from PanRep without labeled supervision offer a strong alternative to these supervised approaches for both node classification and link prediction tasks . Pretraining . Pretrained models provide a significant performance boost compared to traditional approaches in natural language processing ( Devlin et al. , 2018 ; Radford et al. , 2019 ) and computer vision ( Donahue et al. , 2014 ; Girshick et al. , 2014 ) . Pretraining offers increased generalization capability , especially when the labeled data are scarce , and increased inference speed relative to end-to-end training ( Devlin et al. , 2018 ) . In parallel to our work , ( Hu et al. , 2020 ) and ( Qiu et al. , 2020 ) proposed pretraining models for GNNs . These contemporary works supervise the models only using attribute or edge generation schemes , without accounting for higher-order structural similarity or maximizing the mutual information of the embeddings . Recently , ( Hu et al. , 2019 ) introduced a framework for pretraining GNNs for graph classification . Different from ( Hu et al. , 2019 ) , which focuses on graph representations , PanRep aims at node prediction tasks and obtains node representations via capturing properties related to node homophily ( Gleich , 2015 ) or node structural similarity ( Rossi & Ahmed , 2014 ) . PanRep is a novel pretrained model for node classification and link prediction that requires significantly fewer labeled points to reach the performance of its fully supervised counterparts . 3 DEFINITIONS AND PROBLEM FORMULATION . A heterogeneous graph with T node types and R relation types is defined as $\mathcal{G} := \{\{\mathcal{V}_t\}_{t=1}^{T} , \{\mathcal{E}_r\}_{r=1}^{R}\}$ . The node types represent the different entities and the relation types represent how these entities are semantically associated with each other . For example , in the IMDB network , the node types correspond to actors , directors , movies , etc. , whereas the relation types correspond to directed-by and played-in relations . The number of nodes of type t is denoted by $N_t$ and its associated nodal set by $\mathcal{V}_t := \{n_t\}_{n=1}^{N_t}$ . The total number of nodes in $\mathcal{G}$ is $N := \sum_{t=1}^{T} N_t$ . The rth relation type , $\mathcal{E}_r := \{(n_t , n'_{t'}) \in \mathcal{V}_t \times \mathcal{V}_{t'}\}$ , holds all interactions of a certain type among $\mathcal{V}_t$ and $\mathcal{V}_{t'}$ and may represent , for example , that a movie is directed-by a director . Heterogeneous graphs are typically used to represent KGs ( Wang et al. , 2017 ) . Each node $n_t$ is also associated with an $F \times 1$ feature vector $\mathbf{x}_{n_t}$ . This feature may be a natural language embedding of the title of a movie . The nodal features are collected in an $N \times F$ matrix $\mathbf{X}$ .
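To make the notation above concrete, a heterogeneous graph of this form can be represented, for instance, as a dictionary of typed edge lists plus per-type feature matrices. The class below and the toy IMDB-style node and relation types are illustrative only and are not part of PanRep itself.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import numpy as np

@dataclass
class HeteroGraph:
    # node_counts[t] = N_t, the number of nodes of type t
    node_counts: Dict[str, int]
    # edges[(src_type, relation, dst_type)] = list of (src_id, dst_id) pairs, i.e. E_r
    edges: Dict[Tuple[str, str, str], List[Tuple[int, int]]]
    # features[t] = (N_t, F) array of node features for type t (may be absent)
    features: Dict[str, np.ndarray] = field(default_factory=dict)

# A toy IMDB-style graph matching the example in the text (values are made up).
g = HeteroGraph(
    node_counts={"movie": 3, "director": 2, "actor": 4},
    edges={
        ("movie", "directed-by", "director"): [(0, 0), (1, 0), (2, 1)],
        ("actor", "played-in", "movie"): [(0, 0), (1, 0), (2, 1), (3, 2)],
    },
    features={"movie": np.random.randn(3, 16)},  # e.g. title embeddings, F = 16
)
```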
Note that certain node types may not have features , and for these we use an embedding layer to represent their features . Unsupervised learning . Given $\mathcal{G}$ and $\mathbf{X}$ , the goal of representation learning is to estimate a function g such that $\mathbf{H} := g(\mathbf{X} , \mathcal{G})$ , where $\mathbf{H} \in \mathbb{R}^{N \times D}$ represents the node embeddings and D is the size of the embedding space . Note that in estimating g , no labeled information is available . Universal representation learning . The universal representations $\mathbf{H}$ should perform well on different downstream tasks . Different node classification and link prediction tasks may arise by considering different numbers of training nodes and links and different label types , e.g. , occupation label or education level label . Consider I downstream tasks ; for the universal representations $\mathbf{H}$ it holds that $\mathcal{L}^{(i)}(f^{(i)}(\mathbf{H}) , \mathcal{T}^{(i)}) \leq \epsilon , \quad i = 1 , \ldots , I , \quad (1)$ where $\mathcal{L}^{(i)}$ , $f^{(i)}$ , and $\mathcal{T}^{(i)}$ represent the loss function , learned classifier , and training set ( node labels or links ) for task i , respectively , and $\epsilon$ is the largest error over all tasks . The goal of unsupervised universal representation learning is to learn $\mathbf{H}$ such that $\epsilon$ is small . While learning $\mathbf{H}$ , PanRep does not have knowledge of $\{\mathcal{L}^{(i)} , f^{(i)} , \mathcal{T}^{(i)}\}_{i}$ . Nevertheless , by utilizing the novel decoder scheme , PanRep achieves superior performance even compared to supervised approaches across tasks .
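A minimal sketch of how such universal embeddings might be consumed downstream, assuming a frozen embedding matrix H and scikit-learn classifiers; the per-task SVM, the zero-one error, and the bound check against a chosen eps are illustrative rather than the authors' exact protocol.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import zero_one_loss

def evaluate_universal_embeddings(H, tasks, eps=0.2):
    """H: (N, D) frozen node embeddings produced without labels.
    tasks: list of (train_idx, train_y, test_idx, test_y) node-classification tasks.
    Returns per-task errors and whether the largest error stays below eps."""
    errors = []
    for train_idx, train_y, test_idx, test_y in tasks:
        clf = LinearSVC()                # f^(i): a simple task-specific classifier
        clf.fit(H[train_idx], train_y)   # trained only on that task's labels T^(i)
        errors.append(zero_one_loss(test_y, clf.predict(H[test_idx])))
    return errors, max(errors) <= eps
```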
This paper proposes a universal and unsupervised GNN-based representation learning (node embedding pretraining) model named PanRep for heterogeneous graphs, which benefits a variety of downstream tasks such as node classification and link prediction. More specifically, employing an encoder similar to R-GCN, PanRep utilizes four different types of universal supervision signals for heterogeneous graphs, i.e., cluster-and-recover, motif, meta-path random walk, and heterogeneous information maximization, to better characterize the graph structures. For specific applications, PanRep can be further fine-tuned in a semi-supervised manner with limited labeled data; this variant is known as PanRep-FT. Experiments on benchmark datasets show its effectiveness and performance gains over other unsupervised and some supervised baseline approaches. As an example use case, the method is applied to COVID-19 drug repurposing to identify potential treatment candidates.
SP:d3804a2538416b73935cbece4344fa8ad9d4bbe9
This paper introduces a problem formulation of universal unsupervised learning. The authors develop an unsupervised node representation learning method by combining four signals: (1) cluster-and-recover supervision, (2) motif supervision, (3) meta-path random walk supervision, and (4) heterogeneous information maximization. Aside from conducting experiments on node classification and link prediction, they apply their method to a drug-repurposing knowledge graph to discover drugs for Covid-19.
SP:d3804a2538416b73935cbece4344fa8ad9d4bbe9
Correcting experience replay for multi-agent communication
1 INTRODUCTION . Since the introduction of deep Q-learning ( Mnih et al. , 2013 ) , it has become very common to use previous online experience , for instance stored in a replay buffer , to train agents in an offline manner . An obvious difficulty with doing this is that the information concerned may be out of date , leading the agent woefully astray in cases where the environment of an agent changes over time . One obvious strategy is to discard old experiences . However , this is wasteful – it requires many more samples from the environment before adequate policies can be learned , and may prevent agents from leveraging past experience sufficiently to act in complex environments . Here , we consider an alternative , Orwellian possibility , of using present information to correct the past , showing that it can greatly improve an agent ’ s ability to learn . We explore a paradigm case involving multiple agents that must learn to communicate to optimise their own or task-related objectives . As with deep Q-learning , modern model-free approaches often seek to learn this communication off-policy , using experience stored in a replay buffer ( Foerster et al. , 2016 ; 2017 ; Lowe et al. , 2017 ; Peng et al. , 2017 ) . However , multi-agent reinforcement learning ( MARL ) can be particularly challenging as the underlying game-theoretic structure is well known to lead to non-stationarity , with past experience becoming obsolete as agents come progressively to use different communication codes . It is this that our correction addresses . Altering previously communicated messages is particularly convenient for our purposes as it has no direct effect on the actual state of the environment ( Lowe et al. , 2019 ) , but a quantifiable effect on the observed message , which constitutes the receiver ’ s ‘ social environment ’ . We can therefore determine what the received message would be under the communicator ’ s current policy , rather than what it was when the experience was first generated . Once this is determined , we can simply relabel the past experience to better reflect the agent ’ s current social environment , a form of off-environment correction ( Ciosek & Whiteson , 2017 ) . We apply our ‘ communication correction ’ using the framework of centralised training with decentralised control ( Lowe et al. , 2017 ; Foerster et al. , 2018 ) , in which extra information – in this case the policies and observations of other agents – is used during training to learn decentralised multi-agent policies . We show how it can be combined with existing off-policy algorithms , with little computational cost , to achieve strong performance in both the cooperative and competitive cases . 2 BACKGROUND . Markov Games A partially observable Markov game ( POMG ) ( Littman , 1994 ; Hu et al. , 1998 ) for N agents is defined by a set of states $\mathcal{S}$ , sets of actions $\mathcal{A}_1 , \ldots , \mathcal{A}_N$ and observations $\mathcal{O}_1 , \ldots , \mathcal{O}_N$ for each agent . In general , the stochastic policy of agent i may depend on the set of action-observation histories $\mathcal{H}_i \equiv (\mathcal{O}_i \times \mathcal{A}_i)^{*}$ such that $\pi_i : \mathcal{H}_i \times \mathcal{A}_i \rightarrow [0 , 1]$ . In this work we restrict ourselves to history-independent stochastic policies $\pi_i : \mathcal{O}_i \times \mathcal{A}_i \rightarrow [0 , 1]$ . The next state is generated according to the state transition function $\mathcal{P} : \mathcal{S} \times \mathcal{A}_1 \times \ldots \times \mathcal{A}_N \times \mathcal{S} \rightarrow [0 , 1]$ . Each agent i obtains deterministic rewards defined as $r_i : \mathcal{S} \times \mathcal{A}_1 \times \ldots \times \mathcal{A}_N \rightarrow \mathbb{R}$ and receives a deterministic private observation $o_i : \mathcal{S} \rightarrow \mathcal{O}_i$ .
There is an initial state distribution $\rho_0 : \mathcal{S} \rightarrow [0 , 1]$ and each agent i aims to maximise its own discounted sum of future rewards $\mathbb{E}_{s \sim \rho^{\pi} , a \sim \pi}\left[\sum_{t=0}^{\infty} \gamma^{t} r_i(s , a)\right]$ , where $\pi = \{\pi_1 , \ldots , \pi_N\}$ is the set of policies for all agents , $a = (a_1 , \ldots , a_N)$ is the joint action and $\rho^{\pi}$ is the discounted state distribution induced by these policies starting from $\rho_0$ . Experience Replay As an agent continually interacts with its environment it receives experiences $(o_t , a_t , r_{t+1} , o_{t+1})$ at each time step . However , rather than using those experiences immediately for learning , it is possible to store such experiences in a replay buffer , $\mathcal{D}$ , and sample them at a later point in time for learning ( Mnih et al. , 2013 ) . This breaks the correlation between samples , reducing the variance of updates and the potential to overfit to recent experience . In the single-agent case , prioritising samples from the replay buffer according to the temporal-difference error has been shown to be effective ( Schaul et al. , 2015 ) . In the multi-agent case , Foerster et al . ( 2017 ) showed that issues of non-stationarity could be partially alleviated for independent Q-learners by importance sampling and use of a low-dimensional ‘ fingerprint ’ such as the training iteration number . MADDPG Our method can be combined with a variety of algorithms , but we commonly employ it with multi-agent deep deterministic policy gradients ( MADDPG ) ( Lowe et al. , 2017 ) , which we describe here . MADDPG is an algorithm for centralised training and decentralised control of multi-agent systems ( Lowe et al. , 2017 ; Foerster et al. , 2018 ) , in which extra information is used to train each agent ’ s critic in simulation , whilst keeping policies decentralised such that they can be deployed outside of simulation . It uses deterministic policies , as in DDPG ( Lillicrap et al. , 2015 ) , which condition only on each agent ’ s local observations and actions . MADDPG handles the non-stationarity associated with the simultaneous adaptation of all the agents by introducing a separate centralised critic $Q_i^{\mu}(\mathbf{o} , a)$ for each agent , where $\mu$ corresponds to the set of deterministic policies $\mu_i : \mathcal{O} \rightarrow \mathcal{A}$ of all agents . Here we have denoted the vector of joint observations for all agents as $\mathbf{o}$ . The multi-agent policy gradient for policy parameters $\theta_i$ of agent i is : $\nabla_{\theta_i} J(\theta_i) = \mathbb{E}_{\mathbf{o} , a \sim \mathcal{D}}\left[\nabla_{\theta_i} \mu_i(o_i) \nabla_{a_i} Q_i^{\mu}(\mathbf{o} , a)\big|_{a_i = \mu_i(o_i)}\right] , \quad (1)$ where $\mathcal{D}$ is the experience replay buffer which contains the tuples $(\mathbf{o} , a , r , \mathbf{o}')$ . Like DDPG , each $Q_i^{\mu}$ is approximated by a critic $Q_i^{w}$ which is updated to minimise the error with the target : $\mathcal{L}(w_i) = \mathbb{E}_{\mathbf{o} , a , r , \mathbf{o}' \sim \mathcal{D}}\left[(Q_i^{w}(\mathbf{o} , a) - y)^2\right] , \quad (2)$ where $y = r_i + \gamma Q_i^{w}(\mathbf{o}' , a')$ is evaluated for the next state and action , as stored in the replay buffer . We use this algorithm with some additional changes ( see Appendix A.3 for details ) . Communication One way to classify communication is whether it is explicit or implicit . Implicit communication involves transmitting information by changing the shared environment ( e.g . scattering breadcrumbs ) . By contrast , explicit communication can be modelled as being separate from the environment , only affecting the observations of other agents . In this work , we focus on explicit communication with the expectation that dedicated communication channels will be frequently integrated into artificial multi-agent systems such as driverless cars .
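Referring back to the MADDPG updates in equations (1) and (2) above, the following PyTorch-style sketch shows one gradient step for a single agent. It is an illustrative sketch only: the agent object with `policy`, `critic`, `index`, and optimizer attributes is assumed, the centralised critic is assumed to accept the joint observations and actions, target networks and the paper's additional changes (Appendix A.3) are omitted, and since the excerpt is ambiguous about where the next actions a' come from, they are recomputed here with the agents' current policies.

```python
import torch
import torch.nn.functional as F

def maddpg_step(agent_i, agents, batch, gamma=0.95):
    """One critic and one policy update for agent_i, following eqs. (1)-(2).
    batch: obs, acts (lists of per-agent tensors of shape (B, .)), rew_i (B,), next_obs."""
    obs, acts, rew_i, next_obs = batch

    # Critic update, eq. (2): regress Q_i^w(o, a) onto y = r_i + gamma * Q_i^w(o', a').
    with torch.no_grad():
        next_acts = [ag.policy(o) for ag, o in zip(agents, next_obs)]  # assumed source of a'
        y = rew_i + gamma * agent_i.critic(next_obs, next_acts).squeeze(-1)
    critic_loss = F.mse_loss(agent_i.critic(obs, acts).squeeze(-1), y)
    agent_i.critic_opt.zero_grad()
    critic_loss.backward()
    agent_i.critic_opt.step()

    # Policy update, eq. (1): ascend the centralised critic w.r.t. agent_i's own action.
    acts_pg = list(acts)
    acts_pg[agent_i.index] = agent_i.policy(obs[agent_i.index])
    policy_loss = -agent_i.critic(obs, acts_pg).mean()
    agent_i.policy_opt.zero_grad()
    policy_loss.backward()
    agent_i.policy_opt.step()
```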
Although explicit communication does not formally alter the environmental state , it does change the observations of the receiving agents , a change to what we call its ‘ social environment ’ . For agents which act in the environment and communicate simultaneously , the set of actions for each agent $\mathcal{A}_i = \mathcal{A}_i^{e} \times \mathcal{A}_i^{m}$ is the Cartesian product of the sets of regular environment actions $\mathcal{A}_i^{e}$ and explicit communication actions $\mathcal{A}_i^{m}$ . Similarly , the set of observations for each receiving agent $\mathcal{O}_i = \mathcal{O}_i^{e} \times \mathcal{O}_i^{m}$ is the Cartesian product of the sets of regular environmental observations $\mathcal{O}_i^{e}$ and explicit communication observations $\mathcal{O}_i^{m}$ . Communication may be targeted to specific agents or broadcast to all agents and may be costly or free . The zero cost formulation is commonly used and is known as ‘ cheap talk ’ in the game theory community ( Farrell & Rabin , 1996 ) . In many multi-agent simulators the explicit communication action is related to the observed communication in a simple way , for example being transmitted to the targeted agent with or without noise on the next time step . Similarly , real world systems may transmit communication in a well understood way , such that the observed message can be accurately predicted given the sent message ( particularly if error-correction is used ) . By contrast , the effect of environment actions is generally difficult to predict , as the shared environment state will typically exhibit more complex dependencies . 3 METHODS . Our general starting point is to consider how explicit communication actions and observed messages might be relabelled using an explicit communication model . This model often takes a simple form , such as depending only on what was communicated on the previous timestep . The observed messages $o^m_{t+1}$ given communication actions $a^m_t$ are therefore sampled ( denoted by $\sim$ ) from : $o^m_{t+1} \sim p(o^m_{t+1} \mid a^m_t) . \quad (3)$ Examples of such a communication model could be an agent i receiving a noiseless message from a single agent j such that $o^m_{i , t+1} = a^m_{j , t}$ , or receiving the message corrupted by Gaussian noise $o^m_{i , t+1} \sim \mathcal{N}(a^m_{j , t} , \sigma)$ where $\sigma$ is a variance parameter . We consider the noise-free case in the multi-agent simulator for all bar one of our experiments , although the general idea can be applied to more complex , noisy communication models . A communication model such as this allows us to correct past actions and observations in a consistent way . To understand how this is possible , we consider a sample from a multi-agent replay buffer which is used for off-policy learning . In general , the multi-agent system at current time $t'$ receives observations $o_{t'}$ , collectively takes actions $a_{t'}$ using the decentralised policies $\pi$ , receives rewards $r_{t'+1}$ ( split into environmental rewards $r^e_{t'+1}$ and messaging costs $r^m_{t'+1}$ ) and the next observations $o_{t'+1}$ . These experiences are stored as a tuple in the replay buffer for later use to update the multi-agent critic ( s ) and policies . For communicating agents , we can describe a sample from the replay buffer $\mathcal{D}$ as the tuple : $(o^e_t , o^m_t , a^e_t , a^m_t , r^e_{t+1} , r^m_{t+1} , o^e_{t+1} , o^m_{t+1}) \sim \mathcal{D} , \quad (4)$ where we separately denote environmental ( e ) and communication ( m ) terms , and t indexes a time in the past ( rather than the current time $t'$ ) . For convenience we can ignore the environmental tuple of observations , actions and reward as we do not alter these , and consider only the communication tuple $(o^m_t , a^m_t , r^m_{t+1} , o^m_{t+1})$ .
Using the communication model at time $t'$ , we can relate a change in $a^m_t$ to a change in $o^m_{t+1}$ . If we also keep track of $a^m_{t-1}$ , we can similarly change $o^m_t$ . In our experiments we assume for simplicity that communication is costless ( the ‘ cheap talk ’ setting ) , which means that $r^m_{t+1} = 0$ ; however , in general we could also relabel rewards using a model of communication cost $p(r^m_{t+1} \mid a^m_t)$ . Equipped with an ability to rewrite history , we next consider how to use it to improve multi-agent learning .
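As a rough sketch of this relabelling step, assuming the noise-free cheap-talk channel in which an observed message simply equals the sender's previous communication action, the routine below recomputes the stored messages under the senders' current policies. The dictionary layout, the `route_messages` helper (standing in for the channel model of equation (3)), and the policy interface are all hypothetical; the authors' actual implementation may differ.

```python
def correct_communication(sample, comm_policies, route_messages):
    """Relabel the communication part of one replay-buffer transition so that it
    reflects what the senders would say under their *current* policies.

    sample: dict with the senders' stored observations at time t ('obs'), their
            stored communication actions ('a_m'), and the messages observed at
            time t+1 ('o_m_next').
    comm_policies: current (deterministic) message policies, one per agent.
    route_messages: channel model mapping sent messages to what each receiver
                    observes (noise-free here: the message arrives unchanged).
    """
    # What each sender would communicate now, given the same stored observation.
    new_a_m = [pi(o) for pi, o in zip(comm_policies, sample["obs"])]

    relabelled = dict(sample)
    relabelled["a_m"] = new_a_m
    relabelled["o_m_next"] = route_messages(new_a_m)  # o^m_{t+1} follows from a^m_t
    # Relabelling o^m_t in the same way additionally requires the previous step's
    # communication (a^m_{t-1}), which the text notes must also be tracked.
    return relabelled
```

The relabelled tuples can then be fed to standard off-policy updates such as the MADDPG losses sketched earlier.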
The paper considers a multi-agent reinforcement learning (MARL) scenario where agents take actions based on the current observation alone. The paper proposes a communication correction mechanism where, during centralized training, messages that were received in the past from other agents are re-evaluated according to the updated policies. This way old messages can be updated instead of discarded, which is more efficient overall.
SP:deb1448308aa429b8a2f2cf2d27f98f87e367c83
Correcting experience replay for multi-agent communication
1 INTRODUCTION . Since the introduction of deep Q-learning ( Mnih et al. , 2013 ) , it has become very common to use previous online experience , for instance stored in a replay buffer , to train agents in an offline manner . An obvious difficulty with doing this is that the information concerned may be out of date , leading the agent woefully astray in cases where the environment of an agent changes over time . One obvious strategy is to discard old experiences . However , this is wasteful – it requires many more samples from the environment before adequate policies can be learned , and may prevent agents from leveraging past experience sufficiently to act in complex environments . Here , we consider an alternative , Orwellian possibility , of using present information to correct the past , showing that it can greatly improve an agent ’ s ability to learn . We explore a paradigm case involving multiple agents that must learn to communicate to optimise their own or task-related objectives . As with deep Q-learning , modern model-free approaches often seek to learn this communication off-policy , using experience stored in a replay buffer ( Foerster et al. , 2016 ; 2017 ; Lowe et al. , 2017 ; Peng et al. , 2017 ) . However , multi-agent reinforcement learning ( MARL ) can be particularly challenging as the underlying game-theoretic structure is well known to lead to non-stationarity , with past experience becoming obsolete as agents come progressively to use different communication codes . It is this that our correction addresses . Altering previously communicated messages is particularly convenient for our purposes as it has no direct effect on the actual state of the environment ( Lowe et al. , 2019 ) , but a quantifiable effect on the observed message , which constitutes the receiver ’ s ‘ social environment ’ . We can therefore determine what the received message would be under the communicator ’ s current policy , rather than what it was when the experience was first generated . Once this is determined , we can simply relabel the past experience to better reflect the agent ’ s current social environment , a form of off-environment correction ( Ciosek & Whiteson , 2017 ) . We apply our ‘ communication correction ’ using the framework of centralised training with decentralised control ( Lowe et al. , 2017 ; Foerster et al. , 2018 ) , in which extra information – in this case the policies and observations of other agents – is used during training to learn decentralised multi- agent policies . We show how it can be combined with existing off-policy algorithms , with little computational cost , to achieve strong performance in both the cooperative and competitive cases . 2 BACKGROUND . Markov Games A partially observable Markov game ( POMG ) ( Littman , 1994 ; Hu et al. , 1998 ) for N agents is defined by a set of states S , sets of actions A1 , ... , AN and observations O1 , ... , ON for each agent . In general , the stochastic policy of agent i may depend on the set of action-observation histories Hi ≡ ( Oi × Ai ) ∗ such that πi : Hi × Ai → [ 0 , 1 ] . In this work we restrict ourselves to history-independent stochastic policies πi : Oi×Ai → [ 0 , 1 ] . The next state is generated according to the state transition functionP : S×A1×. . .×An×S → [ 0 , 1 ] . Each agent i obtains deterministic rewards defined as ri : S × A1 × ... × An → R and receives a deterministic private observation oi : S → Oi . 
There is an initial state distribution ρ0 : S → [ 0 , 1 ] and each agent i aims to maximise its own discounted sum of future rewards Es∼ρπ , a∼π [ ∑∞ t=0 γ tri ( s , a ) ] where π = { π1 , . . . , πn } is the set of policies for all agents , a = ( a1 , . . . , aN ) is the joint action and ρπ is the discounted state distribution induced by these policies starting from ρ0 . Experience Replay As an agent continually interacts with its environment it receives experiences ( ot , at , rt+1 , ot+1 ) at each time step . However , rather than using those experiences immediately for learning , it is possible to store such experience in a replay buffer , D , and sample them at a later point in time for learning ( Mnih et al. , 2013 ) . This breaks the correlation between samples , reducing the variance of updates and the potential to overfit to recent experience . In the single-agent case , prioritising samples from the replay buffer according to the temporal-difference error has been shown to be effective ( Schaul et al. , 2015 ) . In the multi-agent case , Foerster et al . ( 2017 ) showed that issues of non-stationarity could be partially alleviated for independent Q-learners by importance sampling and use of a low-dimensional ‘ fingerprint ’ such as the training iteration number . MADDPG Our method can be combined with a variety of algorithms , but we commonly employ it with multi-agent deep deterministic policy gradients ( MADDPG ) ( Lowe et al. , 2017 ) , which we describe here . MADDPG is an algorithm for centralised training and decentralised control of multi-agent systems ( Lowe et al. , 2017 ; Foerster et al. , 2018 ) , in which extra information is used to train each agent ’ s critic in simulation , whilst keeping policies decentralised such that they can be deployed outside of simulation . It uses deterministic policies , as in DDPG ( Lillicrap et al. , 2015 ) , which condition only on each agent ’ s local observations and actions . MADDPG handles the nonstationarity associated with the simultaneous adaptation of all the agents by introducing a separate centralised critic Qµi ( o , a ) for each agent where µ corresponds to the set of deterministic policies µi : O → A of all agents . Here we have denoted the vector of joint observations for all agents as o . The multi-agent policy gradient for policy parameters θ of agent i is : ∇θiJ ( θi ) = Eo , a∼D [ ∇θiµi ( oi ) ∇aiQ µ i ( o , a ) |ai=µi ( oi ) ] . ( 1 ) where D is the experience replay buffer which contains the tuples ( o , a , r , o′ ) . Like DDPG , each Qµi is approximated by a critic Q w i which is updated to minimise the error with the target . L ( wi ) = Eo , a , r , o′∼D [ ( Qwi ( o , a ) − y ) 2 ] ( 2 ) where y = ri + γQwi ( o ′ , a′ ) is evaluated for the next state and action , as stored in the replay buffer . We use this algorithm with some additional changes ( see Appendix A.3 for details ) . Communication One way to classify communication is whether it is explicit or implicit . Implicit communication involves transmitting information by changing the shared environment ( e.g . scattering breadcrumbs ) . By contrast , explicit communication can be modelled as being separate from the environment , only affecting the observations of other agents . In this work , we focus on explicit communication with the expectation that dedicated communication channels will be frequently integrated into artificial multi-agent systems such as driverless cars . 
Although explicit communication does not formally alter the environmental state , it does change the observations of the receiving agents , a change to what we call its ‘ social environment ’ 1 . For agents which act in the environment and communicate simultaneously , the set of actions for each agent Ai = Aei × Ami is the Cartesian product of the sets of regular environment actions Aei and explicit communication actions Ami . Similarly , the set of observations for each receiving agent Oi = Oei × Omi is the Cartesian product of the sets of regular environmental observations Oei and explicit communication observations Omi . Communication may be targeted to specific agents or broadcast to all agents and may be costly or free . The zero cost formulation is commonly used and is known as ‘ cheap talk ’ in the game theory community ( Farrell & Rabin , 1996 ) . In many multi-agent simulators the explicit communication action is related to the observed communication in a simple way , for example being transmitted to the targeted agent with or without noise on the next time step . Similarly , real world systems may transmit communication in a well understood way , such that the observed message can be accurately predicted given the sent message ( particularly if error-correction is used ) . By contrast , the effect of environment actions is generally difficult to predict , as the shared environment state will typically exhibit more complex dependencies . 3 METHODS . Our general starting point is to consider how explicit communication actions and observed messages might be relabelled using an explicit communication model . This model often takes a simple form , such as depending only on what was communicated on the previous timestep . The observed messages omt+1 given communication actions a m t are therefore sampled ( denoted by ∼ ) from : omt+1 ∼ p ( omt+1 | amt ) ( 3 ) Examples of such a communication model could be an agent i receiving a noiseless message from a single agent j such that omi , t+1 = a m j , t , or receiving the message corrupted by Gaussian noise omi , t+1 ∼ N ( amj , t , σ ) where σ is a variance parameter . We consider the noise-free case in the multiagent simulator for all bar one of our experiments , although the general idea can be applied to more complex , noisy communication models . A communication model such as this allows us to correct past actions and observations in a consistent way . To understand how this is possible , we consider a sample from a multi-agent replay buffer which is used for off-policy learning . In general , the multi-agent system at current time t′ receives observations ot′ , collectively takes actions at′ using the decentralised policies π , receives rewards rt′+1 ( split into environmental rewards ret′+1 and messaging costs r m t′+1 ) and the next observations ot′+1 . These experiences are stored as a tuple in the replay buffer for later use to update the multiagent critic ( s ) and policies . For communicating agents , we can describe a sample from the replay buffer D as the tuple : ( oet , o m t , a e t , a m t , r e t+1 , r m t+1 , o e t+1 , o m t+1 ) ∼ D ( 4 ) where we separately denote environmental ( e ) and communication ( m ) terms , and t indexes a time in the past ( rather than the current time t′ ) . For convenience we can ignore the environmental tuple of observations , actions and reward as we do not alter these , and consider only the communication tuple ( omt , a m t , r m t+1 , o m t+1 ) . 
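The communication model of Eq. (3) above is simple enough to state in a few lines. The sketch below is illustrative (the function name and the choice of per-dimension Gaussian noise are assumptions) and covers the two cases mentioned: a noise-free channel, where the observed message equals the sent one, and a channel corrupted by Gaussian noise.

```python
import numpy as np

def observed_message(a_m_t, sigma=0.0, rng=None):
    """Sample o^m_{t+1} ~ p(. | a^m_t): the sent message itself when sigma == 0,
    otherwise the sent message plus zero-mean Gaussian noise of scale sigma."""
    rng = np.random.default_rng() if rng is None else rng
    a_m_t = np.asarray(a_m_t, dtype=float)
    if sigma == 0.0:
        return a_m_t.copy()
    return a_m_t + rng.normal(0.0, sigma, size=a_m_t.shape)
```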
Using the communication model at the current time t′, we can relate a change in a^m_t to a change in o^m_{t+1}. If we also keep track of a^m_{t−1}, we can similarly change o^m_t. In our experiments we assume for simplicity that communication is costless (the 'cheap talk' setting), which means that r^m_{t+1} = 0; in general, however, we could also relabel rewards using a model of communication cost p(r^m_{t+1} | a^m_t). Equipped with this ability to rewrite history, we next consider how to use it to improve multi-agent learning.
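Putting the pieces together, relabelling a sampled transition reduces to re-running the senders' current policies on the stored observations and overwriting the stale messages. The sketch below is a minimal illustration under the noise-free, costless assumptions above; the field names and the policy interface are assumptions rather than the authors' code.

```python
import numpy as np

def correct_transition(tr, policies):
    """tr: dict of per-agent arrays for one stored transition, with (assumed) keys
         'obs_t'    - full observations (o^e_t, o^m_t) of every agent at time t
         'act_m_t'  - communication actions a^m_t taken at time t
         'obs_m_t1' - messages o^m_{t+1} observed at time t+1
       policies: current policies; policies[j](obs_j) -> (a^e_j, a^m_j)."""
    tr = dict(tr)  # leave the stored transition untouched
    # What would each sender communicate *now*, given its stored observation?
    new_msgs = [policies[j](tr['obs_t'][j])[1] for j in range(len(policies))]
    tr['act_m_t'] = np.stack(new_msgs)
    # Noise-free channel: the message observed at t+1 is exactly what was sent at t.
    tr['obs_m_t1'] = tr['act_m_t'].copy()
    return tr

# At learning time, each sampled minibatch is corrected before the critic/actor update:
# batch = [correct_transition(tr, current_policies) for tr in replay_buffer.sample(B)]
```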
This paper considers communication games in which agents use experience replay. The agents' communication protocol may change over time, leaving outdated symbols in the replay buffer that the agents then train on. The paper proposes replacing the old communication actions with up-to-date ones as transitions are sampled, and shows that this leads to greatly improved convergence speed and higher performance plateaus.
SP:deb1448308aa429b8a2f2cf2d27f98f87e367c83
Conditional Coverage Estimation for High-quality Prediction Intervals
Deep learning has achieved state-of-the-art performance to generate high-quality prediction intervals ( PIs ) for uncertainty quantification in regression tasks . The high-quality criterion requires PIs to be as narrow as possible , whilst maintaining a pre-specified level of data ( marginal ) coverage . However , most existing works for high-quality PIs lack accurate information on conditional coverage , which may cause unreliable predictions if it is significantly smaller than the marginal coverage . To address this problem , we propose a novel end-to-end framework which could output high-quality PIs and simultaneously provide their conditional coverage estimation . In doing so , we design a new loss function that is both easyto-implement and theoretically justified via an exponential concentration bound . Our evaluation on real-world benchmark datasets and synthetic examples shows that our approach not only outperforms the state-of-the-arts on high-quality PIs in terms of average PI width , but also accurately estimates conditional coverage information that is useful in assessing model uncertainty . 1 INTRODUCTION . Prediction interval ( PI ) is poised to play an increasingly prominent role in uncertainty quantification for regression tasks ( Khosravi et al. , 2010 ; 2011 ; Galván et al. , 2017 ; Rosenfeld et al. , 2018 ; Tagasovska & Lopez-Paz , 2018 ; 2019 ; Romano et al. , 2019 ; Wang et al. , 2019 ; Kivaranovic et al. , 2020 ) . A high-quality PI should be as narrow as possible , whilst maintaining a pre-specified level of data coverage or marginal coverage ( Pearce et al. , 2018 ) . Compared with PIs obtained based on coverage-only consideration , the “ high-quality ” criterion is beneficial in balancing between marginal coverage probability and interval width . However , the conditional coverage given a feature , which is critical for making reliable context-based decisions , is unassessed and missing in most existing works on high-quality PIs . In the presence of heteroskedasticity and model misspecification , the marginal coverage can be very different from the conditional coverage at a given point , which affects the downstream decision-making task that relies on the uncertainty information provided by the PI . Our main goal is to meaningfully incorporate and assess conditional coverages in high-quality PIs . Conditional coverage estimation is challenging for two reasons . First is that the natural evaluation metric of conditional coverage error , an Lp distance between the estimated and ground-truth conditional coverages , is difficult to compute as it requires obtaining the conditional probability given feature x , which is arguably as challenging as the regression problem itself . Our first goal in this paper is to address this issue by developing a new metric called calibration-based conditional coverage error for conditional coverage estimation measurement . Our approach is inspired from the calibration notion in classification ( Guo et al. , 2017 ) . The basic idea is to relax conditional coverage at any given point to being averaged over all points that bear the same estimated value . An estimator satisfying the relaxed property is regarded as well-calibrated . In regression , calibration-based conditional coverage error provides a middle ground between the enforcement of marginal coverage ( lacking any conditional information ) and conditional coverage ( computationally intractable ) . 
Compared with conditional coverage , this middle-ground metric can be viewed as a “ dimension reduction ” of the conditioning variable from the original sample space to the space [ 0 , 1 ] , so that we can easily discretize to compute the empirical metric values . The second challenge is the discontinuity in the above metrics that hinders efficient training of PIs that are both high-quality and possess reliable conditional coverage information . To address this , we design a new loss function based on a combination of the high-quality criterion and a coverage assessment loss . The latter can be flexibly added as a separate module to any neural network ( NN ) used to train PIs . It is based on an empirical version of a tight upper bound on the coverage error in terms of a Kullback–Leibler ( KL ) divergence , which can be readily employed for running gradient descent . We theoretically show how training with our proposed loss function attains this upperbounding value via a concentration bound . We also demonstrate the empirical performance of our approach in terms of PI quality and conditional coverage assessment compared with benchmark methods . Summary of Contributions : ( 1 ) We identify the conditional coverage estimation problem as a new challenge for high-quality PIs and introduce a new evaluation metric for coverage estimation . ( 2 ) We propose an end-to-end algorithm that can simultaneously construct high-quality PIs and generate conditional coverage estimates . In addition , we provide theoretical justifications on the effectiveness of our algorithm by developing concentration bounds relating the coverage assessment loss and conditional coverage error . ( 3 ) By evaluating on benchmark datasets and synthetic examples , we empirically demonstrate that our approach not only achieves high performance on conditional coverage estimation , but also outperforms the state-of-the-art algorithms on high-quality PI generation . 2 EVALUATING CONDITIONAL COVERAGE FOR HIGH-QUALITY PIS . Let X ∈ X and Y ∈ Y ⊂ R be random variables denoting the input feature and label , where the pair ( X , Y ) follows an ( unknown ) ground-truth joint distribution π ( X , Y ) . Let π ( Y |X ) be the conditional distribution of Y given X . We are given the training data D : = { ( xi , yi ) , i = 1 , 2 , · · · , n } where ( xi , yi ) are i.i.d . realizations of random variables ( X , Y ) . A PI refers to an interval [ L ( x ) , U ( x ) ] where L , U are two functions mapping from X to Y trained on the data D. [ L ( x ) , U ( x ) ] is called a PI at prediction level 1− α ( 0 ≤ α ≤ 1 ) if its marginal coverage is not less than 1 − α , i.e. , P [ Y ∈ [ L ( X ) , U ( X ) ] |L , U ] ≥ 1 − α where P is with respect to a new test point ( X , Y ) ∼ π . We say that [ L ( x ) , U ( x ) ] is of high-quality if its marginal coverage attains a pre-specified target prediction level and has a short width on average . In particular , a best-quality PI at prediction level 1− α is an optimal solution to the following constrained optimization problem : min L , U E [ U ( X ) − L ( X ) ] subject to P [ Y ∈ [ L ( X ) , U ( X ) ] |L , U ] ≥ 1− α . ( 2.1 ) The high-quality criterion has been widely adopted in previous work ( see Section 6 ) . However , this criterion alone may fail to carry important model uncertainty information at specific test points . Consider a simple example where x ∼ Uniform [ 0 , 1 ] , y = 0 for x ∈ [ 0 , 0.95 ] and y|x ∼ Uniform [ 0 , 1 ] for x ∈ ( 0.95 , 1 ] . 
Then according to equation 2.1 , a best-quality 95 % PI is precisely L ( x ) = U ( x ) = 0 for all x ∈ [ 0 , 1 ] . This PI has nonconstant coverage if we condition at different points ( 1 for x ∈ [ 0 , 0.95 ] and 0 for x ∈ ( 0.95 , 1 ] ) , and can deviate significantly from the overall coverage 95 % . More examples to highlight the need of obtaining conditional coverage information can be found in our numerical experiments in Section 5.1 . To mitigate the drawback of the high-quality criterion , we define : Definition 2.1 ( Conditional Coverage and Its Estimator ) . The conditional coverage associated with a PI [ L ( x ) , U ( x ) ] is A ( x ) : = P [ Y ∈ [ L ( X ) , U ( X ) ] |L , U , X = x ] for a.e . x ∈ X , where P is taken with respect to π ( Y |X ) . For a ( conditional ) coverage estimator P̂ , which is a measurable function from X to [ 0 , 1 ] , we define its Lp conditional coverage error ( C̃Ep ) as C̃Ep : = ∥∥∥P [ Y ∈ [ L ( X ) , U ( X ) ] |L , U , X ] − P̂ ( X ) ∥∥∥ Lp ( X ) where the Lp-norm is taken with respect to the randomness of X ( 1 ≤ p ≤ +∞ ) . Note that evaluating C̃Ep relies on approximating the conditional coverage A ( x ) , which can be as challenging as the original prediction problem . To address this , we leverage the similarity of estimatingA ( x ) to generating prediction probabilities in binary classification , which motivates us to borrow the notion of calibration in classification . This idea is based on a relaxed error criterion by looking at the conditional coverage among all points that bear the same coverage estimator value , instead of conditioning at any given point . The resulting error metric then only relies on probabilities conditioned on variables in a much lower-dimensional space [ 0 , 1 ] than X . To explain concretely , we introduce a “ perfect-calibrated coverage estimator ” as : Definition 2.2 ( Perfect Calibration ) . A coverage estimator P̂ is called a perfect-calibrated coverage estimator associated with [ L ( x ) , U ( x ) ] if it satisfies P̂ ( x ) = P [ Y ∈ [ L ( X ) , U ( X ) ] |L , U , P̂ ( X ) = P̂ ( x ) ] , a.e . P̂ ( x ) ∈ [ 0 , 1 ] . ( 2.2 ) where a.e . is with respect to the probability measure on [ 0 , 1 ] induced by the random variable P̂ ( X ) . Equation 2.2 means that a point x with conditional coverage estimate P̂ ( x ) = p has an average coverage of precisely p , among all points in X that possess the same conditional coverage estimated value . That is , the average coverage of the PI restricted on the subset { x ∈ X : P̂ ( x ) = p } should be precisely p. Corresponding to Definition 2.2 , we define : Definition 2.3 ( Calibration-based Error ) . An Lp ( 1 ≤ p ≤ +∞ ) calibration-based conditional coverage error , or coverage error for short ( CEp ) , of a coverage estimator P̂ is : CEp : = ∥∥∥P [ Y ∈ [ L ( X ) , U ( X ) ] |L , U , P̂ ( X ) ] − P̂ ( X ) ∥∥∥ Lp ( X ) ( 2.3 ) where Lp-norm is taken with respect to the randomness of P̂ ( X ) . In the above definition the conditional probability P [ Y ∈ [ L ( X ) , U ( X ) ] |L , U , P̂ ( X ) ] is a measurable function of random variable P̂ ( X ) , say γ ( P̂ ( X ) ) . By a change of variable , CEpp : = ∥∥∥γ ( P̂ ( X ) ) − P̂ ( X ) ∥∥∥p Lp ( X ) = ∫ 1 0 |γ ( t ) − t|pdFP̂ ( X ) ( t ) ( 2.4 ) where FP̂ ( X ) is a probability distribution of P̂ ( X ) on [ 0 , 1 ] . Here , CEp only requires estimating γ ( t ) for t ∈ [ 0 , 1 ] , which can be done easily by discretizing [ 0 , 1 ] for empirical calculation . We call the empirical Lp calibration-based conditional coverage error ECEp . 
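One simple way to compute the empirical L1 error ECE1 is to discretize [0, 1] into bins over the estimated coverage values; within each bin, the gap between the empirical coverage and the average estimate approximates |γ(t) − t|, and the gaps are averaged with the bin masses. The sketch below is one such discretization (an assumption on our part; the paper's exact procedure is the one in Appendix A.4).

```python
import numpy as np

def ece1(p_hat, covered, n_bins=10):
    """p_hat:   estimated conditional coverages P_hat(x_i) in [0, 1] on a test set.
       covered: 0/1 indicators of y_i in [L(x_i), U(x_i)] on the same test set."""
    p_hat = np.asarray(p_hat, dtype=float)
    covered = np.asarray(covered, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    err, n = 0.0, len(p_hat)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (p_hat >= lo) & ((p_hat < hi) if hi < 1.0 else (p_hat <= hi))
        if in_bin.any():
            gap = abs(covered[in_bin].mean() - p_hat[in_bin].mean())
            err += (in_bin.sum() / n) * gap   # |gamma(t) - t| weighted by the bin mass
    return err
```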
More details of this empirical error can be found in Appendix A.4. A calibration-based error CEp provides a middle ground between the enforcement of marginal coverage and of conditional coverage. The ground-truth conditional coverage is perfectly calibrated, but not vice versa. However, if we require the perfect calibration criterion to hold for a coverage estimator restricted to any measurable subset of X, then the estimator reduces uniquely to the conditional coverage (Definition A.1 and Lemma A.2). In this sense, CEp is a natural relaxation of C̃Ep: although less precise, it is computationally much more tractable. Evaluation Metric for Coverage Estimator. We use ECE1 as the primary evaluation metric for the quality of a coverage estimator. A high ECE1 value indicates unreliable coverage estimation, while a small ECE1 value indicates that the estimator is close to perfectly calibrated. Ideally, an effective algorithm should output a coverage estimator with a small ECE1 value.
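As a concrete illustration of why conditional coverage information matters, the toy example from the beginning of this section can be checked numerically. The short simulation below (ours, not from the paper) confirms that the zero-width interval attains roughly 95% marginal coverage while its coverage conditional on x > 0.95 is essentially zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(0.0, 1.0, n)
y = np.where(x <= 0.95, 0.0, rng.uniform(0.0, 1.0, n))

lower = upper = np.zeros(n)            # the best-quality 95% PI for this toy problem
covered = (y >= lower) & (y <= upper)

print("marginal coverage    :", covered.mean())               # ~0.95
print("coverage | x <= 0.95 :", covered[x <= 0.95].mean())    # ~1.0
print("coverage | x  > 0.95 :", covered[x > 0.95].mean())     # ~0.0
```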
In the submitted paper, the authors study high-quality prediction intervals (PIs). The paper proposes a novel design of loss functions to generate PIs together with conditional coverage estimates. A theoretical justification for using the conditional coverage error (in the Ca-module) is presented, and numerical experiments on multiple benchmark datasets show promising results.
SP:c966fd016af0edb014beaaee492f136ea57c77aa
In the paper, the authors introduce a calibration-based conditional coverage error to sidestep the difficulty of evaluating conditional coverage directly; it provides a middle ground between marginal coverage (no conditional information) and conditional coverage (high computational cost). Building on prior work, the authors design a new loss function that combines the high-quality criterion with a coverage assessment loss. The theoretical framework around the loss function is laid out clearly, and the results on benchmark datasets outperform the baseline algorithms on high-quality prediction interval generation.
SP:c966fd016af0edb014beaaee492f136ea57c77aa
EXPLORING VULNERABILITIES OF BERT-BASED APIS
1 INTRODUCTION . The emergence of Bidirectional Encoder Representations from Transformers ( BERT ) ( Devlin et al. , 2018 ) has revolutionised the natural language processing ( NLP ) field , leading to state-of-the-art performance on a wide range of NLP tasks with minimal task-specific supervision . In the meantime , with the increasing success of contextualised pretrained representations for transfer learning , powerful NLP models can be easily built by fine-tuning the pretrained models like BERT or XLNet ( Yang et al. , 2019 ) . Building NLP models on pretrained representations typically only require several task-specific layers or just a single feedforward layer on top of BERT . To protect data privacy , system integrity and Intellectual Property ( IP ) , commercial NLP models such as task-specific BERT models are often made indirectly accessible through pay-per-query prediction APIs ( Krishna et al. , 2019 ) . This leaves model prediction the only information an attacker can access . Prior works have found that existing NLP APIs are still vulnerable to model extraction attack , which reconstructs a copy of the remote NLP model based on carefully-designed queries and the outputs of the API ( Krishna et al. , 2019 ; Wallace et al. , 2020 ) . Pretrained BERT models further make it easier to apply model extraction attack to specialised NLP models obtained by fine-tuning pretrained BERT models ( Krishna et al. , 2019 ) . In addition to model extraction , it is important to ask the following two questions : 1 ) will the extracted model also leaks sensitive information about the training data in the target model ; and 2 ) whether the extracted model can cause more vulnerabilities of the target model ( i.e . the black-box API ) . To answer the above two questions , in this work , we first launch a model extraction attack , where the adversary queries the target model with the goal to steal it and turn it into a white-box model . With the extracted model , we further demonstrate that : 1 ) it is possible to infer sensitive information about the training data ; and 2 ) the extracted model can be exploited to generate highly transferable adversarial attacks against the remote victim model behind the API . Our results highlight the risks of publicly-hosted NLP APIs being stolen and attacked if they are trained by fine-tuning BERT . Contributions : First , we demonstrate that the extracted model can be exploited by an attribute inference attack to expose sensitive information about the original training data , leading to a significant privacy leakage . Second , we show that adversarial examples crafted on the extracted model are highly transferable to the target model , exposing more adversarial vulnerabilities of the target model . Third , extensive experiments with the extracted model on benchmark NLP datasets highlight the potential privacy issues and adversarial vulnerabilities of BERT-based APIs . We also show that both attacks developed on the extracted model can evade the investigated defence strategies . 2 RELATED WORK . 2.1 MODEL EXTRACTION ATTACK ( MEA ) . Model extraction attacks ( also referred to as “ stealing ” or “ reverse-engineering ” ) have been studied both empirically and theoretically , for simple classification tasks ( Tramèr et al. , 2016 ) , vision tasks ( Orekondy et al. , 2019 ) , and NLP tasks ( Krishna et al. , 2019 ; Wallace et al. , 2020 ) . As opposed to stealing parameters ( Tramèr et al. , 2016 ) , hyperparameters ( Wang & Gong , 2018 ) , architectures ( Oh et al. 
, 2019 ) , training data information ( Shokri et al. , 2017 ) and decision boundaries ( Tramèr et al. , 2016 ; Papernot et al. , 2017 ) , in this work , we attempt to create a local copy or steal the functionality of a black-box victim model ( Krishna et al. , 2019 ; Orekondy et al. , 2019 ) , that is a model that replicates the performance of the victim model as closely as possible . If reconstruction is successful , the attacker has effectively stolen the intellectual property . Furthermore , this extracted model could be used as a reconnaissance step to facilitate later attacks ( Krishna et al. , 2019 ) . For instance , the adversary could use the extracted model to facilitate private information inference about the training data of the victim model , or to construct adversarial examples that will force the victim model to make incorrect predictions . 2.2 ATTRIBUTE INFERENCE ATTACK . Fredrikson et al . ( 2014 ) first proposed model inversion attack on biomedical data . The goal is to infer some missing attributes of an input feature vector based on the interaction with a trained ML model . Since deep neural networks have the ability to memorise arbitrary information ( Zhang et al. , 2017 ) , the private information can be memorised by BERT as well , which poses a threat to information leakage ( Krishna et al. , 2019 ) . In NLP application , the input text often provides sufficient clues to portray the author , such as gender , age , and other important attributes . For example , sentiment analysis tasks often have privacy implications for authors whose text is used to train models . Prior works ( Coavoux et al. , 2018 ) have shown that user attributes can be easily detectable from online review data , as used extensively in sentiment analysis results ( Hovy et al. , 2015 ) . One might argue that sensitive information like gender , age , location and password are all not explicitly included in model predictions . Nonetheless , model predictions are produced from the input text , it can meanwhile encode personal information which might be exploited for adversarial usages , especially a modern deep learning model owns more capacity than they need to perform well on their tasks ( Zhang et al. , 2017 ) . The naive solution of removing protected attributes is insufficient : other features may be highly correlated with , and thus predictive of , the protected attributes ( Pedreshi et al. , 2008 ) . 2.3 ADVERSARIAL TRANSFERABILITY AGAINST NLP SYSTEM . An important property of adversarial examples is their transferability ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ; Papernot et al. , 2017 ) . It has been shown that adversarial examples generated against one network can also successfully fool other networks ( Liu et al. , 2016 ; Papernot et al. , 2017 ) , especially the adversarial image examples in computer vision . Similarly , in NLP domain , adversarial examples that are designed to manipulate the substitute model can also be misclassified by the target model are considered transferable ( Papernot et al. , 2017 ; Ebrahimi et al. , 2018b ) . Adversarial transferability against NLP system remains largely unexplored . Few recent works have attempted to transfer adversarial examples to the NLP systems ( Sun et al. , 2020 ; Wallace et al. , 2020 ) , however , it is oblivious how the transferability works against BERT-based APIs , and whether the transferability would succeed when the victim model and the substitute ( extracted ) model have different architectures . 
3 ATTACKING BERT-BASED API . In this work , we consider an adversary attempting to steal or attack BERT-based APIs , either for financial gain or to exploit private information or model errors . As shown in Figure 1 , the whole attack pipeline against BERT-based APIs can be summarised into two phases . In phase one ( model extraction attack ( MEA ) ) , we first sample queries , label them by the victim API , and then train an extracted model on the resulting data . In phase two , we conduct attribute inference attack ( AIA ) and adversarial example transfer ( AET ) based on the extracted model . We empirically validate that the extracted model can help enhance privacy leakage and adversarial example transferability in Section 4.3 and Section 4.4 . We remark that our attack pipeline is applicable to many remote BERT-based APIs , as we assume : ( a ) the capabilities required are limited to observing model output by the APIs ; ( b ) the number of queries is limited . 3.1 VICTIM MODEL : BERT-BASED API . Modern NLP systems are typically based on a pretrained BERT ( Devlin et al. , 2018 ; Liu et al. , 2019a ; Nogueira & Cho , 2019 ; Joshi et al. , 2020 ) . BERT produces rich natural language representations which transfer well to most downstream NLP tasks ( sentiment analysis , topic classification , etc. ) . Modern NLP systems typically leverage the fine-tuning methodology by adding a few task-specific layers on top of the publicly available BERT base,1 and fine-tune the whole model . 3.2 MODEL EXTRACTION ATTACK ( MEA ) . Model extraction attack aims to steal an intellectual model from cloud services ( Tramèr et al. , 2016 ; Orekondy et al. , 2019 ; Krishna et al. , 2019 ; Wallace et al. , 2020 ) . In this attack , we assume the victim model is a commercially available black-box API . An adversary with black-box query access to the victim model attempts to reconstruct a local copy ( “ extracted model ” ) of the victim model . In a nutshell , we perform model extraction attack in a transfer learning setting , where both the adversary and the victim model fine-tune a pretrained BERT . The goal is to extract a model with comparable accuracy to the victim model . Generally , MEA can be formulated as a two-step approach , as illustrated by the top figure in Figure 1 : 1https : //github.com/google-research/bert 1 . Attacker crafts a set of inputs as queries ( transfer set ) , then sends them to the victim model ( BERT-based API ) to obtain predictions ; 2 . Attacker reconstructs a copy of the victim model as an “ extracted model ” by using the queried query-prediction pairs . Since the attacker does not have training data for the target model , we apply a task-specific query generator to construct m queries { xi } m1 to the victim model . For each xi , target model returns a K-dim posterior probability vector yi ∈ [ 0 , 1 ] k , ∑ k y k i = 1 . The resulting dataset { xi , yi } m1 is used to train the extracted model . Once the extracted model is obtained , the attacker does not have to pay the provider of the original API anymore for the prediction of new data points . 3.3 ATTRIBUTE INFERENCE ATTACK ( AIA ) . Next , we investigate how to use the extracted model to aid the attribute inference of the private training data of the victim model , i.e. , attribute inference attack ( AIA ) ( Song & Raghunathan , 2020 ) . We remark that AIA is different from inferring attribute distribution as in model inversion attack ( Yeom et al. , 2018 ) . 
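For concreteness, the two steps can be sketched as follows. The snippet assumes a HuggingFace-style `transformers` classifier and a victim API that returns a K-dimensional probability vector per query; all names are illustrative, and this is a sketch of the general recipe rather than the exact training setup used in the paper.

```python
import torch
import torch.nn.functional as F

def build_transfer_set(queries, victim_api):
    """Step 1: label attacker-crafted queries with the victim's posterior vectors.
    victim_api(text) -> length-K probability vector (assumed black-box interface)."""
    return [(q, torch.tensor(victim_api(q), dtype=torch.float)) for q in queries]

def extraction_step(model, tokenizer, batch, optimizer):
    """Step 2: one distillation-style update of the extracted model on soft labels."""
    texts, soft_labels = zip(*batch)
    enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
    logits = model(**enc).logits                    # local BERT + classification head
    loss = F.kl_div(F.log_softmax(logits, dim=-1),
                    torch.stack(soft_labels), reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```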
The intuition behind AIA is that the BERT representation produced by the extracted model can be used to infer sensitive attributes of the victim model's private training data (Li et al., 2018b; Coavoux et al., 2018; Lyu et al., 2020b). Note that in our work, the only explicit information accessible to the attacker is the prediction returned by the victim model for the chosen inputs, not the original BERT representation. We specifically exploit the BERT representation of the extracted model, as it encodes the most informative signal for the follow-up classification. A more detailed description is given in Appendix B.
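A rough sketch of how the extracted model's representation can be probed is given below. It assumes the attacker holds a small auxiliary set of texts labelled with the sensitive attribute and a HuggingFace-style encoder taken from the extracted model; the probe itself (a logistic regression on the [CLS] vector) is an illustrative choice, not necessarily the classifier used in the paper.

```python
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def cls_representations(texts, encoder, tokenizer):
    enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**enc).last_hidden_state        # [B, T, H]
    return hidden[:, 0, :].cpu().numpy()             # [CLS] vectors

def infer_attribute(aux_texts, aux_attrs, target_texts, encoder, tokenizer):
    probe = LogisticRegression(max_iter=1000)
    probe.fit(cls_representations(aux_texts, encoder, tokenizer), aux_attrs)
    # Predicted sensitive attribute (e.g. gender or age bucket) for the target texts.
    return probe.predict(cls_representations(target_texts, encoder, tokenizer))
```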
This paper studies the vulnerabilities of modern BERT-based classifiers that a service provider hosts behind a black-box inference API. Consistent with prior work [2], the authors succeed in extracting high-performing copies of the APIs by training models on the API's outputs to their queries (akin to distillation). The authors then study two attacks built on the copy model --- private attribute inference on sentences in the API's training data, and adversarial example transfer from the white-box copy model to the black-box API. The authors report high attack success rates, better than those of competitive baselines (which do not require constructing a copy model). A few defences are also explored but are ineffective at preventing these attacks.
SP:38f7fc675b764f1d88f08c6eaaa0daa4b2351d37
The paper is motivated by a challenging problem in deploying neural network-based models in sensitive domains, and research in this direction is essential for making such models usable there. The paper presents a model extraction attack in which the adversary can steal a BERT-based API (i.e., the victim model) without knowing the victim model's architecture, parameters, or training data distribution: the adversary queries the target model with the goal of stealing it and turning it into a white-box model. Using simulated experiments, the authors demonstrate how the extracted model can be exploited to mount an effective attribute inference attack that exposes sensitive information about the training data. They further claim that the extracted model can be used to craft highly transferable adversarial attacks against the original (victim) model.
SP:38f7fc675b764f1d88f08c6eaaa0daa4b2351d37
Sample efficient Quality Diversity for neural continuous control
1 INTRODUCTION . Natural evolution has the fascinating ability to produce organisms that are all high-performing in their respective niche . Inspired by this ability to produce a tremendous diversity of living systems within one run , Quality-Diversity ( QD ) is a new family of optimization algorithms that aim at searching for a collection of both diverse and high-performing solutions ( Pugh et al. , 2016 ) . While classic optimization methods focus on finding a single efficient solution , the role of QD optimization is to cover the range of possible solution types and to return the best solution for each type . This process is sometimes referred to as ” illumination ” in opposition to optimization , as the goal of these algorithms is to reveal ( or illuminate ) a search space of interest ( Mouret & Clune , 2015 ) . QD approaches generally build on black-box optimization methods such as evolutionary algorithms to optimize a population of solutions ( Cully & Demiris , 2017 ) . These algorithms often rely on random mutations to explore small search spaces but struggle when confronted to higher-dimensional problems . As a result , QD approaches often scale poorly in large and continuous sequential decision problems , where using controllers with many parameters such as deep neural networks is mandatory ( Colas et al. , 2020 ) . Besides , while evolutionary methods are the most valuable when the policy gradient can not be applied safely ( Cully et al. , 2015 ) , in policy search problem that can be formalized as a Markov Decision Process ( MDP ) , Policy Gradient ( PG ) methods can exploit the analytical structure of neural networks to more efficiently optimize their parameters . Therefore , it makes sense to exploit these properties when the Markov assumption holds and the controller is a neural network . From the deep reinforcement learning ( RL ) perspective , the focus on sparse or deceptive rewards led to realize that maximizing diversity independently from rewards might be a good exploration strategy ( Lehman & Stanley , 2011a ; Colas et al. , 2018 ; Eysenbach et al. , 2018 ) . More recently , it was established that if one can define a small behavior space or outcome space corresponding to what matters to determine success , maximizing diversity in this space might be the optimal strategy to find a sparse reward ( Doncieux et al. , 2019 ) . In this work , we are the first to combine QD methods with PG methods . From one side , our aim is to strongly improve the sample efficiency of QD methods to get neural controllers solving continuous action space MDPs . From the other side , it is to strongly improve the exploration capabilities of deep RL methods in the context of sparse rewards or deceptive gradients problems , such as avoid traps and dead-ends in navigation tasks . We build on off-policy PG methods to propose a new mutation operator that takes into account the Markovian nature of the problem and analytically exploits the known structure of the neural controller . Our QD-RL algorithm falls within the QD framework described by Cully & Demiris ( 2017 ) and takes advantage of its powerful exploration capabilities , but also demonstrates remarkable sample efficiency brought by off-policy RL methods . We compare QD-RL to several recent algorithms that also combine a diversity objective and a return maximization method , namely the NS-ES family ( Conti et al. , 2018 ) and the ME-ES algorithm ( Colas et al. , 2020 ) and show that QD-RL is two orders of magnitude more sample efficient . 
2 PROBLEM STATEMENT . We consider the general context of a fully observable Markov Decision Problem ( MDP ) ( S , A , T , R , γ , ρ0 ) where S is the state space , A is the action space , T : S × A → S is the transition function , R : S × A → R is the reward function , γ is a discount factor and ρ0 is the initial state distribution . We aim to find a set of parameters θ of a parameterized policy πθ : S → A so as to maximize the objective function J ( θ ) = E_τ [ ∑_t γ^t r_t ] , where τ is a trajectory obtained from πθ starting from state s0 ∼ ρ0 and r_t is the reward obtained along this trajectory at time t . We define the Q-value for policy π , Q^π : S × A → R , as Q^π ( s , a ) = E_τ [ ∑_t γ^t r_t ] , where τ is a trajectory obtained from πθ starting from s and performing initial action a . QD aims at evolving a set of solutions θ that are both diverse and high performing . To measure diversity , we first define a Behavior Descriptor ( BD ) space , which characterizes solutions in functional terms , in addition to their score J ( θ ) . We denote by bd_θ the BD of a solution θ . The solution BD space is often designed using relevant features of the task . For instance , in robot navigation , a relevant BD is the final position of the robot . In robot locomotion , it may rather be the position and/or velocity of the robot center of gravity at specific times . From BDs , we define the diversity ( or novelty ) of a solution as a measure of the difference between its BD and those of the solutions obtained so far . Additionally , we define a state Behavior Descriptor , or state BD , noted bd_t . It is a set of relevant features extracted from a state . From state BDs , we define the BD of a solution θ as a function of all state BDs encountered by policy πθ when interacting with the environment , as illustrated in Figure 1a . More formally , we note bd_θ = E_τ [ f_bd ( { bd_1 , . . . , bd_T } ) ] , where T is the trajectory length and f_bd is an aggregation function . For instance , f_bd can average over state BDs or return only the last state BD of the trajectory . If we consider again robot navigation , a state BD bd_t may represent the position of the robot at time t and the solution BD bd_θ may be the final position of the robot . With state BDs , we measure the novelty of a state relative to all other seen states . The way we compute diversity at the solution and the state levels is explained in Section 4 .
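To make the behavior descriptor definitions above concrete , here is a minimal Python sketch ( ours , not the authors' code ; the aggregation choices and example values are illustrative ) of how a solution BD bd_θ can be derived from the state BDs bd_t of a single rollout , using either the averaging or the last-state aggregation f_bd mentioned above .

    # Illustrative sketch (not the authors' code): computing a solution BD bd_theta
    # from the state BDs bd_t collected along one trajectory.
    from typing import Callable, List, Sequence

    def mean_bd(state_bds: Sequence[Sequence[float]]) -> List[float]:
        # f_bd choice 1: average each BD dimension over the trajectory.
        dims = len(state_bds[0])
        return [sum(bd[d] for bd in state_bds) / len(state_bds) for d in range(dims)]

    def last_bd(state_bds: Sequence[Sequence[float]]) -> List[float]:
        # f_bd choice 2: keep only the final state BD (e.g., final robot position).
        return list(state_bds[-1])

    def solution_bd(state_bds: Sequence[Sequence[float]],
                    f_bd: Callable[[Sequence[Sequence[float]]], List[float]] = last_bd) -> List[float]:
        # bd_theta = f_bd({bd_1, ..., bd_T}) for one trajectory; the expectation over
        # trajectories would be approximated by averaging over several rollouts.
        return f_bd(state_bds)

    # Example: a 2-D navigation task where the state BD is the (x, y) position.
    trajectory_bds = [[0.0, 0.0], [0.5, 0.2], [1.0, 0.9]]
    print(solution_bd(trajectory_bds))            # final position -> [1.0, 0.9]
    print(solution_bd(trajectory_bds, mean_bd))   # averaged position -> [0.5, ~0.37]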
3 RELATED WORK . A distinguishing feature of our approach is that we combine diversity seeking at the level of trajectories using solution BDs bd_θ and diversity seeking in the state space using state BDs bd_t . The former is used to select agents from the archive in the QD part of the architecture , whereas the latter is used during policy gradient steps in the RL part , see Figure 1b . We organize the literature review below according to this split between two types of diversity seeking mechanisms . Besides , some families of methods are related to our work to a lesser extent . This is the case of algorithms combining evolutionary approaches and deep RL such as CEM-RL ( Pourchot & Sigaud , 2018 ) , ERL ( Khadka & Tumer , 2018 ) and CERL ( Khadka et al. , 2019 ) , algorithms maintaining a population of RL agents for exploration without an explicit diversity criterion ( Jaderberg et al. , 2017 ) , and algorithms explicitly looking for diversity but in the action space rather than in the state space like ARAC ( Doan et al. , 2019 ) , P3S-TD3 ( Jung et al. , 2020 ) and DvD ( Parker-Holder et al. , 2020 ) . We include CEM-RL as one of our baselines . Seeking diversity and performance in the space of solutions . Simultaneously maximizing diversity and performance is the central goal of QD methods ( Pugh et al. , 2016 ; Cully & Demiris , 2017 ) . Among the various possible combinations offered by the QD framework , Novelty Search with Local Competition ( NSLC ) ( Lehman & Stanley , 2011b ) and MAP-Elites ( ME ) ( Mouret & Clune , 2015 ) are the two most popular algorithms . In ME , the BD space is discretized into a grid of solution bins . Each bin is a niche of the BD space and the selection mechanism samples individuals uniformly from all bins ( see the short sketch after this related-work discussion ) . On the other hand , NSLC builds on the Novelty Search ( NS ) algorithm ( Lehman & Stanley , 2011a ) and maintains an unstructured archive of solutions selected for their local performance . Cully & Mouret ( 2013 ) augment the NSLC archive by replacing solutions when they outperform the already stored ones . In QD-RL , we build on the standard ME approach and on an augmented version of the NSLC algorithm where the population is selected from a global quality-diversity Pareto front inspired by Cully & Demiris ( 2017 ) . Relying on these components , algorithms such as QD-ES and NSR-ES have been applied to challenging continuous control environments in Conti et al . ( 2018 ) . But , as outlined in Colas et al . ( 2020 ) , these approaches are not sample efficient , and the diversity and environment reward functions could be mixed in a more efficient way . In that respect , the work most closely related to ours is Colas et al . ( 2020 ) . The ME-ES algorithm also optimizes both quality and diversity , using an archive , two ES populations and the MAP-ELITES approach . Using such distributional ES methods has been shown to be substantially more efficient than population-based GA algorithms ( Salimans et al. , 2017 ) , but our results show that they are still less sample efficient than off-policy deep RL methods as they do not leverage the policy gradient . Finally , similarly to us , the authors of Shi et al . ( 2020 ) try to combine novelty search and deep RL by defining behavior descriptors and using an NS part on top of a deep RL part in their architecture . In contrast with our work , the transfer from novel behaviors to reward efficient behaviors is obtained through goal-conditioned policies , where the RL part uses goals corresponding to outcomes found by the most novel agents in the population . But the deep RL part does not contain a diversity seeking mechanism in the state space . Seeking diversity and performance in the state space . Seeking diversity in the space of states or actions is generally framed within the RL framework . An exception is Stanton & Clune ( 2016 ) , who define a notion of intra-life novelty that is similar to our state novelty described in Section 2 . But their novelty relies on skills rather than states . Our work is also related to algorithms using RL mechanisms to search for diversity only , such as Eysenbach et al . ( 2018 ) ; Pong et al . ( 2019 ) ; Lee et al . ( 2019 ) ; Islam et al . ( 2019 ) . These methods have proven useful in the sparse reward case , but they are inherently limited when the reward signal can orient exploration , as they ignore it . Other works sequentially combine diversity seeking and RL . The GEP-PG algorithm ( Colas et al. , 2018 ) combines a diversity seeking component , namely Goal Exploration Processes ( Forestier et al. , 2017 ) , and the DDPG deep RL algorithm ( Lillicrap et al.
, 2015 ) . This sequential combination of exploration-then-exploitation is also present in GO-EXPLORE ( Ecoffet et al. , 2019 ) and in PBCS ( Matheron et al. , 2020 ) . Again , this approach is limited when the reward signal can help orient the exploration process towards a satisfactory solution . These sequential approaches first look for diversity in the space of trajectories , then optimize performance in the state action space , whereas we do so simultaneously in the space of trajectories and in the state space . Thus , as far as we know , QD-RL is the first algorithm optimizing both diversity and performance in the solution space and in the state space , using a sample efficient off-policy deep RL method for the latter .
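The MAP-Elites selection scheme referred to above can be summarized in a few lines of code . The following sketch is our own simplification ( a 2-D BD grid with hypothetical bounds ) , not the QD-RL implementation : each cell of the discretized BD space stores only its best-performing solution , and parents are sampled uniformly from the filled cells .

    # Minimal MAP-Elites-style archive over a discretized 2-D BD space (illustrative only).
    import random

    class MapElitesArchive:
        def __init__(self, bins_per_dim=10, lo=0.0, hi=1.0):
            self.bins_per_dim = bins_per_dim
            self.lo, self.hi = lo, hi
            self.grid = {}  # maps a bin index tuple -> (fitness, solution)

        def _bin(self, bd):
            # Map a behavior descriptor to its grid cell.
            scale = self.bins_per_dim / (self.hi - self.lo)
            return tuple(min(self.bins_per_dim - 1, int((b - self.lo) * scale)) for b in bd)

        def insert(self, solution, bd, fitness):
            # Keep the solution only if its bin is empty or it beats the current elite.
            cell = self._bin(bd)
            if cell not in self.grid or fitness > self.grid[cell][0]:
                self.grid[cell] = (fitness, solution)

        def sample_parent(self):
            # Uniform selection over filled bins, as in standard MAP-Elites.
            cell = random.choice(list(self.grid.keys()))
            return self.grid[cell][1]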
This paper addresses the problem of hard exploration / escaping local minima in continuous control, by optimizing a population of agents for both environment reward and diversity, using both off-policy RL and Quality-Diversity (QD) optimization. Each agent is individually optimized using off-policy RL for either environment reward or diversity, and at a population level, individuals are selected using a QD method such as Map-Elites. On a hard exploration problem, Ant-Maze (Colas et al 2020), the proposed method is shown to be significantly more data-efficient compared to several other SOTA ES methods (ME-ES (Colas et al, 2020), NS-ES (Conti et al 2018), NSR-ES (Conti et al 2018)).
SP:3a035de1d4b8f02e2896e2376965b6a259ac9867
The authors describe a QD-RL algorithm to solve continuous control problems with neural controllers. The authors state that they maximize diversity within the population and "the return of each individual agent". Furthermore, the authors state that QD-RL selects agents from a Pareto front or from a Map-Elites grid. The paper is weak at evaluating its performance in a clear way. The overall structure in Section 4 is not well defined and difficult to follow. Descriptions of the methods and technical details of the proposed study are incomplete. Furthermore, the literature review simply lists studies without presenting a coherent and systematic introduction or critical evaluation. Overall, the contribution of the paper is not significant.
SP:3a035de1d4b8f02e2896e2376965b6a259ac9867
AUL is a better optimization metric in PU learning
1 INTRODUCTION . Classic binary classification tasks in machine learning usually assume that all data are fully labeled as positive or negative ( PN learning ) . However , in real-world applications , datasets are usually non-ideal and only a small fraction of the positive data are labeled . Training a model from such partially labeled positive data is called positive-unlabeled ( PU ) learning . Take financial fraud detection as an example . Some fraudulent behaviors are found and can be labeled as positive , but we can not simply regard the remaining data as negative , because in most cases only a subset of fraudulent behaviors is detected and the remaining data may also contain undetected positive data . As a result , the remaining data can only be regarded as unlabeled . Other typical PU learning applications include text classification , drug discovery , outlier detection , malicious URL detection , online advertising , etc . ( Yu et al . ( 2002 ) , Li & Liu ( 2003 ) , Li et al . ( 2009 ) , Blanchard et al . ( 2010 ) , Zhang et al . ( 2017 ) , Wu et al . ( 2018 ) ) . A naive way for PU learning is treating unlabeled data as negative and using traditional PN learning algorithms . But the model trained in this way is biased and its prediction results are not reliable ( Elkan & Noto ( 2008 ) ) . Some early works try to recover labels for unlabeled data by heuristic algorithms , such as S-EM ( Liu et al . ( 2002 ) ) , 1-DNF ( Yu et al . ( 2002 ) ) , Rocchio ( Li & Liu ( 2003 ) ) , and k-means ( Chaudhari & Shevade ( 2012 ) ) . But the performance of the heuristic algorithms , which is critical to these works , is not guaranteed . Other kinds of methods introduce an unbiased risk estimator to eliminate the bias ( Du Plessis et al . ( 2014 ) , Du Plessis et al . ( 2015 ) , Kiryo et al . ( 2017 ) ) . However , these methods rely on knowledge of the proportion of positive samples in the unlabeled samples , which is also unknown in practice . Another annoying problem of PU learning is how to accurately evaluate the model 's performance . Model performance is usually evaluated by metrics such as accuracy , precision , recall , F-score , AUC ( Area Under ROC Curve ) , etc . During the life cycle of a model , its performance is usually monitored to ensure that the model keeps a desired level of performance as data varies and grows . In PU learning , the metrics above are also biased due to the unknown proportion of positive samples . Although Menon et al . ( 2015 ) prove that the ground-truth AUC ( AUC ) and the AUC estimated from PU data ( AUC_PU ) are linearly correlated , which indicates that AUC_PU can be used to compare the performances of two models , it is still not possible to evaluate the true performance of a single model . Consider a situation where a model is evaluated on two different PU datasets generated from the same PN dataset but with different positive sample proportions . The ground-truth AUC , which indicates the true performance of the model , is the same on the two datasets , but the AUC_PU values on the two datasets are different . Hence , AUC_PU can not be used to directly evaluate the model 's performance . Jain et al . ( 2017 ) and Ramola et al . ( 2019 ) show that they can correct AUC_PU , accuracy_PU , balanced accuracy_PU , F-score_PU and the Matthews correlation coefficient , with knowledge of the proportion of positive samples . However , this proportion is difficult to obtain in practice .
Recently , many works have focused on estimating the proportion of positive samples ( Du Plessis & Sugiyama ( 2014 ) , Christoffel et al . ( 2016 ) , Ramaswamy et al . ( 2016 ) , Jain et al . ( 2016 ) , Bekker & Davis ( 2018 ) , Zeiberg et al . ( 2020 ) ) ; these are called mixture proportion estimation ( MPE ) algorithms . Yet according to our experiments on 9 datasets , the estimation methods still introduce some errors and thus make the corrected metrics inaccurate . Besides , the MPE algorithms may also introduce non-trivial computational overhead ( up to 2,000 seconds per proportion estimation in our experiments ) , which slows down the evaluation process . In this work , we find that the Area Under the Lift chart ( AUL ) ( Vuk & Curk ( 2006 ) , Tufféry ( 2011 ) ) is a discriminating , unbiased and computation-friendly metric for PU learning . We make the following contributions . a ) We theoretically prove that the AUL estimate is unbiased with respect to the ground-truth AUL and derive a theoretical bound on the estimation error . b ) We carry out an experimental evaluation on 9 datasets and the results show that the average absolute error of AUL estimation is only 1/6 of that of AUC estimation , which means AUL estimation is more accurate and more stable than AUC estimation . c ) By experiments we also find that , compared with a state-of-the-art AUC-optimization algorithm , the AUL-optimization algorithm can not only significantly save computational cost , but also improve the model performance by up to 10 % . The remainder of this paper is organized as follows . Section 2 describes the background knowledge . Section 3 theoretically proves the unbiasedness of AUL estimation in PU learning . Section 4 evaluates the performance of AUL estimation by experiments on 9 datasets . Section 5 experimentally shows the performance of the AUL-optimization algorithm by applying AUL in PU learning . Section 6 concludes the whole paper . 2 BACKGROUND . Binary Classification Problem : Let D = { < x_i , y_i > , i = 1 , ... , n } be a positive and negative ( PN ) dataset with n instances . Each tuple < x_i , y_i > is a record , in which x_i ∈ R^d is the feature vector and y_i ∈ { 1 , 0 } is the corresponding ground-truth label . Let X^P , X^N be the sets of feature vectors of positive and negative samples respectively , and n^P , n^N the numbers of samples in these sets respectively : X^P = { x_i | y_i = 1 } and X^N = { x_i | y_i = 0 } . In PU learning , we use α = n^P / ( n^P + n^N ) = n^P / n to denote the proportion of positive samples among all samples . Confusion Matrix : A confusion matrix is used to compare the performance of different binary classification algorithms . In a confusion matrix , true positives ( TP ) ( actual label and predicted label are both positive ) , true negatives ( TN ) ( actual label and predicted label are both negative ) , false positives ( FP ) ( actually negative but predicted as positive ) , and false negatives ( FN ) ( actually positive but predicted as negative ) are counted according to the model 's outputs . Obviously , n^TP + n^FN = n^P and n^TN + n^FP = n^N . ROC : Since the numbers of TP , TN , FP and FN in a confusion matrix are highly related to the classification threshold θ , the Receiver Operating Characteristic ( ROC ) curve ( Fawcett , 2003 ) is proposed to plot ( x , y ) = ( fpr ( θ ) , tpr ( θ ) ) over all possible classification thresholds θ . In some literature , tpr is also known as sensitivity and the value of 1 − fpr is called specificity .
The rates are defined as true positive rate ( tpr ) = n^TP / n^P and false positive rate ( fpr ) = n^FP / n^N . AUC : As a curve , the ROC is not convenient enough to describe model performance . Consequently , the Area Under the ROC Curve ( AUC ) , which is a single value , was proposed and is widely used as a metric to evaluate a binary classification algorithm . AUC provides a summary of model performance under all possible classification thresholds . It also has an elegant probabilistic interpretation : AUC is the probability of correct ranking between a random positive sample and a random negative sample ( Hanley & McNeil , 1982 ) , which is a kind of ranking capability . According to Vuk & Curk ( 2006 ) , for a model g : R^d → R , AUC can be computed as AUC = ( 1 / ( n^P n^N ) ) ∑_{x_i ∈ X^P} ∑_{x_j ∈ X^N} S ( g ( x_i ) , g ( x_j ) ) ( 1 ) , where S ( a , b ) = 1 if a > b , S ( a , b ) = 1/2 if a = b , and S ( a , b ) = 0 if a < b . It is worth noting that there are other ways to calculate AUC , but they are essentially the same . AUL : The Lift curve , which is popular in econometrics for deciding on a suitable marketing strategy ( Tufféry ( 2011 ) , Vuk & Curk ( 2006 ) ) , has not been well studied in the machine learning field . The Lift curve can be seen as a variant of the ROC : it plots ( x , y ) = ( Yrate ( θ ) , tpr ( θ ) ) over all possible classification thresholds θ , where Yrate = ( n^TP + n^FP ) / n represents the proportion of samples predicted as positive . The Lift curve thus has the same y-axis as the ROC curve , but a different x-axis . The Area Under the Lift chart ( AUL ) ( Vuk & Curk ( 2006 ) , Tufféry ( 2011 ) ) can also be used as a metric to evaluate model performance . One way to compute AUL is AUL = ( 1 / ( n^P n ) ) ∑_{x_i ∈ X^P} ∑_{x_j ∈ X^P ∪ X^N} S ( g ( x_i ) , g ( x_j ) ) ( 2 ) . Essentially , AUL can be regarded as the probability of correct ranking between a random positive sample and a random sample . AUL and AUC are linearly related ( Tufféry , 2011 ) , i.e. , AUL = 0.5 α + ( 1 − α ) AUC , which shows that AUL has the same discriminating power as AUC .
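To make Eq . ( 1 ) and Eq . ( 2 ) concrete , here is a small illustrative Python sketch ( the function names are ours ; the paper provides no code ) that computes AUC and AUL directly from the pairwise scoring function S , exactly as written above .

    # Illustrative implementation of S, AUC (Eq. 1) and AUL (Eq. 2).
    def S(a, b):
        # Pairwise ranking score: 1 if a > b, 1/2 if a == b, 0 otherwise.
        return 1.0 if a > b else (0.5 if a == b else 0.0)

    def auc(scores, y):
        # y: ground-truth labels in {0, 1}; Eq. (1) compares positives against negatives.
        pos = [s for s, label in zip(scores, y) if label == 1]
        neg = [s for s, label in zip(scores, y) if label == 0]
        return sum(S(p, n) for p in pos for n in neg) / (len(pos) * len(neg))

    def aul(scores, y):
        # Eq. (2) compares positives against all samples (including themselves).
        pos = [s for s, label in zip(scores, y) if label == 1]
        return sum(S(p, q) for p in pos for q in scores) / (len(pos) * len(scores))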
3 UNBIASEDNESS OF AUL ESTIMATION IN PU LEARNING : THEORETICAL PROOF . A PU dataset D' = { < x_i , y_i , s_i > , s_i ∈ { 1 , 0 } , i = 1 , ... , n } is generated from D by sampling a subset of the positive data as labeled and leaving the rest as unlabeled . In D' , s_i is the observed label and y_i is the ground-truth label , which may be unknown . If s_i = 1 , we can confirm y_i = 1 ( positive ) . If s_i = 0 , y_i may be 1 or 0 . In this paper , we assume that the labeled data is Selected Completely At Random ( SCAR ) ( Bekker & Davis , 2018 ) from the positive data . Therefore the distribution of labeled samples in D' is the same as the distribution of positive samples in D . Let X^L , X^U be the sets of feature vectors of labeled and unlabeled samples respectively , and n^L , n^U the numbers of samples in these sets respectively : X^L = { x_i | s_i = 1 } and X^U = { x_i | s_i = 0 } . We use β = n^L / n^P to denote the proportion of labeled samples among the positive samples . AUC Estimation is Biased : To calculate AUC on a PU dataset ( AUC_PU ) , unlabeled data is regarded as negative , thus we have AUC_PU = ( 1 / ( n^L n^U ) ) ∑_{x_i ∈ X^L} ∑_{x_j ∈ X^U} S ( g ( x_i ) , g ( x_j ) ) ( 3 ) , where the function S is the same as in Eq . 1 . The expectation of AUC_PU over the distribution of D' is E [ AUC_PU ] = ( ( 1 − α ) / ( 1 − αβ ) ) ( AUC − 0.5 ) + 0.5 . This formula is slightly different from the one in Menon et al . ( 2015 ) : here we define AUC on a specific dataset rather than on a distribution . The formula indicates that AUC_PU is a biased estimate of AUC . We demonstrate the bias on an example dataset ( Table 1a ) , which contains 20 samples sorted by prediction score . Figure 1b illustrates two ROC curves on this dataset : curve-ROC is plotted with the ground-truth label y and curve-ROC_PU is plotted with the observed label s . We can see that curve-ROC is almost entirely above curve-ROC_PU . The corresponding areas are AUC = 0.740 and AUC_PU = 0.653 respectively , so there is a big difference ( 0.087 ) between the two measurements . As we discussed in Section 1 , Jain et al . ( 2017 ) try to recover AUC from AUC_PU via an estimate of ( 1 − α ) / ( 1 − αβ ) . To estimate ( 1 − α ) / ( 1 − αβ ) , some works ( Elkan & Noto ( 2008 ) , Du Plessis & Sugiyama ( 2014 ) , Sanderson & Scott ( 2014 ) , Jain et al . ( 2016 ) , Ramaswamy et al . ( 2016 ) , Christoffel et al . ( 2016 ) , Bekker & Davis ( 2018 ) , Zeiberg et al . ( 2020 ) ) develop mixture proportion estimation ( MPE ) algorithms . But according to our experiments on 9 datasets , these algorithms are neither accurate enough nor time-saving . AUL Estimation is Unbiased : Similar to AUC_PU , AUL on a PU dataset ( AUL_PU ) can be calculated as AUL_PU = ( 1 / ( n^L n ) ) ∑_{x_i ∈ X^L} ∑_{x_j ∈ X^L ∪ X^U} S ( g ( x_i ) , g ( x_j ) ) ( 4 ) . Unlike AUC_PU , AUL_PU is an unbiased estimate of AUL . In contrast to Figure 1b , Figure 1c illustrates two Lift curves which are very close to each other : curve-lift is plotted with the ground-truth label y and curve-lift_PU is plotted with the observed label s . The corresponding areas , AUL = 0.620 and AUL_PU = 0.615 , are very close . We then prove the unbiasedness . Theorem 1 : For a given classifier g : R^d → R , a PN dataset D , and a PU dataset D' generated following SCAR with a proportion of labeled samples among the positive samples β = n^L / n^P , the expectation and variance of AUL_PU over the distribution of D' are E [ AUL_PU ] = AUL and Var [ AUL_PU ] = ( ( n^P − n^L ) / ( n^P − 1 ) ) ( σ² / n^L ) , where σ² is the variance of { ( 1 / n ) ∑_{x_j ∈ X^P ∪ X^N} S ( g ( x_i ) , g ( x_j ) ) , i = 1 , ... , n^P } . Proof : Let t_{x_i} = ( 1 / n ) ∑_{x_j ∈ X^P ∪ X^N} S ( g ( x_i ) , g ( x_j ) ) = ( 1 / n ) ∑_{x_j ∈ X^L ∪ X^U} S ( g ( x_i ) , g ( x_j ) ) . Then AUL = ( 1 / n^P ) ∑_{x_i ∈ X^P} t_{x_i} and AUL_PU = ( 1 / n^L ) ∑_{x_i ∈ X^L} t_{x_i} . X^L is generated by random sampling without replacement from X^P , hence AUL_PU is an estimate of the mean of { t_{x_i} , x_i ∈ X^P } , which is AUL . According to the theory of simple random sampling without replacement ( Lohr ( 2009 ) ) , the sample mean AUL_PU is an unbiased estimator of the population mean AUL , i.e. , E [ AUL_PU ] = AUL , and the variance of AUL_PU is Var [ AUL_PU ] = ( 1 − n^L / n^P ) ( 1 / n^L ) ( ∑_{x_i ∈ X^P} ( t_{x_i} − t̄ )² / ( n^P − 1 ) ) = ( ( n^P − n^L ) / ( n^P − 1 ) ) ( 1 / n^L ) ( ∑_{x_i ∈ X^P} ( t_{x_i} − t̄ )² / n^P ) = ( ( n^P − n^L ) / ( n^P − 1 ) ) ( σ² / n^L ) . Combining Theorem 1 with Chebyshev 's inequality , we have P ( | AUL − AUL_PU | ≥ ε ) ≤ Var [ AUL_PU ] / ε² = ( σ² / ( n^L ε² ) ) ( 1 − β ) n^P / ( n^P − 1 ) . Note that 0 < t_{x_i} < 1 , hence σ² = E [ ( t − t̄ )² ] ≤ E [ ( t − t̄ )² − ( t − 0 ) ( t − 1 ) ] = E [ − 2 t t̄ + t̄² + t ] = t̄ − t̄² ≤ 1/4 , and therefore P ( | AUL − AUL_PU | ≥ ε ) ≤ ( ( 1 − β ) / ( 4 n^L ε² ) ) n^P / ( n^P − 1 ) ≈ ( 1 − β ) / ( 4 n^L ε² ) . This inequality gives a theoretical bound on the error between AUL and AUL_PU : a few hundred labeled samples n^L suffice to reduce the error to an acceptable level . Table 1a : example dataset of 20 samples sorted by prediction score , with ground-truth label y and observed label s .
score : 0.92 0.82 0.73 0.66 0.60 0.58 0.54 0.50 0.45 0.43 0.41 0.39 0.38 0.36 0.35 0.30 0.25 0.20 0.15 0.10
y :        1    1    0    1    1    1    1    0    0    0    1    0    1    1    0    0    0    1    0    0
s :        1    0    0    1    0    0    1    0    0    0    1    0    0    0    0    0    0    1    0    0
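As a sanity check of the values reported for Table 1a and Figures 1b-1c , the following self-contained sketch ( ours ) recomputes the four quantities from the listed scores and labels ; it reproduces AUC = 0.740 , AUC_PU ≈ 0.653 , AUL = 0.620 and AUL_PU = 0.615 , and thereby also the linear relation AUL = 0.5 α + ( 1 − α ) AUC with α = 0.5 on this dataset .

    # Recompute the Table 1a example: 20 samples sorted by score, ground-truth y, observed s.
    scores = [0.92, 0.82, 0.73, 0.66, 0.60, 0.58, 0.54, 0.50, 0.45, 0.43,
              0.41, 0.39, 0.38, 0.36, 0.35, 0.30, 0.25, 0.20, 0.15, 0.10]
    y = [1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0]
    s = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0]

    def pairwise(a_set, b_set):
        # Mean of S(a, b) over all pairs, with S(a, b) = 1, 1/2, 0 for a >, =, < b.
        total = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a in a_set for b in b_set)
        return total / (len(a_set) * len(b_set))

    pos = [sc for sc, lab in zip(scores, y) if lab == 1]
    neg = [sc for sc, lab in zip(scores, y) if lab == 0]
    labeled = [sc for sc, flag in zip(scores, s) if flag == 1]
    unlabeled = [sc for sc, flag in zip(scores, s) if flag == 0]

    print(round(pairwise(pos, neg), 3))            # AUC (Eq. 1)    -> 0.74
    print(round(pairwise(labeled, unlabeled), 3))  # AUC_PU (Eq. 3) -> 0.653
    print(round(pairwise(pos, scores), 3))         # AUL (Eq. 2)    -> 0.62
    print(round(pairwise(labeled, scores), 3))     # AUL_PU (Eq. 4) -> 0.615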
The paper argues that AUL is a better metric than AUC under the PU (positive and unlabeled data) learning setup, in the sense that it admits an unbiased estimator in this setting, which is not the case for the commonly used and well-known metric, AUC. It is also argued that optimizing AUL leads to better performance than methods which directly optimize an AUC-based metric, and that it is computationally cheaper to evaluate than methods which attempt to estimate the unknown parameters (\alpha, \beta in the paper). The appropriateness of the metric in the PU learning setting is demonstrated on UCI datasets.
SP:36bf1ac338d0c4184cad1369aabbe0662734a9aa
In this paper, the author proposed to use Area Under Lift chart (AUL) as a new optimization metric for positive unlabeled (PU) learning. The proposed AUL can be estimated unbiasedly from PU data, without the need to estimate the mixture proportions. Experiments on several datasets show that the proposed method outperforms AUC optimization algorithms.
SP:36bf1ac338d0c4184cad1369aabbe0662734a9aa
Model-based Asynchronous Hyperparameter and Neural Architecture Search
1 INTRODUCTION . The goal of hyperparameter and neural architecture search ( HNAS ) is to automate the process of finding the right architecture or hyperparameters x* ∈ argmin_{x ∈ X} f ( x ) of a deep neural network by minimizing the validation loss f ( x ) , observed through noise : y_i = f ( x_i ) + ε_i , ε_i ∼ N ( 0 , σ² ) , i = 1 , . . . , n . Bayesian optimization ( BO ) is an effective model-based approach for solving expensive black-box optimization problems ( Jones et al. , 1998 ; Shahriari et al. , 2016 ) . It constructs a probabilistic surrogate model of the loss function p ( f | D ) based on previous evaluations D = { ( x_i , y_i ) }_{i=1}^{n} . Searching for the global minimum of f is then driven by trading off exploration in regions of uncertainty and exploitation in regions where the global optimum is likely to reside . However , for HNAS problems , standard BO needs to be augmented in order to remain competitive . For example , training runs for unpromising configurations x can be stopped early , but serve as low-fidelity approximations of f ( x ) ( Swersky et al. , 2014 ; Domhan et al. , 2015 ; Klein et al. , 2017b ) . Further , evaluations of f can be executed in parallel to reduce the wall-clock time required for finding a good solution . Several methods for multi-fidelity and/or distributed BO have been proposed ( Kandasamy et al. , 2017 ; 2016 ; Takeno et al. , 2020 ) , but they rely on rather complicated approximations to either select the fidelity level or to compute an information theoretic acquisition function to determine the next candidate . In this work we aim to adopt the desiderata of Falkner et al . ( 2018 ) , namely simplicity , which often leads to more robust methods in practice . A simple , easily parallelizable multi-fidelity scheduling algorithm is successive halving ( SH ) ( Karnin et al. , 2013 ; Jamieson & Talwalkar , 2016 ) , which iteratively eliminates poorly performing neural networks over time . Hyperband ( Li et al. , 2017 ) iterates over multiple rounds of SH with varying ratios between the number of configurations and the minimum amount of resources spent per configuration . Falkner et al . ( 2018 ) introduced a hybrid model , called BOHB , that uses a probabilistic model to guide the search while retaining the efficient any-time performance of Hyperband . However , both SH and BOHB can be bottlenecked due to their synchronous nature : stopping decisions are made only after synchronizing all training jobs at certain resource levels ( called rungs ) . This approach is wasteful when the evaluation of some configurations takes longer than others , as is often the case when training neural networks ( Ying et al. , 2019 ) , and can substantially delay progress towards high-performing configurations ( see the example shown in Figure 1 ) . Recently , Li et al . ( 2018 ) proposed ASHA , which adapts successive halving to the asynchronous parallel case . Even though ASHA only relies on the random sampling of new configurations , it has been shown to outperform synchronous SH and BOHB . In this work , we augment ASHA with a Gaussian process ( GP ) surrogate , which jointly models the performance across configurations and rungs to improve the already strong performance of ASHA . The asynchronous nature further requires handling pending evaluations in a principled way to obtain an efficient model-based searcher . 1.1 CONTRIBUTIONS .
Dealing with hyperparameter optimization ( HPO ) in general , and HNAS for neural networks in particular , we would like to make the most efficient use of a given parallel computation budget ( e.g. , number of compute instances ) in order to find high-accuracy solutions in the shortest wall-clock time possible . Besides exploiting low-fidelity approximations , we demonstrate that asynchronous parallel scheduling is a decisive factor in cost-efficient search . We further handle pending evaluations through fantasizing ( Snoek et al. , 2012 ) , which is critical for asynchronous searchers . Our novel combination of asynchronous SH with multi-fidelity BO , dubbed MOdel Based aSynchronous mulTi fidelity optimizER ( MOBSTER ) , substantially outperforms either of them in isolation . More specifically : • We clarify differences between existing asynchronous SH extensions : ASHA , as described by Li et al . ( 2018 ) , and an arguably simpler stopping rule variant , related to the median rule ( Golovin et al. , 2017 ) and first implemented in Ray Tune ( Liaw et al. , 2018 ) . Although their differences seem subtle , they lead to substantially different behaviour in practice . • We present an extensive ablation study , comparing and analysing different components for asynchronous HNAS . While these individual components are not novel , one of our main contributions is to show that , by systematically combining them to form MOBSTER , we obtain a reliable and more efficient method than other recently proposed approaches . Due to limited space , we present a detailed description of the technical nuances and complexities of model-based asynchronous multi-fidelity HNAS in Appendix A.3 . • On a variety of neural network benchmarks , we show that MOBSTER is more efficient in terms of wall-clock time than other state-of-the-art algorithms . Unlike BOHB , it does not suffer from substantial synchronization overheads when evaluations are expensive . As a result , we can achieve the same performance in the same amount of wall-clock time , often with just half the computational resources compared to random-sampling based ASHA . Next , we relate our work to approaches recently published in the literature . In Section 2 , we review synchronous SH , as well as two asynchronous extensions . Our novel method is presented in Section 3 . We present empirical evaluations for the HNAS of various neural architecture types in Section 4 , and finish with conclusions and future work in Section 5 . 1.2 RELATED WORK . A range of prior Gaussian process ( GP ) based BO work exploits multiple fidelities of the objective ( Kennedy & O'Hagan , 2000 ; Klein et al. , 2017a ; Poloczek et al. , 2018 ; Cutajar et al. , 2018 ) . A joint GP model across configurations and tasks was used by Swersky et al . ( 2013 ) , allowing for trade-offs between cheap auxiliaries and the expensive target . Klein et al . ( 2017a ) presented a continuous multi-fidelity BO method , where the training set size is an input . It relies on a complicated and expensive acquisition function , and experiments with asynchronous scheduling are not provided . Kandasamy et al . ( 2017 ) presented BOCA , a Bayesian optimization method that exploits general continuous fidelities of the objective function . While BOCA uses a model similar to ours , it employs a different strategy to select the fidelity , which appears to work less efficiently in practice , as we show in our experiments . The " freeze-thaw " approach ( Swersky et al.
, 2014 ) allows for asynchronous parallel evaluations , yet its scheduling is quite different from asynchronous SH . They propose an exponential decay kernel , a refinement of which we use here . No results for deep neural network tuning were presented , and no implementation is publicly available . Previous work on asynchronous BO has demonstrated performance gains over synchronous BO methods ( Alvin et al. , 2019 ; Kandasamy et al. , 2016 ) , yet this work did not exploit multiple fidelities of the objective . Concurrently with our work , Takeno et al . ( 2020 ) proposed an asynchronous multi-fidelity BO method . However , it is based on an information-theoretic acquisition function that is substantially more complex than our method . Since no implementation is publicly available , it is omitted from our comparisons . Falkner et al . ( 2018 ) combine synchronous Hyperband with TPE ( Bergstra et al. , 2011 ) as a model . Their method , called BOHB , combines the early speed-ups of Hyperband with fast convergence later on , and shows competitive performance for NAS ( Ying et al. , 2019 ; Dong & Yang , 2020 ) . However , compared to our approach , it fits an independent kernel density estimator at each rung level , which prevents interpolation to higher rung levels . Apart from this , kernel density estimators do not permit fantasizing the potential outcome of a pending configuration ( which makes it unclear how to extend BOHB to the asynchronous setting ) , and they are notoriously sensitive to their bandwidth parameters . 2 SYNCHRONOUS AND ASYNCHRONOUS MULTI-FIDELITY SCHEDULING . Multi-fidelity HNAS considers an objective f ( x , r ) , indexed by the resource level r ∈ { r_min , . . . , r_max } . The target function of interest is f ( x ) = f ( x , r_max ) , while evaluations of f ( x , r ) for r < r_max are cheaper , lower-fidelity evaluations that may be correlated with f ( x ) . Here , we consider the number of training epochs ( i.e. , full sweeps over the data ) as the resource r , though other choices are possible , such as training subset ratios ( Klein et al. , 2017a ) . Two kinds of decisions are made in multi-fidelity HNAS : choosing the configurations to evaluate and scheduling their evaluations . In this work , configurations are chosen non-uniformly based on the GP surrogate model that accounts for pending and actual evaluations . Scheduling decisions are stop/go decisions that happen at certain resource levels called rungs . We make use of successive halving ( SH ) for its simplicity and strong empirical performance ( Jamieson & Talwalkar , 2016 ; Li et al. , 2018 ) . Let η ∈ { 2 , 3 , 4 } be the halving constant , and r_max and r_min the maximum and minimum resource levels for an evaluation . We assume , for simplicity , that r_max / r_min = η^K , where K ∈ N . The full set of rungs is R = { r_min η^k | k = 0 , . . . , K } . In the sequel , the term " rung " will be overloaded , denoting both the resource level at which stop/go decisions are made and the list of configurations that reached it . In synchronous SH , each rung has an a priori fixed size ( i.e. , number of slots for configurations evaluated until the rung level ) , the size ratio between successive rungs being η^{-1} ( e.g. , n ( r ) = r_max / r for r ∈ R ) . A round of the algorithm starts with evaluating n ( r_min ) configurations up to the lowest rung r = r_min , making use of parallel computation if available . Once all of them finish , the η^{-1} fraction of top-performing configurations are promoted to the next rung while the others are terminated .
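As a concrete illustration of the rung grid and the synchronous rung sizes defined above ( the specific values r_min = 1 , r_max = 81 and η = 3 are our own example , not taken from the paper ) :

    # Example rung grid for successive halving (illustrative values).
    r_min, r_max, eta = 1, 81, 3
    K = 0
    while r_min * eta ** K < r_max:
        K += 1  # here K = 4, since r_max / r_min = eta ** K
    rungs = [r_min * eta ** k for k in range(K + 1)]   # [1, 3, 9, 27, 81]
    sync_sizes = {r: r_max // r for r in rungs}        # n(r) = r_max / r
    print(rungs)       # resource levels at which stop/go decisions are made
    print(sync_sizes)  # {1: 81, 3: 27, 9: 9, 27: 3, 81: 1}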
Each rung is a synchronization point for many training tasks: it has to be populated entirely before any configuration can be promoted (see Figure 1, left, for a visualization). Synchronous scheduling makes less efficient use of parallel computation than asynchronous scheduling. Configurations in the same rung may require quite different compute times, for example, if we search over hyperparameters that control the network size. Hence, some workers may simply sit idle at a synchronization point. Evaluations at larger resource levels are observed earlier with asynchronous scheduling. However, the risk is that mediocre configurations are continued simply because they were selected earlier. Figure 1 illustrates the differences between asynchronous and synchronous scheduling. We distinguish two variants of asynchronous SH proposed in prior work. During the optimization process, for any hyperparameter and architecture configuration x ∈ X that is currently being evaluated, a decision needs to be made once it reaches the next rung r ∈ R. Given the newly observed data point y at level r, the binary predicate continue(x, r, y) evaluates to true iff y is in the top 1/η fraction of records at the rung.
Stopping variant. This is a simple extension of the median stopping rule (Golovin et al., 2017), which is implemented in Ray Tune (Liaw et al., 2018). As soon as a job for x reaches a rung r, if continue(x, r, y) is true, it continues towards the next rung level. Otherwise, it is stopped, and the worker becomes free to pick up a novel evaluation. As long as fewer than η configurations have reached a rung, the job is always continued.
Promotion variant. This variant of asynchronous SH was presented by Li et al. (2018) and called ASHA. Note that, in the remaining text, we will refer to general asynchronous SH as ASHA and distinguish the two variants as promotion-based or stopping-based ASHA. Once a job for some x reaches a rung r, it is paused there, and the worker is released. The evaluation of x can be promoted (i.e., continued to the next rung) later on. When a worker becomes available, rungs are scanned in descending order, running a job to promote the first paused x for which continue(x, r, y) is true. When no such paused configuration exists, a new configuration evaluation is started. With fewer than η metrics recorded at a rung, no promotions happen there. The stopping and promotion variants can exhibit rather different behaviour (see Figure 2), as also demonstrated in our experiments. Initially, the stopping variant gives most configurations the benefit of the doubt, while the promotion variant pauses evaluations until they can be compared against a sufficient number of competitors. Finally, note that asynchronous SH can be generalized to asynchronous Hyperband in much the same way as in the synchronous case, as shown by Li et al. (2018). However, it was reported that asynchronous SH typically outperforms asynchronous Hyperband for expensive tuning problems, which we also show in our experiments in Section 4.1. After some initial exploration, both with random and model-based variants, we confirmed this observation. In the following, we will thus restrict our attention to SH scheduling.
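The sketch below contrasts the two decision rules in code; the rung bookkeeping (a dictionary of recorded metrics per rung), the function names, and the convention that lower metric values are better are all illustrative assumptions rather than the paper's exact implementation.

```python
def continue_(rung_records, x, r, y, eta=3):
    """True iff metric y is in the top 1/eta fraction recorded at rung r."""
    records = rung_records.setdefault(r, [])
    records.append(y)
    if len(records) < eta:          # too few competitors: give the benefit of the doubt
        return True
    cutoff = sorted(records)[max(1, len(records) // eta) - 1]
    return y <= cutoff              # lower validation loss is better

# Stopping variant: decide immediately when x reaches rung r.
def stopping_rule(rung_records, x, r, y, eta=3):
    if continue_(rung_records, x, r, y, eta):
        return "continue"           # keep training towards the next rung
    return "stop"                   # free the worker for a new configuration

# Promotion variant (ASHA): pause at the rung; promote later when a worker is free.
def next_promotion(paused, rung_records, eta=3):
    """`paused` maps rung level -> list of (config, metric) waiting there."""
    for r in sorted(paused, reverse=True):               # scan rungs top-down
        for x, y in paused[r]:
            records = rung_records.get(r, [])
            if len(records) >= eta:                      # need enough competitors
                cutoff = sorted(records)[max(1, len(records) // eta) - 1]
                if y <= cutoff:
                    paused[r].remove((x, y))
                    return x, r                          # promote x to the next rung
    return None                                          # otherwise start a new config
```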
The paper proposes a model-based asynchronous multi-fidelity method to optimize hyperparameters and perform neural architecture search (NAS). The paper begins by addressing the differences between synchronous and asynchronous scheduling in Successive Halving (SH) and its variants. It also analyzes the different stopping-rule criteria. Based on these insights, the paper proposes a unified approach called MOBSTER that combines asynchronous scheduling and multi-fidelity Bayesian optimization. The authors empirically demonstrate that MOBSTER can find the optimal hyperparameters (architectures) more efficiently in terms of wall-clock time than other state-of-the-art algorithms on NAS, image classification, and language modelling tasks.
SP:91933924103285d6fcac3649cf407b7021371625
Model-based Asynchronous Hyperparameter and Neural Architecture Search
1 INTRODUCTION. The goal of hyperparameter and neural architecture search (HNAS) is to automate the process of finding the right architecture or hyperparameters x* ∈ argmin_{x∈X} f(x) of a deep neural network by minimizing the validation loss f(x), observed through noise: y_i = f(x_i) + ε_i, ε_i ∼ N(0, σ²), i = 1, ..., n. Bayesian optimization (BO) is an effective model-based approach for solving expensive black-box optimization problems (Jones et al., 1998; Shahriari et al., 2016). It constructs a probabilistic surrogate model of the loss function p(f | D) based on previous evaluations D = {(x_i, y_i)}_{i=1}^{n}. Searching for the global minimum of f is then driven by trading off exploration in regions of uncertainty and exploitation in regions where the global optimum is likely to reside. However, for HNAS problems, standard BO needs to be augmented in order to remain competitive. For example, training runs for unpromising configurations x can be stopped early, but serve as low-fidelity approximations of f(x) (Swersky et al., 2014; Domhan et al., 2015; Klein et al., 2017b). Further, evaluations of f can be executed in parallel to reduce the wall-clock time required for finding a good solution. Several methods for multi-fidelity and/or distributed BO have been proposed (Kandasamy et al., 2017; 2016; Takeno et al., 2020), but they rely on rather complicated approximations to either select the fidelity level or to compute an information-theoretic acquisition function to determine the next candidate. In this work we aim to adopt the desiderata of Falkner et al. (2018), namely that of simplicity, which often leads to more robust methods in practice. A simple, easily parallelizable multi-fidelity scheduling algorithm is successive halving (SH) (Karnin et al., 2013; Jamieson & Talwalkar, 2016), which iteratively eliminates poorly performing neural networks over time. Hyperband (Li et al., 2017) iterates over multiple rounds of SH with varying ratios between the number of configurations and the minimum amount of resources spent per configuration. Falkner et al. (2018) introduced a hybrid model, called BOHB, that uses a probabilistic model to guide the search while retaining the efficient any-time performance of Hyperband. However, both SH and BOHB can be bottlenecked due to their synchronous nature: stopping decisions are made only after synchronizing all training jobs at certain resource levels (called rungs). This approach is wasteful when the evaluation of some configurations takes longer than others, as is often the case when training neural networks (Ying et al., 2019), and can substantially delay progress towards high-performing configurations (see the example shown in Figure 1). Recently, Li et al. (2018) proposed ASHA, which adapts successive halving to the asynchronous parallel case. Even though ASHA relies only on the random sampling of new configurations, it has been shown to outperform synchronous SH and BOHB. In this work, we augment ASHA with a Gaussian process (GP) surrogate, which jointly models the performance across configurations and rungs to improve the already strong performance of ASHA. The asynchronous nature further requires handling pending evaluations in a principled way to obtain an efficient model-based searcher. 1.1 CONTRIBUTIONS.
This paper has proposed to exploit a GP model to represent the correlation between configuration-rung tuples (as is typical in multi-fidelity BO) in asynchronous successive halving (ASHA) (Li et al. 2018), which has resulted in performance improvement over the state of the art, as shown in the experimental results. The experimental results are extensive and compelling. Introducing a GP to model the correlation between configuration-rung tuples in the existing ASHA work offers minimal technical merit though. Can the authors elaborate on whether there is any nontrivial, novel technical challenge with such an integration? This question does not seem to be adequately addressed in the paper.
SP:91933924103285d6fcac3649cf407b7021371625
ATOM3D: Tasks On Molecules in Three Dimensions
1 INTRODUCTION . A molecule ’ s three-dimensional ( 3D ) shape is critical to understanding its physical mechanisms of action , and can be used to answer a number of questions relating to drug discovery , molecular design , and fundamental biology . A molecule ’ s atoms often adopt specific 3D configurations that minimize its free energy , and by representing these 3D positions—the atomistic geometry—we can model this 3D shape in ways that would not be possible with 1D or 2D representations such as linear sequences or chemical bond graphs ( Table 1 ) . However , existing works that examine diverse molecular tasks , such as MoleculeNet ( Wu et al. , 2018 ) or TAPE ( Rao et al. , 2019 ) , focus on these lower dimensional representations . In this work , we demonstrate the benefit yielded by learning on 3D atomistic geometry and promote the development of 3D molecular learning by providing a collection of datasets leveraging this representation . Furthermore , we argue that the atom should be considered a “ machine learning datatype ” in its own right , deserving focused study much like images in computer vision or text in natural language processing . All molecules , including proteins , small molecule compounds , and nucleic acids , can be homogeneously represented as atoms in 3D space . These atoms can only belong to a fixed class of element types ( e.g . carbon , nitrogen , oxygen ) , and are all governed by the same underlying laws of physics , leading to important rotational , translational , and permutational symmetries . These systems also contain higher-level patterns that are poorly characterized , creating a ripe opportunity for learning them from data : though certain basic components are well understood ( e.g . amino acids , nucleic acids , functional groups ) , many others can not easily be defined . These patterns are in turn composed in a hierarchy that itself is only partially elucidated . While deep learning methods such as graph neural networks ( GNNs ) and convolutional neural networks ( CNNs ) seem especially well suited to atomistic geometry , to date there has been no systematic evaluation of such methods on molecular tasks . Additionally , despite the growing number of 3D structures available in databases such as the Protein Data Bank ( PDB ) ( Berman et al. , 2000 ) , they require significant processing before they are useful for machine learning tasks . Inspired by the success of accessible databases such as ImageNet ( Jia Deng et al. , 2009 ) and SQuAD ( Rajpurkar et al. , 2016 ) in sparking progress in their respective fields , we create and curate benchmark datasets for atomistic tasks , process them into a simple and standardized format , systematically benchmark 3D molecular learning methods , and present a set of best practices for other machine learning researchers interested in entering the field of 3D molecular learning . We develop new methods for several datasets and reveal a number of insights related to 3D molecular learning , including the consistent improvements yielded by using atomistic geometry , the lack of a single dominant method , and the presence of several tasks that can be improved through 3D molecular learning . 2 RELATED WORK . Three dimensional molecular data have long been pursued as an attractive source of information in molecular learning and chemoinformatics , but until recently have achieved underwhelming results relative to 1D and 2D representations ( Swamidass et al. , 2005 ; Azencott et al. , 2007 ) . 
However , due to increases in data availability and methodological advances , machine learning methods based on 3D molecular structure have begun to demonstrate significant impact in the last couple of years on specific tasks such as protein structure prediction ( Senior et al. , 2020 ) , equilibrium state sampling ( Noé et al. , 2019 ) , and drug design ( Zhavoronkov et al. , 2019 ) . While there have been some broader assessments of groups of related biological tasks , these have focused on on either 1D ( Rao et al. , 2019 ) or 2D ( Wu et al. , 2018 ) representations . By focusing instead on atomistic geometry , we can consistently improve performance and address disparate problems involving any combination of small molecules , proteins , and nucleic acids through a unified lens . Graph neural networks ( GNNs ) have grown to be a major area of study , providing a natural way of learning from data with complex spatial structure . Many GNN implementations have been motivated by applications to atomic systems , including molecular fingerprinting ( Duvenaud et al. , 2015 ) , property prediction ( Schütt et al. , 2017 ; Gilmer et al. , 2017 ; Liu et al. , 2019 ) , protein interface prediction ( Fout et al. , 2017 ) , and protein design ( Ingraham et al. , 2019 ) . Instead of encoding points in Euclidean space , GNNs encode their pairwise connectivity , capturing a structured representation of atomistic data . Three-dimensional CNNs ( 3DCNNs ) have also become popular as a way to capture these complex 3D geometries . They have been applied to a number of biomolecular applications such as protein interface prediction ( Townshend et al. , 2019 ) , protein model quality assessment ( Pagès et al. , 2019 ; Derevyanko et al. , 2018 ) , protein sequence design ( Anand et al. , 2020 ) , and structure-based drug discovery ( Wallach et al. , 2015 ; Torng & Altman , 2017 ; Ragoza et al. , 2017 ; Jiménez et al. , 2018 ) . These 3DCNNs can encode translational and permutational symmetries , but incur significant computational expense and can not capture rotational symmetries without data augmentation . In an attempt to address many of the problems of representing atomistic geometries , equivariant neural networks ( ENNs ) have emerged as a new class of methods for learning from molecular systems . These networks are built such that geometric transformations of their inputs lead to well-defined transformations of their outputs . This setup leads to the neurons of the network learning rules that resemble physical interactions . Tensor field networks ( Thomas et al. , 2018 ) and Cormorant ( Kondor , 2018 ; Anderson et al. , 2019 ) have applied these principles to atomic systems and begun to demonstrate promise on extended systems ( Eismann et al. , 2020 ; Weiler et al. , 2018 ) . However , in general , these methods have not been applied to larger-scale molecular tasks . 3 3D MOLECULAR LEARNING . We define 3D molecular learning as the set of tasks where the input space is atoms in three dimensions . We write this space as AN where A = P × E. P = R3 is the position space and E = { C , H , O , N , P , S , ... } is the element space . We select 3D molecular learning tasks from structural biophysics and medicinal chemistry that span a variety of molecule types and address a range of important problems . Multiple of these datasets are novel , while others are extracted from existing sources ( Table 2 ) . We provide all datasets in a standardized format that requires no specialized libraries . 
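As a concrete illustration of the input space A^N = (P × E)^N described above, here is a minimal sketch of an atoms-in-3D container together with the kNN connectivity a GNN would consume; the class and field names are illustrative assumptions, not the ATOM3D on-disk format.

```python
from dataclasses import dataclass
import numpy as np

ELEMENTS = ("C", "H", "O", "N", "P", "S")   # a subset of the element space E

@dataclass
class AtomCloud:
    """A molecule as a point in A^N, with A = P x E (position x element)."""
    positions: np.ndarray   # shape (N, 3), coordinates in P = R^3
    elements: np.ndarray    # shape (N,), integer indices into ELEMENTS

    def knn_edges(self, k: int = 16):
        """Pairwise-connectivity edges a GNN would consume (naive O(N^2) version)."""
        d = np.linalg.norm(self.positions[:, None] - self.positions[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return np.argsort(d, axis=1)[:, :k]   # indices of the k nearest atoms

# Example: a water-like toy molecule (coordinates are made up for illustration).
mol = AtomCloud(
    positions=np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]]),
    elements=np.array([ELEMENTS.index("O"), ELEMENTS.index("H"), ELEMENTS.index("H")]),
)
print(mol.knn_edges(k=2))
```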
Alongside these datasets , we present corresponding best practices , including splitting and filtering criteria , to minimize data leakage concerns and ensure generalizability and reproducibility . Taken together , we hope these efforts will lower the barrier to entry for machine learning researchers interested in developing methods for 3D molecular learning and encourage rapid progress in the field . Detailed descriptions of the preparation of each dataset can be found in Appendix C.1 . 3.1 SMALL MOLECULE PROPERTIES ( SMP ) . Impact – Predicting physico-chemical properties of small molecules is a common task in medicinal chemistry and materials design . Quantum chemical calculations can save expensive experiments but are themselves costly and can not cover the huge chemical space spanned by candidate molecules . Dataset – The QM9 dataset ( Ruddigkeit et al. , 2012 ; Ramakrishnan et al. , 2014 ) contains structures and energetic , electronic , and thermodynamic properties for 134,000 stable small organic molecules , obtained from quantum-chemical calculations . Metrics – We predict the molecular properties from the ground-state structure . Split – We split molecules randomly . 3.2 PROTEIN INTERFACE PREDICTION ( PIP ) . Impact – Proteins interact with each other in many scenarios—for example , antibody proteins recognize diseases by binding to antigens . A critical problem in understanding these interactions is to identify which amino acids of two given proteins will interact upon binding . Dataset – For training , we use the Database of Interacting Protein Structures ( DIPS ) , a comprehensive dataset of protein complexes mined from the PDB ( Townshend et al. , 2019 ) . We predict on the Docking Benchmark 5 ( Vreven et al. , 2015 ) , a smaller gold standard dataset . Metrics – We predict if two amino acids will come into contact when their respective proteins bind . Split – We split protein complexes by sequence identity at 30 % . 3.3 RESIDUE IDENTITY ( RES ) . Impact – Understanding the structural role of individual amino acids is important for engineering new proteins . We can understand this role by predicting the propensity for different amino acids at a given protein site based on the surrounding structural environment ( Torng & Altman , 2017 ) . Dataset – We generate a novel dataset consisting of atomic environments extracted from nonredundant structures in the PDB . Metrics – We formulate this as a classification task where we predict the identity of the amino acid in the center of the environment based on all other atoms . Split – We split residue environments by protein topology class . 3.4 MUTATION STABILITY PREDICTION ( MSP ) . Impact – Identifying mutations that stabilize a protein ’ s interactions is a key task in designing new proteins . Experimental techniques for probing these are labor-intensive ( Antikainen & Martin , 2005 ; Lefèvre et al. , 1997 ) , motivating the development of efficient computational methods . Dataset – We derive a novel dataset by collecting single-point mutations from the SKEMPI database ( Jankauskaitė et al. , 2019 ) and model each mutation into the structure to produce mutated structures . Metrics – We formulate this as a binary classification task where we predict whether the stability of the complex increases as a result of the mutation . Split – We split protein complexes by sequence identity at 30 % . 3.5 LIGAND BINDING AFFINITY ( LBA ) . 
Impact – Most therapeutic drugs and many molecules critical for biological signaling take the form of small molecules . Predicting the strength of the protein-small molecule interaction is a challenging but crucial task for drug discovery applications . Dataset – We use the PDBBind database ( Wang et al. , 2004 ; Liu et al. , 2015 ) , a curated database containing protein-ligand complexes from the PDB and their corresponding binding strengths . Metrics – We predict pK = − log ( K ) , where K is the binding affinity in Molar units . Split – We split protein-ligand complexes by protein sequence identity at 30 % . 3.6 LIGAND EFFICACY PREDICTION ( LEP ) . Impact – Many proteins switch on or off their function by changing shape . Predicting which shape a drug will favor is thus an important task in drug design . Dataset – We develop a novel dataset by curating proteins from several families with both ” active ” and ” inactive ” state structures , and model in 527 small molecules with known activating or inactivating function using the program Glide ( Friesner et al. , 2004 ) . Metrics – We formulate this as a binary classification task where we predict whether or not a molecule bound to the structures will be an activator of the protein ’ s function or not . Split – We split complex pairs by protein .
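As a small worked example of the LBA metric defined above, pK = −log(K) is a one-line conversion (taking the logarithm base 10, the usual convention for pK; K is the binding affinity in molar units). The example dissociation constants below are made up for illustration.

```python
import math

def pK(K_molar: float) -> float:
    """Binding-affinity label used for LBA: pK = -log10(K), K in molar units."""
    return -math.log10(K_molar)

print(pK(1e-9))   # a 1 nM binder -> pK = 9.0
print(pK(1e-6))   # a 1 uM binder -> pK = 6.0
```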
This paper presents a large benchmark of machine learning tasks for molecules represented by the 3D coordinates of their atoms. The benchmark is a combination of existing data sets and newly created ones, and covers a variety of applications and tasks, from small molecules to RNA or protein structures, and including classification, regression and ranking tasks. In addition, three deep-learning algorithms are implemented and evaluated on these benchmarks, and compared to state-of-the-art methods that do not use 3D information, and empirically demonstrate the benefit of incorporating 3D information in the networks.
SP:9bc80503d9771b780501b2dacac2cc37e4f5cd95
In this paper, the authors introduce a repository of datasets for several atomistic learning tasks. These datasets are processed into a simple and standardized format. A systematic benchmark with atomistic learning methods is presented, showcasing the value of using 3D atom-level data instead of 1D or 2D features. The authors argue that these datasets will serve as a stepping stone for machine learning researchers interested in developing methods for atomistic learning and rapidly advance this field. The paper also presents the best practices for each of the tasks, as well as the splitting and filtering criteria to ensure generalizability and reproducibility.
SP:9bc80503d9771b780501b2dacac2cc37e4f5cd95
A Unifying Perspective on Neighbor Embeddings along the Attraction-Repulsion Spectrum
1 Introduction . T-distributed stochastic neighbor embedding ( t-SNE ) ( van der Maaten & Hinton , 2008 ) is arguably among the most popular methods for low-dimensional visualizations of complex high-dimensional datasets . It defines pairwise similarities called affinities between points in the high-dimensional space and aims to arrange the points in a low-dimensional space to match these affinities ( Hinton & Roweis , 2003 ) . Affinities decay exponentially with high-dimensional distance , making them infinitesimal for most pairs of points and making the n×n affinity matrix effectively sparse . Efficient implementations of t-SNE suitable for large sample sizes n ( van der Maaten , 2014 ; Linderman et al. , 2019 ) explicitly truncate the affinities and use the k-nearest-neighbor ( kNN ) graph of the data with k n as the input . We use the term neighbor embedding ( NE ) to refer to all dimensionality reduction methods that operate on the kNN graph of the data and aim to preserve neighborhood relationships ( Yang et al. , 2013 ; 2014 ) . A prominent recent example of this class of algorithms is UMAP ( McInnes et al. , 2018 ) , which has become popular in applied fields such as single-cell transcriptomics ( Becht et al. , 2019 ) . It is based on stochastic optimization and typically produces more compact clusters than t-SNE . Another example of neighbor embeddings are force-directed graph layouts ( Noack , 2007 ; 2009 ) , originally developed for graph drawing . One specific algorithm called ForceAtlas2 ( Jacomy et al. , 2014 ) has recently gained popularity in the single-cell transcriptomic community to visualize datasets capturing cells at different stages of development ( Weinreb et al. , 2018 ; 2020 ; Wagner et al. , 2018a ; Tusi et al. , 2018 ; Kanton et al. , 2019 ; Sharma et al. , 2020 ) . Here we provide a unifying account of these algorithms . We studied the spectrum of t-SNE embeddings that are obtained when increasing/decreasing the attractive forces between kNN graph neighbors , thereby changing the balance between attraction and repulsion . This led to a trade-off between faithful representations of continuous and discrete structures ( Figure 1 ) . Remarkably , we found that ForceAtlas2 and UMAP could both be accurately positioned on this spectrum ( Figure 1 ) . For UMAP , we used mathematical analysis and Barnes-Hut re-implementation to show that increased attraction is due to the negative sampling optimisation strategy . 2 Related work . Various trade-offs in t-SNE generalizations have been studied previously ( Yang et al. , 2009 ; Kobak et al. , 2020 ; Venna et al. , 2010 ; Amid et al. , 2015 ; Amid & Warmuth , 2019 ; Narayan et al. , 2015 ; Im et al. , 2018 ) , but our work is the first to study the exaggeration-induced trade-off . Prior work used ‘ early exaggeration ’ only as an optimisation trick ( van der Maaten & Hinton , 2008 ) that allows to separate well-defined clusters ( Linderman & Steinerberger , 2019 ; Arora et al. , 2018 ) . Carreira-Perpinán ( 2010 ) introduced elastic embedding algorithm that has an explicit parameter λ controlling the attraction-repulsion balance . However , that paper suggests slowly increasing λ during optimization , as an optimisation trick similar to the early exaggeration , and does not discuss tradeoffs between high and low values of λ . Our results on UMAP go against the common wisdom on what makes UMAP perform as it does ( McInnes et al. , 2018 ; Becht et al. , 2019 ) . 
No previous work suggested that negative sampling may have a drastic effect on the resulting embedding.
3 Neighbor embeddings. We first cast t-SNE, UMAP, and ForceAtlas2 in a common mathematical framework, using consistent notation and highlighting the similarities between the algorithms, before we investigate the relationships between them empirically and analytically in more detail. We denote the original high-dimensional points as $x_i$ and their low-dimensional positions as $y_i$.
3.1 t-SNE. T-SNE measures similarities between points by affinities $v_{ij}$ and normalized affinities $p_{ij}$:
$$p_{ij} = \frac{v_{ij}}{n}, \qquad v_{ij} = \frac{p_{i|j} + p_{j|i}}{2}, \qquad p_{j|i} = \frac{v_{j|i}}{\sum_{k \neq i} v_{k|i}}, \qquad v_{j|i} = \exp\Big(-\frac{\|x_i - x_j\|^2}{2\sigma_i^2}\Big). \quad (1)$$
For fixed $i$, $p_{j|i}$ is a probability distribution over all points $j \neq i$ (all $p_{i|i}$ are set to zero), and the variance of the Gaussian kernel $\sigma_i^2$ is chosen to yield a pre-specified value of the perplexity of this probability distribution, $P = 2^{H}$, where $H = -\sum_{j \neq i} p_{j|i} \log_2 p_{j|i}$. The affinities $v_{ij}$ are normalized by $n$ for $p_{ij}$ to form a probability distribution on the set of all pairs of points $(i, j)$. Modern implementations (van der Maaten, 2014; Linderman et al., 2019) construct a kNN graph with $k = 3P$ neighbors and only consider affinities between connected nodes as non-zero. The default perplexity value in most implementations is $P = 30$. Similarities in the low-dimensional space are defined as
$$q_{ij} = \frac{w_{ij}}{Z}, \qquad w_{ij} = \frac{1}{1 + d_{ij}^2}, \qquad d_{ij} = \|y_i - y_j\|, \qquad Z = \sum_{k \neq l} w_{kl}, \quad (2)$$
with all $q_{ii}$ set to 0. The points $y_i$ are then rearranged in order to minimise the Kullback-Leibler (KL) divergence $D_{\mathrm{KL}}(\{p_{ij}\} \,\|\, \{q_{ij}\}) = \sum_{i,j} p_{ij} \log(p_{ij}/q_{ij})$ between $p_{ij}$ and $q_{ij}$:
$$\mathcal{L}_{\text{t-SNE}} = -\sum_{i,j} p_{ij} \log \frac{w_{ij}}{Z} = -\sum_{i,j} p_{ij} \log w_{ij} + \log \sum_{i,j} w_{ij}, \quad (3)$$
where we dropped constant terms and took into account that $\sum p_{ij} = 1$. The first term can be interpreted as contributing attractive forces to the gradient while the second term yields repulsive forces. Using $\partial w_{ij}/\partial y_i = -2 w_{ij}^2 (y_i - y_j)$, the gradient, up to a constant factor, can be written as:
$$\frac{\partial \mathcal{L}_{\text{t-SNE}}}{\partial y_i} \sim \sum_j v_{ij} w_{ij} (y_i - y_j) - \frac{n}{Z} \sum_j w_{ij}^2 (y_i - y_j). \quad (4)$$
3.2 Exaggeration in t-SNE. A standard optimisation trick for t-SNE called early exaggeration (van der Maaten & Hinton, 2008; van der Maaten, 2014) is to multiply the first sum in the gradient by a factor $\rho = 12$ during the initial iterations of gradient descent. This increases the attractive forces and allows similar points to gather into clusters more effectively. Carreira-Perpinán (2010) and Linderman & Steinerberger (2019) noticed that the attractive term in the t-SNE loss function is related to the loss function of Laplacian eigenmaps (LE) (Belkin & Niyogi, 2002; Coifman & Lafon, 2006). Indeed, if $\rho \to \infty$, the relative repulsion strength goes to zero and the embedding shrinks to a point with all $w_{ij} \to 1$. This implies that, asymptotically, gradient descent becomes equivalent to Markov chain iterations with the transition matrix closely related to the graph Laplacian $L = D - V$ of the affinity matrix $V = [v_{ij}]$ (here $D$ is a diagonal matrix with the row sums of $V$; see Appendix). The entire embedding shrinks to a single point, but the leading eigenvectors of the Laplacian shrink the slowest. This makes t-SNE with $\rho \to \infty$ produce embeddings very similar to LE, which computes the leading eigenvectors of the normalized Laplacian (see Appendix and Figure 1).
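As a minimal illustration of the attractive and repulsive terms in Eq. (4), and of the exaggeration factor ρ discussed next, the dense NumPy sketch below computes the gradient directly; real implementations use kNN-sparse affinities and Barnes-Hut or interpolation-based approximations instead of this O(n²) version.

```python
import numpy as np

def tsne_gradient(Y, V, rho=1.0):
    """Gradient of (exaggerated) t-SNE up to a constant factor, cf. eqs. (4)-(5).

    Y   : (n, 2) current embedding
    V   : (n, n) symmetric affinities v_ij (zero diagonal)
    rho : exaggeration factor; rho > 1 strengthens attraction relative to repulsion
    """
    n = Y.shape[0]
    diff = Y[:, None, :] - Y[None, :, :]              # y_i - y_j, shape (n, n, 2)
    W = 1.0 / (1.0 + np.sum(diff**2, axis=-1))        # Cauchy kernel w_ij
    np.fill_diagonal(W, 0.0)
    Z = W.sum()
    attraction = np.einsum('ij,ijk->ik', V * W, diff)             # sum_j v_ij w_ij (y_i - y_j)
    repulsion = (n / (rho * Z)) * np.einsum('ij,ijk->ik', W**2, diff)
    return attraction - repulsion

# One plain gradient-descent step would then be:
# Y -= learning_rate * tsne_gradient(Y, V, rho=4.0)
```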
This theoretical finding immediately suggests that it might be interesting to study t-SNE with exaggeration $\rho > 1$ not only as an optimisation trick, but in itself, as an intermediate method between LE and standard t-SNE. The gradient of t-SNE with exaggeration can be written as
$$\frac{\partial \mathcal{L}_{\text{t-SNE}}(\rho)}{\partial y_i} \sim \sum_j v_{ij} w_{ij} (y_i - y_j) - \frac{n}{\rho Z} \sum_j w_{ij}^2 (y_i - y_j) \quad (5)$$
and the corresponding loss function is
$$\mathcal{L}_{\text{t-SNE}}(\rho) = D_{\mathrm{KL}}\big(\{p_{ij}\} \,\|\, \{w_{ij}/Z^{1/\rho}\}\big) = \sum_{i,j} p_{ij} \log \frac{p_{ij}}{w_{ij}/Z^{1/\rho}}. \quad (6)$$
3.3 UMAP. Using the same notation as above, UMAP optimizes the cross-entropy loss between $v_{ij}$ and $w_{ij}$, without normalizing them into probabilities:
$$\mathcal{L}_{\text{UMAP}} = \sum_{i,j} \Big[ v_{ij} \log \frac{v_{ij}}{w_{ij}} + (1 - v_{ij}) \log \frac{1 - v_{ij}}{1 - w_{ij}} \Big], \quad (7)$$
where the $1 - v_{ij}$ term is approximated by 1 as most $v_{ij}$ are 0. Note that UMAP differs from t-SNE in how exactly it defines $v_{ij}$, but this difference is negligible, at least for the data considered here.1 Dropping constant terms, we obtain
$$\mathcal{L}_{\text{UMAP}} \sim -\sum_{i,j} v_{ij} \log w_{ij} - \sum_{i,j} \log(1 - w_{ij}), \quad (8)$$
which is the same loss function as the one introduced earlier by LargeVis (Tang et al., 2016). The first term, corresponding to attractive forces, is the same as in t-SNE, but the second, repulsive, term is different. Taking $w_{ij} = 1/(1 + d_{ij}^2)$ as in t-SNE,2 the UMAP gradient is given by
$$\frac{\partial \mathcal{L}_{\text{UMAP}}}{\partial y_i} \sim \sum_j v_{ij} w_{ij} (y_i - y_j) - \sum_j \frac{1}{d_{ij}^2 + \varepsilon}\, w_{ij} (y_i - y_j), \quad (9)$$
where $\varepsilon = 0.001$ is added to the denominator to prevent numerical problems for $d_{ij} \approx 0$. If $\varepsilon = 1$ (which does not strongly affect the result; Figure S1), the gradient becomes identical to the t-SNE gradient, up to the $n/Z$ factor in front of the repulsive forces. Moreover, UMAP allows using an arbitrary $\gamma$ factor in front of the repulsive forces, which makes it easier to compare the loss functions3:
$$\frac{\partial \mathcal{L}_{\text{UMAP}}(\gamma)}{\partial y_i} \sim \sum_j v_{ij} w_{ij} (y_i - y_j) - \gamma \sum_j \frac{1}{d_{ij}^2 + \varepsilon}\, w_{ij} (y_i - y_j). \quad (10)$$
Whereas it is possible to approximate the full repulsive term with the same techniques as used in t-SNE (van der Maaten, 2014; Linderman et al., 2019), UMAP took a different approach and followed LargeVis in using negative sampling (Mikolov et al., 2013) of repulsive forces: on each gradient descent iteration, only a small number $\nu$ of randomly picked repulsive forces are applied to each point for each of the $\sim k$ attractive forces that it feels. Other repulsive terms are ignored. The default value is $\nu = 5$. The effect of this negative sampling on the resulting embedding has not been studied before.
3.4 ForceAtlas2. Force-directed graph layouts are usually introduced directly via attractive and repulsive forces, even though it is easy to write down a suitable loss function (Noack, 2007). ForceAtlas2 (FA2) has attractive forces proportional to $d_{ij}$ and repulsive forces proportional to $1/d_{ij}$ (Jacomy et al., 2014):
$$\frac{\partial \mathcal{L}_{\text{FA2}}}{\partial y_i} = \sum_j v_{ij} (y_i - y_j) - \sum_j \frac{(h_i + 1)(h_j + 1)}{d_{ij}^2} (y_i - y_j), \quad (11)$$
where $h_i$ denotes the degree of node $i$ in the input graph. This is known as edge repulsion in the graph layout literature (Noack, 2007; 2009) and is important for embedding graphs that have nodes of very different degrees. For symmetrized kNN graphs, $h_i \approx k$, so the $(h_i + 1)(h_j + 1)$ term contributes a roughly constant factor of $\sim k^2$ to the repulsive forces.
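The following is a simplified sketch of a UMAP/LargeVis-style update with negative sampling, following Eqs. (9)-(10); the learning rate, the sampling distribution for negatives, and setting v_ij ≈ 1 per kNN edge are simplifications relative to the actual UMAP implementation.

```python
import numpy as np

def negative_sampling_epoch(Y, edges, lr=1.0, nu=5, gamma=1.0, eps=0.001, rng=None):
    """One pass of UMAP/LargeVis-style updates with negative sampling (cf. eqs. 9-10).

    Y     : (n, 2) embedding, updated in place
    edges : list of (i, j) kNN-graph edges carrying the attractive forces (v_ij ~ 1)
    nu    : number of randomly sampled repulsive interactions per attractive update
    """
    rng = rng or np.random.default_rng(0)
    n = Y.shape[0]
    for i, j in edges:
        d2 = np.sum((Y[i] - Y[j]) ** 2)
        w = 1.0 / (1.0 + d2)
        Y[i] -= lr * w * (Y[i] - Y[j])           # attractive force v_ij * w_ij * (y_i - y_j)
        for _ in range(nu):                       # negative sampling: nu random repulsions
            k = rng.integers(n)
            if k == i:
                continue
            d2 = np.sum((Y[i] - Y[k]) ** 2)
            w = 1.0 / (1.0 + d2)
            Y[i] += lr * gamma * w / (d2 + eps) * (Y[i] - Y[k])   # repulsive force
    return Y
```

Because only ν of the n repulsive terms are applied per attractive update, the effective repulsion is much weaker than in the full gradient, which is the mechanism behind UMAP's increased attraction discussed in the paper.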
The authors study a number of neighbor embedding methods in terms of attractive and repulsive forces. The authors show that t-SNE, UMAP, FA2, and LE can be (approximately) unified as a common approach that uses different levels of trade-off between these two terms. They also discuss the increased attraction in UMAP as a result of negative sampling.
SP:d616f1a6c241f03f2ddf2d171ecfb7689d61857d
A Unifying Perspective on Neighbor Embeddings along the Attraction-Repulsion Spectrum
1 Introduction . T-distributed stochastic neighbor embedding ( t-SNE ) ( van der Maaten & Hinton , 2008 ) is arguably among the most popular methods for low-dimensional visualizations of complex high-dimensional datasets . It defines pairwise similarities called affinities between points in the high-dimensional space and aims to arrange the points in a low-dimensional space to match these affinities ( Hinton & Roweis , 2003 ) . Affinities decay exponentially with high-dimensional distance , making them infinitesimal for most pairs of points and making the n×n affinity matrix effectively sparse . Efficient implementations of t-SNE suitable for large sample sizes n ( van der Maaten , 2014 ; Linderman et al. , 2019 ) explicitly truncate the affinities and use the k-nearest-neighbor ( kNN ) graph of the data with k n as the input . We use the term neighbor embedding ( NE ) to refer to all dimensionality reduction methods that operate on the kNN graph of the data and aim to preserve neighborhood relationships ( Yang et al. , 2013 ; 2014 ) . A prominent recent example of this class of algorithms is UMAP ( McInnes et al. , 2018 ) , which has become popular in applied fields such as single-cell transcriptomics ( Becht et al. , 2019 ) . It is based on stochastic optimization and typically produces more compact clusters than t-SNE . Another example of neighbor embeddings are force-directed graph layouts ( Noack , 2007 ; 2009 ) , originally developed for graph drawing . One specific algorithm called ForceAtlas2 ( Jacomy et al. , 2014 ) has recently gained popularity in the single-cell transcriptomic community to visualize datasets capturing cells at different stages of development ( Weinreb et al. , 2018 ; 2020 ; Wagner et al. , 2018a ; Tusi et al. , 2018 ; Kanton et al. , 2019 ; Sharma et al. , 2020 ) . Here we provide a unifying account of these algorithms . We studied the spectrum of t-SNE embeddings that are obtained when increasing/decreasing the attractive forces between kNN graph neighbors , thereby changing the balance between attraction and repulsion . This led to a trade-off between faithful representations of continuous and discrete structures ( Figure 1 ) . Remarkably , we found that ForceAtlas2 and UMAP could both be accurately positioned on this spectrum ( Figure 1 ) . For UMAP , we used mathematical analysis and Barnes-Hut re-implementation to show that increased attraction is due to the negative sampling optimisation strategy . 2 Related work . Various trade-offs in t-SNE generalizations have been studied previously ( Yang et al. , 2009 ; Kobak et al. , 2020 ; Venna et al. , 2010 ; Amid et al. , 2015 ; Amid & Warmuth , 2019 ; Narayan et al. , 2015 ; Im et al. , 2018 ) , but our work is the first to study the exaggeration-induced trade-off . Prior work used ‘ early exaggeration ’ only as an optimisation trick ( van der Maaten & Hinton , 2008 ) that allows to separate well-defined clusters ( Linderman & Steinerberger , 2019 ; Arora et al. , 2018 ) . Carreira-Perpinán ( 2010 ) introduced elastic embedding algorithm that has an explicit parameter λ controlling the attraction-repulsion balance . However , that paper suggests slowly increasing λ during optimization , as an optimisation trick similar to the early exaggeration , and does not discuss tradeoffs between high and low values of λ . Our results on UMAP go against the common wisdom on what makes UMAP perform as it does ( McInnes et al. , 2018 ; Becht et al. , 2019 ) . 
No previous work suggested that negative sampling may have a drastic effect on the resulting embedding . 3 Neighbor embeddings . We first cast t-SNE , UMAP , and ForceAtlas2 in a common mathematical framework , using consistent notation and highlighting the similarities between the algorithms , before we investigate the relationships between them empirically and analytically in more detail . We denote the original high-dimensional points as $x_i$ and their low-dimensional positions as $y_i$ . 3.1 t-SNE . T-SNE measures similarities between points by affinities $v_{ij}$ and normalized affinities $p_{ij}$ :

$$p_{ij} = \frac{v_{ij}}{n} , \quad v_{ij} = \frac{p_{i|j} + p_{j|i}}{2} , \quad p_{j|i} = \frac{v_{j|i}}{\sum_{k \ne i} v_{k|i}} , \quad v_{j|i} = \exp\!\Big( -\frac{\|x_i - x_j\|^2}{2\sigma_i^2} \Big) . \tag{1}$$

For fixed $i$ , $p_{j|i}$ is a probability distribution over all points $j \ne i$ ( all $p_{i|i}$ are set to zero ) , and the variance of the Gaussian kernel $\sigma_i^2$ is chosen to yield a pre-specified value of the perplexity of this probability distribution , $\mathcal{P} = 2^{H}$ , where $H = -\sum_{j \ne i} p_{j|i} \log_2 p_{j|i}$ . The affinities $v_{ij}$ are normalized by $n$ for $p_{ij}$ to form a probability distribution on the set of all pairs of points $( i , j )$ . Modern implementations ( van der Maaten , 2014 ; Linderman et al. , 2019 ) construct a kNN graph with $k = 3\mathcal{P}$ neighbors and only consider affinities between connected nodes as non-zero . The default perplexity value in most implementations is $\mathcal{P} = 30$ . Similarities in the low-dimensional space are defined as

$$q_{ij} = \frac{w_{ij}}{Z} , \quad w_{ij} = \frac{1}{1 + d_{ij}^2} , \quad d_{ij} = \|y_i - y_j\| , \quad Z = \sum_{k \ne l} w_{kl} , \tag{2}$$

with all $q_{ii}$ set to 0 . The points $y_i$ are then rearranged in order to minimise the Kullback-Leibler ( KL ) divergence $D_{\mathrm{KL}}\big( \{ p_{ij} \} \,\|\, \{ q_{ij} \} \big) = \sum_{i , j} p_{ij} \log ( p_{ij} / q_{ij} )$ between $p_{ij}$ and $q_{ij}$ :

$$\mathcal{L}_{\text{t-SNE}} = -\sum_{i , j} p_{ij} \log \frac{w_{ij}}{Z} = -\sum_{i , j} p_{ij} \log w_{ij} + \log \sum_{i , j} w_{ij} , \tag{3}$$

where we dropped constant terms and took into account that $\sum p_{ij} = 1$ . The first term can be interpreted as contributing attractive forces to the gradient while the second term yields repulsive forces . Using $\partial w_{ij} / \partial y_i = -2 w_{ij}^2 ( y_i - y_j )$ , the gradient , up to a constant factor , can be written as :

$$\frac{\partial \mathcal{L}_{\text{t-SNE}}}{\partial y_i} \sim \sum_j v_{ij} w_{ij} ( y_i - y_j ) - \frac{n}{Z} \sum_j w_{ij}^2 ( y_i - y_j ) . \tag{4}$$

3.2 Exaggeration in t-SNE . A standard optimisation trick for t-SNE called early exaggeration ( van der Maaten & Hinton , 2008 ; van der Maaten , 2014 ) is to multiply the first sum in the gradient by a factor $\rho = 12$ during the initial iterations of gradient descent . This increases the attractive forces and allows similar points to gather into clusters more effectively . Carreira-Perpinán ( 2010 ) and Linderman & Steinerberger ( 2019 ) noticed that the attractive term in the t-SNE loss function is related to the loss function of Laplacian eigenmaps ( LE ) ( Belkin & Niyogi , 2002 ; Coifman & Lafon , 2006 ) . Indeed , if $\rho \to \infty$ , the relative repulsion strength goes to zero and the embedding shrinks to a point with all $w_{ij} \to 1$ . This implies that , asymptotically , gradient descent becomes equivalent to Markov chain iterations with the transition matrix closely related to the graph Laplacian $L = D - V$ of the affinity matrix $V = [ v_{ij} ]$ ( here $D$ is a diagonal matrix with the row sums of $V$ ; see Appendix ) . The entire embedding shrinks to a single point , but the leading eigenvectors of the Laplacian shrink the slowest . This makes t-SNE with $\rho \to \infty$ produce embeddings very similar to LE , which computes the leading eigenvectors of the normalized Laplacian ( see Appendix and Figure 1 ) .
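To make Eqs. (1)-(4) concrete, the following is a didactic, dense O(n^2) sketch in Python of the perplexity-calibrated affinities and the resulting attraction-repulsion gradient. It is not the tree- or interpolation-based implementations cited above; the bisection schedule, variable names, and the exaggeration argument `rho` (anticipating Section 3.2) are illustrative choices, not the paper's code.

```python
# Didactic O(n^2) sketch of Eqs. (1)-(4); all constants and names are illustrative.
import numpy as np

def conditional_affinities(X, perplexity=30.0, n_steps=50, tol=1e-3):
    """v_{j|i} of Eq. (1), with sigma_i tuned so that 2^H(p_{.|i}) matches the perplexity."""
    n = X.shape[0]
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    P = np.zeros((n, n))
    for i in range(n):
        beta, lo, hi = 1.0, 0.0, np.inf              # beta = 1 / (2 sigma_i^2)
        for _ in range(n_steps):
            p = np.exp(-beta * D2[i]); p[i] = 0.0
            p /= max(p.sum(), 1e-12)
            H = -np.sum(p[p > 0] * np.log2(p[p > 0]))
            if abs(2.0 ** H - perplexity) < tol:
                break
            if 2.0 ** H > perplexity:                # perplexity too high -> shrink sigma_i
                lo, beta = beta, (beta * 2.0 if np.isinf(hi) else 0.5 * (beta + hi))
            else:                                    # perplexity too low -> grow sigma_i
                hi, beta = beta, 0.5 * (lo + beta)
        P[i] = p
    return P

def tsne_gradient(Y, V, rho=1.0):
    """Attraction-repulsion gradient of Eq. (4); rho > 1 gives the exaggerated Eq. (5)."""
    n = Y.shape[0]
    D2 = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    W = 1.0 / (1.0 + D2); np.fill_diagonal(W, 0.0)
    diff = Y[:, None, :] - Y[None, :, :]                         # y_i - y_j
    attraction = np.einsum('ij,ijd->id', V * W, diff)
    repulsion = (n / (rho * W.sum())) * np.einsum('ij,ijd->id', W ** 2, diff)
    return attraction - repulsion

# V would be the symmetrized v_ij = (v_{j|i} + v_{i|j}) / 2; a loop Y -= lr * tsne_gradient(Y, V)
# sketches the optimisation (real implementations add momentum and early exaggeration).
```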
This theoretical finding immediately suggests that it might be interesting to study t-SNE with exaggeration $\rho > 1$ not only as an optimisation trick , but in itself , as an intermediate method between LE and standard t-SNE . The gradient of t-SNE with exaggeration can be written as

$$\frac{\partial \mathcal{L}_{\text{t-SNE}} ( \rho )}{\partial y_i} \sim \sum_j v_{ij} w_{ij} ( y_i - y_j ) - \frac{n}{\rho Z} \sum_j w_{ij}^2 ( y_i - y_j ) \tag{5}$$

and the corresponding loss function is

$$\mathcal{L}_{\text{t-SNE}} ( \rho ) = D_{\mathrm{KL}}\big( \{ p_{ij} \} \,\|\, \{ w_{ij} / Z^{1/\rho} \} \big) = \sum_{i , j} p_{ij} \log \frac{p_{ij}}{w_{ij} / Z^{1/\rho}} . \tag{6}$$

3.3 UMAP . Using the same notation as above , UMAP optimizes the cross-entropy loss between $v_{ij}$ and $w_{ij}$ , without normalizing them into probabilities :

$$\mathcal{L}_{\text{UMAP}} = \sum_{i , j} \Big[ v_{ij} \log \frac{v_{ij}}{w_{ij}} + ( 1 - v_{ij} ) \log \frac{1 - v_{ij}}{1 - w_{ij}} \Big] , \tag{7}$$

where the $1 - v_{ij}$ term is approximated by 1 as most $v_{ij}$ are 0 . Note that UMAP differs from t-SNE in how exactly it defines $v_{ij}$ but this difference is negligible , at least for the data considered here.1 Dropping constant terms , we obtain

$$\mathcal{L}_{\text{UMAP}} \sim -\sum_{i , j} v_{ij} \log w_{ij} - \sum_{i , j} \log ( 1 - w_{ij} ) , \tag{8}$$

which is the same loss function as the one introduced earlier by LargeVis ( Tang et al. , 2016 ) . The first term , corresponding to attractive forces , is the same as in t-SNE , but the second , repulsive , term is different . Taking $w_{ij} = 1 / ( 1 + d_{ij}^2 )$ as in t-SNE,2 the UMAP gradient is given by

$$\frac{\partial \mathcal{L}_{\text{UMAP}}}{\partial y_i} \sim \sum_j v_{ij} w_{ij} ( y_i - y_j ) - \sum_j \frac{1}{d_{ij}^2 + \varepsilon} w_{ij} ( y_i - y_j ) , \tag{9}$$

where $\varepsilon = 0.001$ is added to the denominator to prevent numerical problems for $d_{ij} \approx 0$ . If $\varepsilon = 1$ ( which does not strongly affect the result ; Figure S1 ) , the gradient becomes identical to the t-SNE gradient , up to the $n / Z$ factor in front of the repulsive forces . Moreover , UMAP allows to use an arbitrary $\gamma$ factor in front of the repulsive forces , which makes it easier to compare the loss functions3 :

$$\frac{\partial \mathcal{L}_{\text{UMAP}} ( \gamma )}{\partial y_i} \sim \sum_j v_{ij} w_{ij} ( y_i - y_j ) - \gamma \sum_j \frac{1}{d_{ij}^2 + \varepsilon} w_{ij} ( y_i - y_j ) . \tag{10}$$

Whereas it is possible to approximate the full repulsive term with the same techniques as used in t-SNE ( van der Maaten , 2014 ; Linderman et al. , 2019 ) , UMAP took a different approach and followed LargeVis in using negative sampling ( Mikolov et al. , 2013 ) of repulsive forces : on each gradient descent iteration , only a small number $\nu$ of randomly picked repulsive forces are applied to each point for each of the $\sim k$ attractive forces that it feels . Other repulsive terms are ignored . The default value is $\nu = 5$ . The effect of this negative sampling on the resulting embedding has not been studied before . 3.4 ForceAtlas2 . Force-directed graph layouts are usually introduced directly via attractive and repulsive forces , even though it is easy to write down a suitable loss function ( Noack , 2007 ) . ForceAtlas2 ( FA2 ) has attractive forces proportional to $d_{ij}$ and repulsive forces proportional to $1 / d_{ij}$ ( Jacomy et al. , 2014 ) :

$$\frac{\partial \mathcal{L}_{\text{FA2}}}{\partial y_i} = \sum_j v_{ij} ( y_i - y_j ) - \sum_j \frac{( h_i + 1 ) ( h_j + 1 )}{d_{ij}^2} ( y_i - y_j ) , \tag{11}$$

where $h_i$ denotes the degree of node $i$ in the input graph . This is known as edge repulsion in the graph layout literature ( Noack , 2007 ; 2009 ) and is important for embedding graphs that have nodes of very different degrees . For symmetrized kNN graphs , $h_i \approx k$ , so the $( h_i + 1 ) ( h_j + 1 )$ term contributes a roughly constant factor of $\sim k^2$ to the repulsive forces .
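The negative-sampling strategy described above can be caricatured with a short sketch. This is not UMAP's actual implementation (which has its own sampling schedule, learning-rate decay, and v_ij weights); it is an assumption-laden illustration of why applying only nu repulsive partners per attractive edge weakens the effective repulsion relative to Eq. (10).

```python
# Simplified caricature (not UMAP's code) of one stochastic update with negative sampling:
# for every attractive edge (i, j) of the kNN graph, only `nu` randomly chosen points exert
# a repulsive push on i. All constants and names are illustrative.
import numpy as np

def negative_sampling_epoch(Y, edges, lr=0.1, nu=5, eps=0.001, gamma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    n = Y.shape[0]
    for i, j in edges:                                     # kNN-graph edges; v_ij treated as 1
        w = 1.0 / (1.0 + np.sum((Y[i] - Y[j]) ** 2))
        Y[i] -= lr * w * (Y[i] - Y[j])                     # attractive term of Eq. (9)
        for m in rng.integers(0, n, size=nu):              # nu sampled repulsive partners
            if m == i:
                continue
            d2 = np.sum((Y[i] - Y[m]) ** 2)
            wm = 1.0 / (1.0 + d2)
            Y[i] += lr * gamma * wm / (d2 + eps) * (Y[i] - Y[m])   # sampled repulsion
    return Y

# Each point now receives only about nu * k repulsive pushes per epoch instead of n, so the
# effective repulsion is far weaker than the full sum in Eq. (10); the text traces UMAP's
# increased attraction (and more compact clusters) to exactly this effect.
```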
In this paper, a unified view of embedding methods for visualization is presented. The main message is that Laplacian eigenmaps and t-SNE are governed by a single formula, and that the difference between them can be seen as a change in a single hyperparameter value. Two further embedding methods --- UMAP and ForceAtlas2 --- can also be approximately recovered within this framework. The relationships between these methods are visualized on a few benchmark data sets.
SP:d616f1a6c241f03f2ddf2d171ecfb7689d61857d
Leveraging affinity cycle consistency to isolate factors of variation in learned representations
Identifying the dominant factors of variation across a dataset is a central goal of representation learning . Generative approaches lead to descriptions that are rich enough to recreate the data , but often only a partial description is needed to complete downstream tasks or to gain insights about the dataset . In this work , we operate in the setting where limited information is known about the data in the form of groupings , or set membership , and the task is to learn representations which isolate the factors of variation that are common across the groupings . Our key insight is the use of affinity cycle consistency ( ACC ) between the learned embeddings of images belonging to different sets . In contrast to prior work , we demonstrate that ACC can be applied with significantly fewer constraints on the factors of variation , across a remarkably broad range of settings , and without any supervision for half of the data . By curating datasets from Shapes3D , we quantify the effectiveness of ACC through mutual information between the learned representations and the known generative factors . In addition , we demonstrate the applicability of ACC to the tasks of digit style isolation and synthetic-to-real object pose transfer and compare to generative approaches utilizing the same supervision . 1 INTRODUCTION . Isolating desired factors of variation in a dataset requires learning representations that retain information only pertaining to those desired factors while suppressing or being invariant to remaining “ nuisance ” factors . This is a fundamental task in representation learning which is of great practical importance for numerous applications . For example , image retrieval based on certain specific attributes ( e.g . object pose , shape , or color ) requires representations that have effectively isolated those particular factors . In designing approaches for such a task , the possibilities for the structure of the learned representation are inextricably linked to the types of supervision available . As an example , complete supervision of the desired factors of variation provides maximum flexibility in obtaining fully disentangled representations , where there is a simple and interpretable mapping between elements and the factors of the variation ( Bengio et al. , 2013 ) . However , such supervision is unrealistic for most tasks since many common factors of variation in image data , such as 3D pose or lighting , are difficult to annotate at scale in real-world settings . At the other extreme , unsupervised representation learning makes the fewest limiting assumptions about the data but does not allow control over the discovered factors of variation . The challenge is in designing a learning process that best utilizes the supervision that can be realistically obtained in different real-world scenarios . In this paper , we consider weak supervision in the form of set membership ( Kulkarni et al. , 2015 ; Denton & Birodkar , 2017 ) . Specifically , this weak set supervision assumes only that we can curate subsets of training data where only the desired factors of variation to be isolated vary , and the remaining nuisance factors are fixed to same values . We will refer to the factors that vary within a set as the active factors , and those that have fixed and same values as inactive . To illustrate this set supervision , consider the problem of isolating 3D object pose from images belonging to an object category ( say , car images ) . 
The weak set supervision assumption can be satisfied by simply imaging each object from multiple viewpoints . Note , this would not require consistency or correspondence in viewpoints across object instances , nor any target pose values attached to the images . In practice , collecting multiple views of an object in a static environment is much more reasonable than collecting views of different objects with identical poses . In this paper we propose a novel approach for isolating factors of variation by formulating the problem as one of finding alignment between two sets with some common active factors of variation . Considering the application of synthetic-to-real object pose transfer , Figure 1 illustrates two sample sets of car images where pose is the only active factor in the first set P ( I|d0 ) of synthetic car images and the second set P ( I ) is comprised of both real and synthetic car images . Given these sets , without any other supervision , the aim is to automatically learn the embeddings that can find meaningful correspondences between the points in the two sets . The key idea behind our approach is a novel utilization of cycle consistency . A cycle consistent mapping can be described broadly as some non-trivial mapping that brings an input back to itself , and in our case the mapping is between sets of points in embedding space . We denote our application of cycle consistency as affinity cycle consistency ( ACC ) as it uses a differentiable version of soft nearest neighbors , since the correspondences forming the cycle are not known a priori ( footnote 1 : this specific loss has been used previously in Dwibedi et al . ( 2019 ) to align different videos of the same action , and here we show this loss is much more general ; we term it affinity cycle consistency , as opposed to the prior work ' s terminology , temporal cycle consistency , to indicate as much ) . Further , no explicit pairwise correspondence between the input sets is needed ; it is found by the loss . We posit that this process of finding correspondences is crucial to isolating the desired factors of variation : to match across sets , the representations must ignore commonality within a set ( the inactive factors ) and focus on the active factors common to both sets . For example , ACC-learned embeddings from the two sets of car images in Figure 1 can isolate the object pose factor as that is the common active factor across both sets . We also show how our ACC model can be generalized to the partial set supervision setting : ACC can learn to isolate factors of variation even when set supervision is provided for only one set , while the second set is virtually unrestricted . This has practical importance as it allows us to integrate unsupervised data during training . In Section 4.3 we show how this process can be applied to isolate 3D pose in real images without ever seeing any supervised real images during training . In the following two sections we cover the related work and formally introduce our ACC method . Given the novelty of our approach for isolating factors of variation , we present a progression of experiments to develop an intuition for the technique as it operates in different scenarios . In Section 4.1 we evaluate ACC in various settings using the synthetic Shapes3D dataset where the latent factor values are known , allowing a quantitative analysis . Later , in Section 4.2 we demonstrate the use of ACC in isolating handwritten digit style from its content ( class id ) . In Section 4.3 , we show how ACC can be applied in its most general form to isolate 3D object pose in real images with a training process that combines a collection of set-supervised synthetic data with unsupervised real images . We conclude with a discussion and analysis .
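A minimal sketch of the affinity cycle consistency idea, assuming a cross-entropy "cycle-back" objective in the spirit of Dwibedi et al. (2019); the paper's exact loss, temperature, and similarity function may differ, so the names and choices below are illustrative only.

```python
# Minimal sketch (our reading of the idea, not the authors' code): embed both sets with the
# same encoder, take the soft nearest neighbour of each point of set A inside set B, then
# require that cycling back lands on the point we started from.
import torch
import torch.nn.functional as F

def affinity_cycle_consistency_loss(za, zb, tau=0.1):
    """za: (NA, d) embeddings of set A; zb: (NB, d) embeddings of set B; tau: temperature."""
    sim_ab = -torch.cdist(za, zb) / tau                  # negative distances as logits
    alpha = sim_ab.softmax(dim=1)
    nn_in_b = alpha @ zb                                 # soft neighbour in B, one per a_i
    sim_ba = -torch.cdist(nn_in_b, za) / tau             # cycle back to A
    targets = torch.arange(za.size(0), device=za.device)
    return F.cross_entropy(sim_ba, targets)              # each cycle should return to a_i

# Usage sketch: za = encoder(batch_from_set_A); zb = encoder(batch_from_set_B)
# loss = affinity_cycle_consistency_loss(za, zb); loss.backward()
# No pairwise correspondence between the sets is given; matching emerges from the loss.
```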
2 RELATED WORK . Disentangled representations . Most approaches toward disentangled representations are unsupervised , and are generally based on generative modeling frameworks such as variational autoencoders ( Kingma & Welling , 2014 ) or generative adversarial networks ( Goodfellow et al. , 2014 ) . The VAE is a latent variable model that encourages disentanglement through its isotropic Gaussian prior , which is a factorized distribution . Numerous variations of the VAE have been proposed to further disentanglement , and these include β-VAE ( Higgins et al. , 2017 ) , β-TCVAE ( Chen et al. , 2018 ) , FactorVAE ( Kim & Mnih , 2018 ) , DIP-VAE ( Kumar et al. , 2018 ) , JointVAE ( Dupont , 2018 ) , and ML-VAE ( Bouchacourt et al. , 2018 ) . InfoGAN ( Chen et al. , 2016 ) encourages an interpretable latent representation by maximizing mutual information between the input and a small subset of latent variables . In Hu et al . ( 2018 ) adversarial training is combined with mixing autoencoders . In Locatello et al . ( 2019 ) it is shown that true unsupervised disentanglement is impossible in generative models , and inductive biases or implicit supervision must be exploited . Supervision has been incorporated in different ways . Graphical model structures are integrated into the encoder/decoder of a VAE to allow for partial supervision ( Siddharth et al. , 2017 ) . In Sanchez et al . ( 2020 ) disentanglement without generative modeling is proposed employing similar set supervision , but it requires sequential training to learn all the factors of variation so that those varying across a set may be encoded . In contrast to these approaches , ACC produces entangled representations which directly target and isolate factors of variation complementary to those for which the set supervision is known . Cycle consistency . Often , cycle consistency has been used as a constraint for establishing point correspondences on images ( Zhou et al. , 2016 ; Oron et al. , 2016 ) or 3D point clouds ( Yang et al. , 2020 ; Navaneet et al. , 2020 ) . In a different setting , the time window between the image frames of Atari games can be learned using a discrete version of cycle consistency ( Aytar et al. , 2018 ) . In contrast to Aytar et al . ( 2018 ) , we use a discriminative approach and do not recover a disentangled representation . Cycle consistency has also been used in disentangling factors of variation with variational autoencoders using weak supervision in the form of set supervision ( Jha et al. , 2018 ) . The most closely related work to ours is the work on temporal cycle consistency ( TCC ) ( Dwibedi et al. , 2019 ) , where a differentiable soft nearest-neighbor loss is used to find correspondences across time in multiple videos of the same action . Our approach differs in two key ways . First , we generalize the approach to a broader class of problems where the data does not provide an ordering and is less likely to permit a 1-1 correspondence between sets .
Second , and most significantly , we show that the fundamental constraint of TCC , that both sets must share common active factors of variation , can be relaxed to allow one unconstrained set ( no inactive factors ) , which allows for incorporating training data with no set supervision . Weak supervision . For recovering semantics , there exist several self-supervision methods ( Chen et al. , 2020 ; Misra & van der Maaten , 2019 ) that rely on elaborate data augmentation and self-supervision tasks such as jigsaw and rotations . Augmentation can be effective if it is known how factors of variation act on the image space , but this is only true for some fully observable factors such as 2D position and orientation . This restriction similarly applies to models that bake transformations into the architecture , as with spatial transformers ( Jaderberg et al. , 2015 ) or capsules ( Hinton et al. , 2011 ) . Often , we use fully self-supervised methods for recovering semantics necessary for downstream tasks . In this work , we are interested in isolating geometric factors of variation such as pose that are difficult to annotate . In order to do this , we rely on set supervision ( Kulkarni et al. , 2015 ; Mathieu et al. , 2016 ; Cohen & Welling , 2015 ) . In Cohen & Welling ( 2015 ) the latent representations of the training images are optimized , limiting view synthesis to objects seen at training time . In practice , getting full supervision with ground truth parameters for geometric entities such as lighting and pose is challenging . On the other hand , one can often capture videos where we fix one or more of these factors of variation , and allow the others to vary . This form of set supervision is a good tradeoff between the labor-intensive manual annotation required for fully supervised methods , and fully self-supervised methods . 3D pose aware representations . An important factor of variation for many image tasks is 3D object pose , and not surprisingly there have been attempts to learn representations which encode this property . The SO ( 3 ) -VAE ( Falorsi et al. , 2018 ) places a uniform prior on the 3D rotation group SO ( 3 ) , which allows learning manifold-valued latent variables . Latent representations that are pose-equivariant have been proposed in Worrall et al . ( 2017 ) ; Rhodin et al . ( 2018 ) , and this allows for pose to be directly transformed in the latent space . Generative techniques for pose disentanglement include Kulkarni et al . ( 2015 ) ; Yang et al . ( 2015a ) . In Kulkarni et al . ( 2015 ) a simplistic experimental setting is considered ( fewer factors of variation with synthetic or grayscale images , and disentangling only the 1D azimuth angle ) .
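The abstract mentions quantifying isolation through the mutual information between the learned representations and the known generative factors (on Shapes3D, Section 4.1). A rough sketch of such an evaluation, under the assumption that a nonparametric estimator such as scikit-learn's `mutual_info_classif` is acceptable, could look as follows; the estimator actually used in the paper is not specified here.

```python
# Hypothetical evaluation sketch: estimate MI between each embedding dimension and each known
# (discrete) generative factor; MI concentrated on the targeted factor, and low MI with the
# nuisance factors, would indicate successful isolation.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mi_profile(embeddings, factors):
    """embeddings: (N, d) array; factors: dict mapping factor name -> (N,) integer labels."""
    return {name: mutual_info_classif(embeddings, labels, discrete_features=False)
            for name, labels in factors.items()}   # each value: MI per embedding dimension

# Usage: scores = mi_profile(Z, {"pose": pose_ids, "color": color_ids, "shape": shape_ids})
```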
This paper applies a weakly-supervised learning approach to identify factors of object pose in an image dataset. The core idea is to introduce two sets of images. The first set is the reference data set, with objects grouped according to active/inactive pose constraints. This set is used to provide weak supervision for pose identification. The second set is the probe set. It does not necessarily require pose-based grouping of objects. An affinity cycle consistency loss is set up to automatically map objects of similar active poses between the two image sets (objects of similar poses are supposed to be the nearest neighbors in the learned embedding space). The experimental study verifies the validity of the proposed factor isolation algorithm.
SP:b65c6ca0d33243de3419efafb5f102512960d994
The paper presents an approach to isolate factors of variation using weak supervision in the form of group labels. The proposed method, Affinity Cycle Consistency (ACC), is claimed to work with these group labels, which are weaker than the more common one-factor-per-group type of labeling. An important aspect of this approach is that it does not attempt to disentangle the factors of variation, but only to capture (or isolate) them in the latent space.
SP:b65c6ca0d33243de3419efafb5f102512960d994
X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback
1 INTRODUCTION . Recent advances in user interfaces have enabled people with sensorimotor impairments to more effectively communicate their intent to machines . For example , Ward et al . ( 2000 ) enable users to type characters using an eye gaze tracker instead of a keyboard , and Willett et al . ( 2020 ) enable a paralyzed human patient to type using a brain implant that records neural activity . The main challenge in building such interfaces is translating high-dimensional , continuous user input into desired actions . Standard methods typically calibrate the interface on predefined training tasks for which expert demonstrations are available , then deploy the trained interface . Unfortunately , this does not enable the interface to improve with use or adapt to distributional shift in the user inputs . In this paper , we focus on the problem of assistive typing : helping a user select words or characters without access to a keyboard , using eye gaze inputs ( Ward et al. , 2000 ) ; handwriting inputs ( see Figure 7 in the appendix ) , which can be easier to provide than direct keystrokes ( Willett et al. , 2020 ) ; or inputs from an electrocorticography-based brain implant ( Leuthardt et al. , 2004 ; Silversmith et al. , 2020 ) . To enable any existing , default interface to continually adapt to the user , we train a model using online learning from user feedback . The key insight is that the user provides feedback on the interface ’ s actions via backspaces , which indicate that the interface did not perform the desired action in response to a given input . By learning from this naturally-occurring feedback signal instead of an explicit label , we do not require any additional effort from the user to improve the interface . Furthermore , because our method is applied on top of the user ’ s default interface , our approach is complementary to other work that develops state-of-the-art , domain-specific methods for problems like gaze tracking and handwriting recognition . Figure 1 describes our algorithm : we initialize our model using offline data generated by the default interface , deploy our interface as an augmentation to the default interface , collect online feedback , and update our model . We formulate assistive typing as an online decision-making problem , in which the interface receives observations of user inputs , performs actions that select words or characters , and receives a reward signal that is automatically constructed from the user ’ s backspaces . To improve the default interface ’ s actions , we fit a neural network reward model that predicts the reward signal given the user ’ s input and the interface ’ s action . Upon observing a user input , our interface uses the trained reward model to update the prior policy given by the default interface to a posterior policy conditioned on optimality , then samples an action from this posterior ( see Figure 1 ) . We call this method x-to-text ( X2T ) , where x refers to the arbitrary type of user input ; e.g. , eye gaze or brain activity . Our primary contribution is the X2T algorithm for continual learning of a communication interface from user feedback . We primarily evaluate X2T through an online user study with 12 participants who use a webcam-based gaze tracking system to select words from a display . To run ablation experiments that would be impractical in the online study , we also conduct an observational study with 60 users who use a tablet and stylus to draw pictures of individual characters . 
The results show that X2T quickly learns to map input images of eye gaze or character drawings to discrete word or character selections . By learning from online feedback , X2T improves upon a default interface that is only trained once using supervised learning and , as a result , suffers from distribution shift ( e.g. , caused by changes in the user ’ s head position and lighting over time in the gaze experiment ) . X2T automatically overcomes calibration problems with the gaze tracker by adapting to the miscalibrations over time , without the need for explicit re-calibration . Furthermore , X2T leverages offline data generated by the default interface to accelerate online learning , stimulates co-adaptation from the user in the online study , and personalizes the interface to the handwriting style of each user in the observational study . 2 LEARNING TO INFER INTENT FROM USER INPUT . In our problem setting , the user can not directly perform actions ; e.g. , due to a sensorimotor impairment . Instead , the user relies on an assistive typing interface , where the user ’ s intended action is inferred from available inputs such as webcam images of eye gaze or handwritten character drawings . As such , we formulate assistive typing as a contextual bandit problem ( Langford & Zhang , 2008 ; Yue & Joachims , 2009 ; Li et al. , 2010 ; Lan & Baraniuk , 2016 ; Gordon et al. , 2019 ) . At each timestep , the user provides the interface with a context x ∈ X , where X is the set of possible user inputs ( e.g. , webcam images ) . The interface then performs an action u ∈ U , where U is the set of possible actions ( e.g. , word selections ) . We assume the true reward function is unknown , since the user can not directly specify their desired task ( e.g. , writing an email or filling out a form ) . Instead of eliciting a reward function or explicit reward signal from the user , we automatically construct a reward signal from the user ’ s backspaces . The key idea is to treat backspaces as feedback on the accuracy of the interface ’ s actions . Our approach to this problem is outlined in Figure 1 . We aim to minimize expected regret , which , in our setting , is characterized by the total number of backspaces throughout the lifetime of the interface . While a number of contextual bandit algorithms with lower regret bounds have been proposed in prior work ( Lattimore & Szepesvári , 2020 ) , we use a simple strategy that works well in our experiments : train a neural network reward model to predict the reward given the user ’ s input and the interface ’ s action , and select actions with probability proportional to their predicted optimality . Our approach is similar to prior work on deep contextual multi-armed bandits ( Collier & Llorens , 2018 ) and NeuralUCB ( Zhou et al. , 2019 ) , except that instead of using Thompson sampling or UCB to balance exploration and exploitation , we use a simple , stochastic policy . 2.1 MODELING USER BEHAVIOR AND FEEDBACK . Unlike in the standard multi-armed bandit framework , we do not get to observe an extrinsic reward signal that captures the underlying task that the user aims to perform . To address this issue , we infer rewards from naturally-occurring user behavior . In particular , in the assistive typing setting , we take advantage of the fact that we can observe when the user backspaces ; i.e. , when they delete the most recent word or character typed by the interface . 
To infer rewards from backspaces , we make three assumptions about user behavior : ( 1 ) the user can perform a backspace action independently of our interface ( e.g. , by pressing a button ) ; ( 2 ) the user tends to backspace incorrect actions ; and ( 3 ) the user does not tend to backspace correct actions . Hence , we assign a positive reward to actions that were not backspaced , and assign zero reward to backspaced actions . Formally , let r ∈ { 0 , 1 } denote this reward signal , where r = 0 indicates an incorrect action and r = 1 indicates a correct action . 2.2 TRAINING THE REWARD MODEL TO PREDICT FEEDBACK . In order to perform actions that minimize expected regret – i.e. , the total number of backspaces over time – we need to learn a model that predicts whether or not the user will backspace a given action in a given context . To do so , we learn a reward model pθ ( r|x , u ) , where pθ is a neural network and θ are the weights . Since the reward r ∈ { 0 , 1 } can only take on one of two values , pθ is a binary classifier . We train this binary classifier on a dataset D of input-action-reward triples ( x , u , r ) . In particular , we fit the model pθ by optimizing the maximum-likelihood objective ; i.e. , the binary cross-entropy loss ( see Equation 2 in the appendix ) . Since X2T learns from human-in-the-loop feedback , the amount of training data is limited by how frequently the user operates the interface . To reduce the amount of online interaction data needed to train the reward model , we use offline pretraining . We assume that the user already has access to some default interface for typing . We also assume access to an offline dataset of input-action pairs generated by the user and this default interface . We assign zero rewards to the backspaced actions and positive rewards to the non-backspaced actions in this offline dataset , and initially train our reward model to predict these rewards given the user ' s inputs and the default interface ' s actions . Thus , when X2T is initially deployed , the reward model has already been trained on the offline data , and requires less online interaction data to reach peak accuracy .

Algorithm 1 X-to-Text ( X2T )
Require : π̄ , θinit ▷ default interface , pretrained reward model parameters
while true do
  x ∼ puser ( x ) ▷ user gives input
  u ∼ π ( u|x ) ∝ pθ ( r = 1|x , u ) π̄ ( u|x ) ▷ interface performs action
  r ← 0 if user backspaces else 1 ▷ infer reward from user feedback
  D ← D ∪ { ( x , u , r ) } ▷ store online input-action-reward data
  θ ← θ + ∇θ ∑ ( x , u , r ) ∼ D log pθ ( r|x , u ) ▷ update reward model w/SGD

2.3 USING THE REWARD MODEL TO SELECT ACTIONS . Even with offline pretraining , the initial reward model may not be accurate enough for practical use . To further improve the initial performance of our interface at the onset of online training , we combine our reward model pθ ( r|x , u ) with the default interface π̄ ( u|x ) . We assume that π̄ is a stochastic policy and that we can evaluate it on specific inputs , but do not require access to its implementation . We set our policy π ( u|x ) = p ( u|x , r = 1 ) to be the probability of an action conditional on optimality , following the control-as-inference framework ( Levine , 2018 ) . Applying Bayes ' theorem , we get p ( u|x , r = 1 ) ∝ p ( r = 1|x , u ) p ( u|x ) . The first term is given by our reward model pθ , and the second term is given by the default interface . Combining these , we get the policy

π ( u|x ) ∝ pθ ( r = 1|x , u ) π̄ ( u|x ) . ( 1 )
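A small sketch of the action-selection rule in Equation (1), i.e. sampling u with probability proportional to pθ(r = 1|x, u) π̄(u|x); here `reward_model` and `default_interface` are placeholder callables, not the paper's actual interfaces, and the sampling details are an assumption.

```python
# Sketch of pi(u|x) ∝ p_theta(r=1|x,u) * pi_bar(u|x) from Eq. (1); names are illustrative.
import numpy as np

def select_action(x, actions, reward_model, default_interface, rng=None):
    rng = rng or np.random.default_rng()
    prior = np.array([default_interface(u, x) for u in actions])     # pi_bar(u|x)
    p_correct = np.array([reward_model(x, u) for u in actions])      # p_theta(r = 1|x, u)
    posterior = p_correct * prior
    posterior = posterior / posterior.sum()                          # normalize over the action set U
    return actions[rng.choice(len(actions), p=posterior)]
```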
This decomposition of the policy improves the initial performance of our interface at the onset of online training , and guides exploration for training the reward model . It also provides a framework for incorporating a language model into our interface , as described in Section 4.3 . Our x-to-text ( X2T ) method is summarized in Algorithm 1 . In the beginning , we assume the user has already been operating the default interface π̄ for some time . In doing so , they generate an ' offline ' dataset that we use to train the initial reward model parameters θinit . When the user starts using X2T , our interface π already improves upon the default interface π̄ by combining the default interface with the initial reward model via Equation 1 . As the user continues operating our interface , the resulting online data is used to maintain or improve the accuracy of the reward model . At each timestep , the user provides the interface with input x ∼ puser ( x ) . Although standard contextual bandit methods assume that the inputs x are i.i.d. , we find that X2T performs well even when the inputs are correlated due to user adaptation ( see Section 4.2 ) or the constraints of natural language ( see Section 4.4 ) . The interface then uses the policy in Equation 1 to select an action u . We then update the reward model pθ by taking one step of stochastic gradient descent to optimize the maximum-likelihood objective in Equation 2 . Appendix A.1 discusses the implementation details .
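For completeness, a rough PyTorch-flavoured rendering of the online loop in Algorithm 1 is given below; the helpers `observe_input` and `user_backspaced` are hypothetical stand-ins for the real interface plumbing, and the minibatch resampling is an illustrative way of realizing the single SGD step described above.

```python
# Rough sketch of the online loop of Algorithm 1; assumes reward_model(xs, us) returns one
# probability per example, and that inputs and actions are already tensors.
import random
import torch
import torch.nn.functional as F

def x2t_online_loop(reward_model, optimizer, policy, observe_input, user_backspaced,
                    dataset, batch_size=32):
    while True:
        x = observe_input()                                # x ~ p_user(x)
        u = policy(x)                                      # sample from pi(u|x) of Eq. (1)
        r = 0.0 if user_backspaced(u) else 1.0             # infer reward from backspaces
        dataset.append((x, u, r))
        batch = random.sample(dataset, min(batch_size, len(dataset)))
        xs, us, rs = zip(*batch)
        pred = reward_model(torch.stack(xs), torch.stack(us))    # p_theta(r = 1|x, u)
        loss = F.binary_cross_entropy(pred, torch.tensor(rs))
        optimizer.zero_grad(); loss.backward(); optimizer.step() # one SGD step per timestep
```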
This work presents a method for online learning of an assistive typing user interface (X2T) from implicit user feedback. User inputs for such an assistive typing interface are assumed to be in the form of eye gaze or handwritten characters. The implicit human feedback, however, is assumed to be backspaces typed on a keyboard. Backspaces are used to delete words predicted by the assistive typing interface based on the user's input. The online learning of such an interface, to improve its assistive performance and adapt to the user over time, is framed as a contextual bandit problem. A reward prediction network is trained to predict the use of backspaces (implicit feedback) by the user. This reward prediction network, combined with the default interface policy via Bayes' theorem, is used to update the policy of the typing interface. The experimental results from two user studies reveal that the presented method performs better than a non-adaptive default interface, that it stimulates user co-adaptation to the interface, and that offline learning accelerates online learning.
SP:75fbb95d000d888615a32a695fd6c673055b3678
X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback
1 INTRODUCTION . Recent advances in user interfaces have enabled people with sensorimotor impairments to more effectively communicate their intent to machines . For example , Ward et al . ( 2000 ) enable users to type characters using an eye gaze tracker instead of a keyboard , and Willett et al . ( 2020 ) enable a paralyzed human patient to type using a brain implant that records neural activity . The main challenge in building such interfaces is translating high-dimensional , continuous user input into desired actions . Standard methods typically calibrate the interface on predefined training tasks for which expert demonstrations are available , then deploy the trained interface . Unfortunately , this does not enable the interface to improve with use or adapt to distributional shift in the user inputs . In this paper , we focus on the problem of assistive typing : helping a user select words or characters without access to a keyboard , using eye gaze inputs ( Ward et al. , 2000 ) ; handwriting inputs ( see Figure 7 in the appendix ) , which can be easier to provide than direct keystrokes ( Willett et al. , 2020 ) ; or inputs from an electrocorticography-based brain implant ( Leuthardt et al. , 2004 ; Silversmith et al. , 2020 ) . To enable any existing , default interface to continually adapt to the user , we train a model using online learning from user feedback . The key insight is that the user provides feedback on the interface ’ s actions via backspaces , which indicate that the interface did not perform the desired action in response to a given input . By learning from this naturally-occurring feedback signal instead of an explicit label , we do not require any additional effort from the user to improve the interface . Furthermore , because our method is applied on top of the user ’ s default interface , our approach is complementary to other work that develops state-of-the-art , domain-specific methods for problems like gaze tracking and handwriting recognition . Figure 1 describes our algorithm : we initialize our model using offline data generated by the default interface , deploy our interface as an augmentation to the default interface , collect online feedback , and update our model . We formulate assistive typing as an online decision-making problem , in which the interface receives observations of user inputs , performs actions that select words or characters , and receives a reward signal that is automatically constructed from the user ’ s backspaces . To improve the default interface ’ s actions , we fit a neural network reward model that predicts the reward signal given the user ’ s input and the interface ’ s action . Upon observing a user input , our interface uses the trained reward model to update the prior policy given by the default interface to a posterior policy conditioned on optimality , then samples an action from this posterior ( see Figure 1 ) . We call this method x-to-text ( X2T ) , where x refers to the arbitrary type of user input ; e.g. , eye gaze or brain activity . Our primary contribution is the X2T algorithm for continual learning of a communication interface from user feedback . We primarily evaluate X2T through an online user study with 12 participants who use a webcam-based gaze tracking system to select words from a display . To run ablation experiments that would be impractical in the online study , we also conduct an observational study with 60 users who use a tablet and stylus to draw pictures of individual characters . 
The results show that X2T quickly learns to map input images of eye gaze or character drawings to discrete word or character selections . By learning from online feedback , X2T improves upon a default interface that is only trained once using supervised learning and , as a result , suffers from distribution shift ( e.g. , caused by changes in the user ’ s head position and lighting over time in the gaze experiment ) . X2T automatically overcomes calibration problems with the gaze tracker by adapting to the miscalibrations over time , without the need for explicit re-calibration . Furthermore , X2T leverages offline data generated by the default interface to accelerate online learning , stimulates co-adaptation from the user in the online study , and personalizes the interface to the handwriting style of each user in the observational study . 2 LEARNING TO INFER INTENT FROM USER INPUT . In our problem setting , the user can not directly perform actions ; e.g. , due to a sensorimotor impairment . Instead , the user relies on an assistive typing interface , where the user ’ s intended action is inferred from available inputs such as webcam images of eye gaze or handwritten character drawings . As such , we formulate assistive typing as a contextual bandit problem ( Langford & Zhang , 2008 ; Yue & Joachims , 2009 ; Li et al. , 2010 ; Lan & Baraniuk , 2016 ; Gordon et al. , 2019 ) . At each timestep , the user provides the interface with a context x ∈ X , where X is the set of possible user inputs ( e.g. , webcam images ) . The interface then performs an action u ∈ U , where U is the set of possible actions ( e.g. , word selections ) . We assume the true reward function is unknown , since the user can not directly specify their desired task ( e.g. , writing an email or filling out a form ) . Instead of eliciting a reward function or explicit reward signal from the user , we automatically construct a reward signal from the user ’ s backspaces . The key idea is to treat backspaces as feedback on the accuracy of the interface ’ s actions . Our approach to this problem is outlined in Figure 1 . We aim to minimize expected regret , which , in our setting , is characterized by the total number of backspaces throughout the lifetime of the interface . While a number of contextual bandit algorithms with lower regret bounds have been proposed in prior work ( Lattimore & Szepesvári , 2020 ) , we use a simple strategy that works well in our experiments : train a neural network reward model to predict the reward given the user ’ s input and the interface ’ s action , and select actions with probability proportional to their predicted optimality . Our approach is similar to prior work on deep contextual multi-armed bandits ( Collier & Llorens , 2018 ) and NeuralUCB ( Zhou et al. , 2019 ) , except that instead of using Thompson sampling or UCB to balance exploration and exploitation , we use a simple , stochastic policy . 2.1 MODELING USER BEHAVIOR AND FEEDBACK . Unlike in the standard multi-armed bandit framework , we do not get to observe an extrinsic reward signal that captures the underlying task that the user aims to perform . To address this issue , we infer rewards from naturally-occurring user behavior . In particular , in the assistive typing setting , we take advantage of the fact that we can observe when the user backspaces ; i.e. , when they delete the most recent word or character typed by the interface . 
To infer rewards from backspaces , we make two assumptions about user behavior : ( 1 ) the user can perform a backspace action independently of our interface ( e.g. , by pressing a button ) ; ( 2 ) the user tends to backspace incorrect actions ; and ( 3 ) the user does not tend to backspace correct actions . Hence , we assign a positive reward to actions that were not backspaced , and assign zero reward to backspaced actions . Formally , let r ∈ { 0 , 1 } denote this reward signal , where r = 0 indicates an incorrect action and r = 1 indicates a correct action . 2.2 TRAINING THE REWARD MODEL TO PREDICT FEEDBACK . In order to perform actions that minimize expected regret – i.e. , the total number of backspaces over time – we need to learn a model that predicts whether or not the user will backspace a given action in a given context . To do so , we learn a reward model pθ ( r|x , u ) , where pθ is a neural network and θ are the weights . Since the reward r ∈ { 0 , 1 } can only take on one of two values , pθ is a binary classifier . We train this binary classifier on a dataset D of input-action-reward triples ( x , u , r ) . In particular , we fit the model pθ by optimizing the maximum-likelihood objective ; i.e. , the binary cross-entropy loss ( see Equation 2 in the appendix ) . Since X2T learns from human-in-the-loop feedback , the amount of training data is limited by how frequently the user operates the interface . To reduce the amount of online interaction data needed to train the reward model , we use offline pretraining . We assume that the user already has access to some default interface for typing . We also assume access to an offline dataset of input-action pairs generated by the user and this default interface . We assign zero rewards to the backspaced actions and positive rewards to the non-backspaced actions in this offline dataset , and initially train our reward model to predict these rewards given the user ’ s inputs and the default interface ’ s actions . Thus , when X2T is initially deployed , the reward model has already been trained on the offline data , and requires less online interaction data to reach peak accuracy . Algorithm 1 X-to-Text ( X2T ) Require π̄ , θinit . default interface , pretrained reward model parameters while true do x ∼ puser ( x ) . user gives input u ∼ π ( u|x ) ∝ pθ ( r = 1|x , u ) π̄ ( u|x ) . interface performs action r ← 0 if user backspaces else 1 . infer reward from user feedback D ← D ∪ { ( x , u , r ) } . store online input-action-reward data θ ← θ +∇θ ∑ ( x , u , r ) ∼D log ( pθ ( r|x , u ) ) . update reward model w/SGD 2.3 USING THE REWARD MODEL TO SELECT ACTIONS . Even with offline pretraining , the initial reward model may not be accurate enough for practical use . To further improve the initial performance of our interface at the onset of online training , we combine our reward model pθ ( r|x , u ) with the default interface π̄ ( u|x ) . We assume that π̄ is a stochastic policy and that we can evaluate it on specific inputs , but do not require access to its implementation . We set our policy π ( u|x ) = p ( u|x , r = 1 ) to be the probability of an action conditional on optimality , following the control-as-inference framework ( Levine , 2018 ) . Applying Bayes ’ theorem , we get p ( u|x , r = 1 ) ∝ p ( r = 1|x , u ) p ( u|x ) . The first term is given by our reward model pθ , and the second term is given by the default interface . Combining these , we get the policy π ( u|x ) ∝ pθ ( r = 1|x , u ) π̄ ( u|x ) . 
This decomposition of the policy improves the initial performance of our interface at the onset of online training , and guides exploration for training the reward model . It also provides a framework for incorporating a language model into our interface , as described in Section 4.3 . Our x-to-text ( X2T ) method is summarized in Algorithm 1 . In the beginning , we assume the user has already been operating the default interface π̄ for some time . In doing so , they generate an ‘ offline ’ dataset that we use to train the initial reward model parameters θinit . When the user starts using X2T , our interface π already improves upon the default interface π̄ by combining the default interface with the initial reward model via Equation 1 . As the user continues operating our interface , the resulting online data is used to maintain or improve the accuracy of the reward model . At each timestep , the user provides the interface with input x ∼ puser ( x ) . Although standard contextual bandit methods assume that the inputs x are i.i.d. , we find that X2T performs well even when the inputs are correlated due to user adaptation ( see Section 4.2 ) or the constraints of natural language ( see Section 4.4 ) . The interface then uses the policy in Equation 1 to select an action u . We then update the reward model pθ by taking one step of stochastic gradient descent to optimize the maximum-likelihood objective in Equation 2 . Appendix A.1 discusses the implementation details .
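A small sketch of Equation 1, again building on the `RewardModel` above: the policy multiplies the reward model's probability of correctness by the default interface's probability for each action and renormalizes. `default_probs` (the distribution π̄(·|x) over actions) is assumed to be available as a vector; how it is produced is outside this sketch.

```python
import torch

def x2t_policy(model, x, default_probs):
    """pi(u|x) ∝ p_theta(r = 1 | x, u) * pi_bar(u|x)   (Equation 1)."""
    u_all = torch.eye(model.num_actions)
    x_rep = x.unsqueeze(0).expand(model.num_actions, -1)
    p_correct = model(x_rep, u_all)            # reward-model term
    unnormalized = p_correct * default_probs   # default-interface term pi_bar(u|x)
    return unnormalized / unnormalized.sum()

# usage: sample an action, then infer the reward from whether the user backspaces
# probs = x2t_policy(reward_model, x, default_interface_probs)
# u = torch.multinomial(probs, 1).item()
# r = 0.0 if user_backspaced else 1.0
```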
The authors propose a simple algorithm for using online learning from implicit human feedback to improve systems that operate in the contextual bandit setting. The main idea is to capture the presence/absence of corrective actions and use this information to infer a reward signal that the system can use to make decisions later. The proposed method is instantiated for text-entry tasks as a system called X2T, and the authors perform an extensive empirical evaluation that seems to show that the system is very successful.
SP:75fbb95d000d888615a32a695fd6c673055b3678
Learning to Use Future Information in Simultaneous Translation
1 INTRODUCTION . Neural machine translation ( NMT ) is an important task for the machine learning community and many advanced models have been designed ( Sutskever et al. , 2014 ; Bahdanau et al. , 2014 ; Vaswani et al. , 2017 ) . In this work , we focus on a more challenging task in NMT , simultaneous translation ( also known as simultaneous interpretation ) , which is widely used in international conferences , summits and business . Different from standard NMT , simultaneous NMT has a stricter requirement for latency . We can not wait until the end of a source sentence but have to start the translation right after reading the first few words . That is , the translator is required to provide instant translation based on a partial source sentence . Simultaneous NMT is formulated as a prefix-to-prefix problem ( Ma et al. , 2019 ; 2020 ; Xiong et al. , 2019 ) , where a prefix refers to a subsequence starting from the beginning of the sentence to be translated . In simultaneous NMT , we face more uncertainty than conventional NMT , since the translation starts with a partial source sentence rather than the complete one . Wait-k inference ( Ma et al. , 2019 ) is a simple yet effective strategy in simultaneous NMT where the translation is k words behind the source input . Rather than instant translation of each word , wait-k inference actually leverages k more future words during the inference phase . Obviously , a larger k can bring more future information , and therefore results in better translation quality but at the cost of larger latency . Thus , when used in real-world applications , we should have a relatively small k for simultaneous NMT . While only small k values are allowed in inference , we observe that wait-m training with m > k will lead to better accuracy for wait-k inference . Figure 1 shows the results of training with wait-m but testing with wait-3 on the IWSLT ’ 14 English→German translation dataset . If we train with m = 3 , we will obtain a 22.79 BLEU score . If we set m to larger values such as 7 , 13 or 21 and test with wait-3 , we can get better BLEU scores . That is , the model can benefit from the availability of more future information in training . This is consistent with the observation in ( Ma et al. , 2019 ) . The challenge is how much future information we should use in training . As shown in Figure 1 , using more future information does not monotonically improve the translation accuracy of wait-k inference , mainly because more future information results in a larger mismatch between training and inference ( i.e. , m − k more words are used in training than in inference ) . Besides , due to the diversity of natural language , intuitively , using different m ’ s for different sentences will lead to better performance . Even for the same sentence pair , the optimal m for training might vary in different training stages . In this work , we propose an algorithm that can automatically determine how much future information to use in training for simultaneous NMT . Given a pre-defined k , we want to maximize the performance of wait-k inference . We have a set of M training strategies wait-m with different waiting thresholds m ( m ∈ { 1 , 2 , · · · , M } ) . We introduce a controller such that given a training sample , the controller dynamically selects one of these training strategies so as to maximize the validation performance on wait-k inference . Which wait-m training strategy to select is based on the data itself and the network status of the current translation model . 
The controller and the translation model are jointly trained , and the learning process is formulated as a bi-level optimization problem ( Sinha et al. , 2018 ) , where one optimization problem is nested within another . Our contributions are summarized as follows : ( 1 ) We propose a new method for simultaneous NMT , where a controller is introduced to adaptively determine how much future information to use for training . The controller and the translation model are jointly learned through bi-level optimization . ( 2 ) Experiments on four datasets show that our method improves the wait-k baseline by 1 to 3 BLEU scores , and also consistently outperforms several heuristic baselines leveraging future information . 2 RELATED WORK . Previous work on simultaneous translation can be categorized by whether it uses a fixed decoding scheduler or an adaptive one . Fixed policies usually use pre-defined rules to determine when to read or to write a new token ( Dalvi et al. , 2018 ; Ma et al. , 2019 ) . Wait-k is the representative method for a fixed scheduler ( Ma et al. , 2019 ) , where the decoding is always k words behind the source input . Wait-k achieves good results in terms of translation quality and controllable latency , and has been used in speech-related simultaneous translation ( Zhang et al. , 2019 ; Ren et al. , 2020 ) . For methods that use adaptive schedulers , Cho & Esipova ( 2016 ) proposed wait-if-worse ( WIW ) and wait-if-diff ( WID ) methods which generate a new target word if its probability does not decrease ( for WIW ) or the generated word is unchanged ( for WID ) after reading a new source token . Grissom II et al . ( 2014 ) and Gu et al . ( 2017 ) used reinforcement learning to train the read/write controller , while Zheng et al . ( 2019a ) obtained it in a supervised way . Alinejad et al . ( 2018 ) added a “ predict ” operator to the controller so that it can anticipate future source inputs . Zheng et al . ( 2019b ) introduced a “ delay ” token into the target vocabulary indicating that the model should read a new word instead of generating a new one . Arivazhagan et al . ( 2019 ) proposed monotonic infinite lookback attention ( MILk ) , which first used a hard attention model to determine when to read new source tokens , and then a soft attention model to perform translation . Ma et al . ( 2020 ) extended MILk into a multi-head version and proposed monotonic multihead attention ( MMA ) with two variants : MMA-IL ( Infinite Lookback ) which has higher translation quality by looking back at all available source tokens , and MMA-H ( ard ) which is more computationally efficient by limiting the attention span . Besides , Zheng et al . ( 2020a ) extended wait-k to an adaptive strategy by training multiple wait-m models with different m ’ s and adaptively selecting a decoding strategy during inference . Zheng et al . ( 2020b ) explored a new setting , where at each timestep , the translation model over-generates the target words and corrects them in a timely fashion . 3 PROBLEM FORMULATION AND BACKGROUND . In this section , we first introduce the notations used in this work , followed by the formulation of the wait-k strategy , and then we introduce our network architecture adapted from ( Ma et al. , 2019 ) . 3.1 NOTATIONS AND FORMULATION . Let X and Y denote the source language domain and target language domain . For any x ∈ X and y ∈ Y , let xi and yi denote the i-th token in x and y respectively . Lx and Ly are the numbers of tokens in x and y. 
x≤t represents a prefix of x , which is the subsequence x1 , x2 , · · · , xt , and similarly for y≤t . Let Dtr and Dva denote the training and validation sets , both of which are collections of bilingual sentence pairs . The wait-k strategy ( Ma et al. , 2019 ) is defined as follows : given an input x ∈ X , the generation of the translation y is always k tokens behind reading x . That is , at the t-th decoding step , we generate token yt based on x≤t+k−1 ( more strictly , x≤min { t+k−1 , Lx } ) . Our goal is to obtain a model f : X 7→ Y with parameter θ that can achieve better results with wait-k inference . 3.2 MODEL ARCHITECTURE . Our model for simultaneous NMT is based on Transformer model ( Vaswani et al. , 2017 ) . The model includes an encoder and a decoder , which are used for incrementally processing the source and target sentences respectively . Both the encoder and decoder are stacked of L blocks . We mainly introduce the differences compared with the standard Transformer . ( 1 ) Incremental encoding : Let hlt denote the output of the t-th position from block l. For ease of reference , let H l≤t denote { hl1 , hl2 , · · · , hlt } , and let h0t denote the embeddings of the t-th token . An attention model attn ( q , K , V ) , takes a query q ∈ Rd , a set of keys K and values V as inputs . K and V are of equal size , q ∈ Rd where d ∈ Z+ is the dimension , ki ∈ Rd and vi ∈ Rd are the i-th key and value . attn is defined as follows : attn ( q , K , V ) = |K|∑ i=1 αiWvvi , αi = exp ( ( Wqq ) > ( Wkki ) ) Z , Z = |K|∑ i=1 exp ( ( Wqq ) > ( Wkki ) ) , ( 1 ) where W ’ s are the parameters to be optimized . In the encoder side , hlt is obtained in a unidirectional way : hlt = attn ( h l−1 t , H l−1 ≤t , H l−1 ≤t ) . That is , the model can only attend to the previously generated hidden representations , and the computation complexity is O ( L2x ) . In comparison , ( Ma et al. , 2019 ) still leverages bidirectional attention , whose computation complexity is O ( L3x ) . We find that unidirectional attention is much more efficient than bidirectional attention without much accuracy drop ( see Appendix D.1 for details ) . ( 2 ) Incremental decoding : Since we use wait-k strategy , the decoding starts before reading all inputs . At the t-th decoding step , the decoder can only read x≤t+k−1 . When t ≤ Lx−k , the decoder greedily generates one token at each step , i.e. , the token is yt = argmaxw∈V P ( w|y≤t−1 ; HL≤t+k−1 ) , where V is the vocabulary of the target language . When t > Lx − k , the model has read the full input sentence and can generate words using beam search ( Ma et al. , 2019 ) . 4 OUR METHOD . We first introduce our algorithm in Section 4.1 , and then we discuss its relationship with several other heuristic algorithms that leverage future information in Section 4.2 . 4.1 ALGORITHM . Let f ( · · · ; θ ) denote a translation model parameterized by θ , and let ϕ denote the controller parameterized by ω to guide the training process of f . f ( · · · ; θ∗ ( ω ) ) is the translation model obtained under the guidance of the controller ϕ ( · · · ; ω ) , where θ∗ ( ω ) is the corresponding parameter . For each training data ( x , y ) , the controller ϕ adaptively assigns a training task wait-m , where m ∈ { 1 , 2 , · · · , M } , andM ∈ Z+ is a pre-defined hyperparameter . The input of ϕ consists of two parts : ( i ) the information of the training data ( x , y ) ; ( ii ) the network status of the translation model f . For ease of reference , denote these input features as Ix , y , f . 
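The wait-k visibility rule and the unidirectional encoder described here can both be expressed as boolean attention masks. Below is a small sketch (0-indexed positions, so decoding step t may see source tokens up to min(t + k − 1, Lx − 1)); the exact tensor layout expected by a given Transformer implementation is an assumption.

```python
import torch

def wait_k_source_mask(src_len, tgt_len, k):
    """mask[t, s] = True iff decoding step t may attend to source position s,
    i.e. s <= min(t + k - 1, src_len - 1) under wait-k (0-indexed)."""
    steps = torch.arange(tgt_len).unsqueeze(1)   # (tgt_len, 1)
    src = torch.arange(src_len).unsqueeze(0)     # (1, src_len)
    return src <= torch.clamp(steps + k - 1, max=src_len - 1)

def causal_encoder_mask(src_len):
    """Unidirectional (incremental) encoder self-attention: position t sees positions <= t."""
    return torch.tril(torch.ones(src_len, src_len, dtype=torch.bool))

# Training with a sampled wait-m task simply swaps k for m in wait_k_source_mask.
```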
We will discuss how to design Ix , y , f in Section 5.1 . LetMk ( Dva ; θ∗ ( ω ) ) denote the validation metric , which is evaluated on the validation set Dva with model f ( · · · ; θ∗ ( ω ) ) and wait-k inference . We formulate the training process of f and ϕ as a bi-level optimization , where two optimization problems are nested together . In the inner-optimization , given a ω , we want to obtain the model f ( · · · , θ∗ ( ω ) ) that can minimize the loss function ` on the training set Dtr under the guidance of the controller ϕ ( · · · , ω ) . In the outer-optimization , given a translation model θ∗ ( ω ) , we optimize ω to maximize the validation performance Mk . The mathematical formulation is shown as follows : max ω Mk ( Dva ; θ∗ ( ω ) ) ; s.t . θ∗ ( ω ) = argmin θ 1 |Dtr| ∑ ( x , y ) ∼Dtr Em∼ϕ ( Ix , y , f ; ω ) ` ( x , y , m ; θ ) ; where ` ( x , y , m ; θ ) = ∑ ( x , y ) logP ( y|x ; θ , m ) = ∑ ( x , y ) |y|∑ t=1 logP ( yt|y≤t−1 , x≤t+m−1 ) . ( 2 ) We optimize Equation 2 in an alternative way , where we first optimize θ with a given ω , and then update ω using the REINFORCE algorithm . We repeat the above process until convergence . Details are in Algorithm 1 : Algorithm 1 : The optimization algorithm . 1 Input : Training episode E ; internal update iterations T ; learning rate ηθ of the translation model ; learning rate ηω of the controller ; batch size B ; initial parameters ω , θ ; 2 for e← 1 : E do 3 Init a buffer to store states and actions : B = { } ; 4 for t← 1 : T do 5 Randomly sample a mini-batch of data De , t from Dtr with batch size B ; 6 Assign a wait-m task to each data : D̃ = { ( x , y , m ) | ( x , y ) ∈ De , t , m ∼ ϕ ( Ix , y , f ; ω ) } , where the batch size is B , and m is sampled from to the output distribution of ϕ ; 7 Update the buffer : B ← B ∪ { ( Ix , y , f , m ) | ( x , y , m ) ∈ D̃ } ; 8 Update the translation model : θ ← θ − ( ηθ/B ) ∇θ ∑ ( x , y , m ) ∈D̃ ` ( x , y , m ; θ ) ; 9 Calculate the validation performance as the reward : Re =Mk ( Dva ; θe , T ) ; 10 Update the controller : ω ← ω + ηωRe ∑ ( I , m ) ∈B∇ω logP ( ϕ ( I ; ω ) = m ) . 11 Return θ. Algorithm 1 consists of E episodes ( i.e. , the outer loop ) , and each episode consists of T update iterations ( i.e. , the inner loop ) . The inner loop ( from step 4 to step 8 ) aims to optimize the θ , where we can update the parameter with any gradient based optimizer like momentum SGD , Adam ( Kingma & Ba , 2015 ) , etc . The outer loop ( from step 2 to step 10 ) aims to optimize ω. ϕ ( Ix , y , f ; ω ) can be regarded as a policy network , where the state is Ix , y , f , the action is the choice of the task wait-m , m ∈ { 1 , 2 , · · · , M } , and the reward is the validation performance Re ( step 9 ) . At the end of each episode , we update ω using REINFORCE algorithm ( step 10 ) .
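A rough sketch of one episode of Algorithm 1: the controller samples a wait-m task per sentence pair, the translation model takes T inner gradient steps on the corresponding wait-m losses, and the controller is then updated once with REINFORCE using the wait-k validation score as the reward. The feature extractor, the wait-m loss, and the validation routine are passed in as callables because their details (Section 5.1, Equation 2) are not reproduced here; their signatures are assumptions.

```python
import torch
import torch.nn as nn

class WaitMController(nn.Module):
    """phi(I; omega): maps state features I_{x,y,f} to a distribution over m in {1, ..., M}."""
    def __init__(self, feat_dim, M, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, M))

    def forward(self, feats):                                  # feats: (B, feat_dim)
        return torch.distributions.Categorical(logits=self.net(feats))

def run_episode(translator, controller, opt_theta, opt_omega,
                batches, state_features, wait_m_loss, validate_wait_k):
    """One outer-loop episode: T inner updates of theta, then one REINFORCE step on omega."""
    log_probs = []
    for batch in batches:                                      # T mini-batches from D_tr
        feats = state_features(batch, translator).detach()     # I_{x,y,f}, no grad to translator
        dist = controller(feats)
        m = dist.sample() + 1                                  # sampled wait-m task per pair
        log_probs.append(dist.log_prob(m - 1).sum())
        loss = wait_m_loss(translator, batch, m)               # -sum_t log P(y_t | y_<t, x_<=t+m-1)
        opt_theta.zero_grad(); loss.backward(); opt_theta.step()
    reward = validate_wait_k(translator)                       # e.g. BLEU on D_va with wait-k
    policy_loss = -reward * torch.stack(log_probs).sum()       # REINFORCE update (step 10)
    opt_omega.zero_grad(); policy_loss.backward(); opt_omega.step()
    return reward
```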
This paper proposes a new training method for wait-k simultaneous translation. Rather than training on prefix pairs where the target prefix lags the source by k tokens, it uses an RL controller to determine an optimal lag for each sentence pair. The controller uses a small set of features intended to capture training progress, and is trained with REINFORCE to minimize wait-k loss on a validation set, in alternation with main training steps. This method shows consistent gains over various wait-k training heuristics, and some gains over other approaches that adapt the lag at inference time.
SP:28a8d17fa8de3d51a3837f4e306facaafd416768
This paper proposes a training strategy for simultaneous translation that chooses an appropriate amount of look-ahead information for each example. Based on the observation that wait-k inference can be improved by training with more look-ahead, the method introduces a function that determines the look-ahead length for the current example (source x, target y, and the translation model f). It is used only as guidance during training; the decoding criterion at inference remains unchanged.
Amortized Causal Discovery: Learning to Infer Causal Graphs from Time-Series Data
1 INTRODUCTION . Inferring causal relations in observational time-series is central to many fields of scientific inquiry ( Berzuini et al. , 2012 ; Spirtes et al. , 2000 ) . Suppose you want to analyze fMRI data , which measures the activity of different brain regions over time — how can you infer the ( causal ) influence of one brain region on another ? This question is addressed by the field of causal discovery ( Glymour et al. , 2019 ) . Methods within this field allow us to infer causal relations from observational data - when interventions ( e.g . randomized trials ) are infeasible , unethical or too expensive . In time-series , the assumption that causes temporally precede their effects enables us to discover causal relations in observational data ( Peters et al. , 2017 ) ; with approaches relying on conditional independence tests ( Entner and Hoyer , 2010 ) , scoring functions ( Chickering , 2002 ) , or deep learning ( Tank et al. , 2018 ) . All of these methods assume that samples share a single underlying causal graph and refit a new model whenever this assumption does not hold . However , samples with different underlying causal graphs may share relevant information such as the dynamics describing the effects of causal relations . fMRI test subjects may have varying brain connectivity but the same underlying neurochemistry ; social networks may have differing structure but comparable interpersonal relationships ; different stocks may relate differently to one another but obey similar market forces . Despite a range of relevant applications , inferring causal relations across samples with different underlying causal graphs is as of yet largely unexplored . In this paper , we propose a novel causal discovery framework for time-series that embraces this aspect : Amortized Causal Discovery ( Fig . 1 ) . In this framework , we learn to infer causal relations across samples with different underlying causal graphs but shared dynamics . We achieve this by separating the causal relation prediction from the modeling of their dynamics : an amortized encoder predicts the edges in the causal graph , and a decoder models the dynamics of the system under the predicted causal relations . This setup allows us to pool statistical strength across samples and to achieve significant improvements in performance with additional training data . It also allows us to infer causal relations in previously unseen samples without refitting our model . Additionally , we show that Amortized Causal Discovery allows us to improve robustness under hidden confounding by modeling the unobserved variables with the amortized encoder . Our contributions are as follows : • We formalize Amortized Causal Discovery ( ACD ) , a novel framework for causal discovery in time-series , in which we learn to infer causal relations from samples with different underlying causal graphs but shared dynamics . • We propose a variational model for ACD , applicable to multi-variate , non-linear data . • We present experiments demonstrating the effectiveness of this model on a range of causal discovery datasets , both in the fully observed setting and under hidden confounding . 2 BACKGROUND : GRANGER CAUSALITY . Granger causality ( Granger , 1969 ) is one of the most commonly used approaches to infer causal relations from observational time-series data . 
Its central assumption is that causes precede their effects : if the prediction of the future of time-series Y can be improved by knowing past elements of time-series X , then X “ Granger causes ” Y . Originally , Granger causality was defined for linear relations ; we follow the more recent definition of Tank et al . ( 2018 ) for non-linear Granger causality : Definition 2.1 . Non-Linear Granger Causality : Given N stationary time-series x = { x1 , ... xN } across time-steps t = { 1 , ... , T } and a non-linear autoregressive function gj , such that xt+1j = gj ( x ≤t 1 , ... , x ≤t N ) + ε t j , ( 1 ) where x≤tj = ( ... , x t−1 j , x t j ) denotes the present and past of series j and ε t j represents independent noise . In this setup , time-series i Granger causes j , if gj is not invariant to x ≤t i , i.e . if ∃ x′≤ti 6= x ≤t i : gj ( x ≤t 1 , ... , x ′≤t i , ... , x ≤t N ) 6= gj ( x ≤t 1 , ... , x ≤t i , ... x ≤t N ) . Granger causal relations are equivalent to causal relations in the underlying directed acyclic graph if all relevant variables are observed and no instantaneous1 connections exist ( Peters et al. , 2013 ; 2017 , Theorem 10.1 ) . Many methods for Granger causal discovery , including vector autoregressive ( Hyvärinen et al. , 2010 ) and more recent deep learning-based approaches ( Khanna and Tan , 2020 ; Tank et al. , 2018 ; Wu et al. , 2020 ) , can be encapsulated by a particular framework : 1 . Define a function fθ ( an MLP in Tank et al . ( 2018 ) , a linear model in Hyvärinen et al . ( 2010 ) ) , which learns to predict the next time-step of the test sequence x . 2 . Fit fθ to x by minimizing some loss L : θ ? = argminθ L ( x , fθ ) . 3 . Apply some fixed function h ( e.g . thresholding ) to the learned parameters to produce the Granger causal graph estimate for x : Ĝx = h ( θ ? ) . For instance , Tank et al . ( 2018 ) infer the Granger causal relations through examination of the weights θ ? : if all outgoing weights wij between time-series i and j are zero , then i does not Granger-cause j . 1connections between two variables at the same time step The shortcoming of this approach is that , when we have S samples x1 , . . . , xS with different underlying causal graphs , the parameters θ must be optimized separately for each of them . As a result , methods within this framework can not take advantage of the information that might be shared between samples . This motivates us to question : can we amortize this process ? 3 AMORTIZED CAUSAL DISCOVERY . We propose Amortized Causal Discovery ( ACD ) , a framework in which we learn to infer causal relations across samples with different underlying causal graphs but shared dynamics . To illustrate : Suppose you want to infer synaptic connections ( i.e . causal relations ) between neurons based on their spiking behaviour . You are given a set of S recordings ( i.e . samples ) , each containing N time-series representing the firing of N individual neurons . Even though you might record across different populations of neurons with different wiring , the dynamics of how neurons connected by synapses influence one another stays the same . ACD takes advantage of such shared dynamics to improve the prediction of causal relations . It can be summarized as follows : 1 . Define an encoding function fφ which learns to infer Granger causal relations of any sample xi in the training setXtrain . Define a decoding function fθ which learns to predict the next time-step of the samples under the inferred causal relations . 2 . 
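For contrast with the amortized approach introduced next, here is a minimal sketch of the three-step framework above in its simplest linear instantiation: fit a lagged linear model to a single sample and threshold the coefficient magnitudes to read off the Granger causal graph. The lag order and threshold are illustrative choices, not values from any of the cited papers.

```python
import numpy as np

def linear_granger_graph(x, lags=1, threshold=0.1):
    """Per-sample Granger baseline (steps 1-3 above), linear case.
    x: array of shape (N, T), one row per time-series.
    Returns a boolean matrix G where G[i, j] = True means i Granger-causes j."""
    N, T = x.shape
    past = np.stack([x[:, t - lags:t].ravel() for t in range(lags, T)])  # (T-lags, N*lags)
    future = x[:, lags:].T                                               # (T-lags, N)
    W, *_ = np.linalg.lstsq(past, future, rcond=None)                    # (N*lags, N) weights
    influence = np.abs(W).reshape(N, lags, N).max(axis=1)                # max over lags: i -> j
    return influence > threshold
```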
Fit fφ and fθ toXtrain by minimizing some loss L : fφ ? , fθ ? = argminfφ , fθ L ( Xtrain , fφ , fθ ) . 3 . For any given test sequence xtest , simply output the Granger causal graph estimate Ĝxtest : Ĝxtest = fφ ? ( xtest ) . By dividing the model into two parts , an encoder and a decoder , ACD can use the activations of fφ ? to infer causal structure . This increases the flexibility of our approach greatly compared to methods that use the learned weights θ ? such as the prior Granger causal discovery methods described in Section 2 . In this section , we describe our framework in more detail , and provide a probabilistic implementation thereof . We also extend our approach to model hidden confounders . Preliminaries We begin with a datasetX = { xs } Ss=1 of S samples , where each sample xs consists of N stationary time-series xs = { xs,1 , . . . , xs , N } across timesteps t = { 1 , ... , T } . We denote the t-th time-step of the i-th time-series of xs as xts , i.We assume there is a directed acyclic graph G1 : Ts = { V1 : Ts , E1 : Ts } underlying the generative process of each sample . This is a structural causal model ( SCM ) ( Pearl , 2009 ) . Its endogenous ( observed ) variables are vertices vts , i ∈ V1 : Ts for each time-series i and each time-step t. Every set of incoming edges to an endogenous variable defines inputs to a deterministic function gts , i which determines that variable ’ s value 2 . The edges are defined by ordered pairs of vertices E1 : Ts = { ( vts , i , vt ′ s , j ) } , which we make two assumptions about : 1 . No edges are instantaneous ( t = t′ ) or go back in time . Thus , t < t′ for all edges . 2 . Edges are invariant to time . Thus , if ( vts , i , v t+k s , j ) ∈ E1 : Ts , then ∀1 ≤ t′ ≤ T − k : ( vt ′ s , i , v t′+k s , j ) ∈ E1 : Ts . The associated structural equations gts , i are invariant to time as well , i.e . gts , i = gt ′ s , i ∀t , t′ . The first assumption states that causes temporally precede their effects and makes causal relations identifiable from observational data , when no hidden confounders are present ( Peters et al. , 2013 ; 2017 , Theorem 10.1 ) . The second simplifies modeling : it is a fairly general assumption which allows us to define dynamics that govern all time-steps ( Eq . ( 2 ) ) . Throughout this paper , we are interested in discovering the summary graph Gs = { Vs , Es } ( Peters et al. , 2017 ) . It consists of vertices vs , i ∈ Vs for each time-series i in sample s , and has directed edges whenever they exist in E1 : Ts at any time-step , i.e . Es = { ( vs , i , vs , j ) | ∃t , t′ : ( vts , i , vt ′ s , j ) ∈ E1 : Ts } . Note that while G1 : Ts is acyclic ( due to the first assumption above ) , the summary graph Gs may contain ( self- ) cycles . Amortized Causal Discovery The key assumption for Amortized Causal Discovery is that there exists some fixed function g that describes the dynamics of all samples xs ∈ X given their past 2The SCM also includes an exogenous ( unobserved ) , independently-sampled error variable v as a parent of each vertex v , which we do not model and thus leave out for brevity . observations x≤ts and their underlying causal graph Gs : xt+1s = g ( x ≤t s , Gs ) + εts . ( 2 ) There are two variables in this data-generating process that we would like to model : the causal graph Gs that is specific to sample xs , and the dynamics g that are shared across all samples . 
This separation between the causal graph and the dynamics allows us to divide our model accordingly : we introduce an amortized causal discovery encoder fφ which learns to infer a causal graph Gs given the sample xs , and a dynamics decoder fθ that learns to approximate g : xt+1s ≈ fθ ( x≤ts , fφ ( xs ) ) . ( 3 ) We formalize Amortized Causal Discovery ( ACD ) as follows . Let G be the domain of all possible summary graphs on xs : Gs ∈ G. Let X be the domain of any single step , partial or full , observed sequence : xts , x ≤t s , xs ∈ X . The model consists of two components : a causal discovery encoder fφ : X → G which infers a causal graph for each input sample , and a decoder fθ : X × G → X which models the dynamics . This model is optimized with a sample-wise loss ` : X×X→ R which scores how well the decoder models the true dynamics of xs , and a regularization term r : G→ R on the inferred graphs . For example , this function r may enforce sparsity by penalizing graphs with more edges . Note , that our formulation of the graph prediction problem is unsupervised : we do not have access to the true underlying graph Gs . Then , given some dataset Xtrain with S samples , we optimize : fφ ? , fθ ? = argminfφ , fθ L ( Xtrain , fφ , fθ ) ( 4 ) where L ( Xtrain , φ , θ ) = S∑ s=1 T−1∑ t=1 ` ( xt+1s , fθ ( x ≤t s , fφ ( xs ) ) ) + r ( fφ ( xs ) ) . ( 5 ) See Appendix B for a proof of the consistency of the loss ` and a discussion on regularization r. Once we have completed optimization , we can perform causal graph prediction on any new input test sample xtest in two ways – we can feed xtest into the amortized encoder and take its output as the predicted edges ( Eq . 6 ) ; or we can instantiate our estimate Ĝtest ∈ G which will be our edge predictions , and find the edges which best explain the observed sequence xtest by minimizing the ( learned ) decoding loss with respect to Ĝtest , which we term Test-Time Adaptation ( TTA ) ( Eq . 7 ) : ĜEnc = fφ ? ( xtest ) ; ( 6 ) ĜTTA = argminĜtest∈G L ( xtest , Ĝtest , fθ ? ) . ( 7 ) By separating the prediction of causal relations from the modeling of their dynamics , ACD yields a number of benefits . ACD can learn to infer causal relations across samples with different underlying causal graphs , and it can infer causal relations in previously unseen test samples without refitting ( Eq . ( 6 ) ) . By generalizing across samples , it can improve causal discovery performance with increasing training data size . We can replace either fφ or fθ with ground truth annotations , or simulate the outcome of counterfactual causal relations . Additionally , ACD can be applied in the standard causal discovery setting , where only a single causal graph underlies all samples , by replacing the amortized encoder fφ with an estimated graph Ĝ ( or distribution over G ) in Eq . ( 4 ) .
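The full model in the paper is variational (an NRI-style encoder over discrete edge types); the sketch below strips that down to the bare objective of Equations 4-5: an amortized encoder produces a soft adjacency matrix per sample, a shared decoder predicts the next step under it, and an edge-sparsity penalty stands in for the regularizer r. The encoder/decoder signatures and the MSE choice for the loss ℓ are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def acd_training_step(encoder, decoder, x, opt, sparsity=1e-3):
    """One gradient step on a simplified ACD objective.
    x: (batch, N, T) samples, each with its own (unknown) causal graph but shared dynamics.
    encoder(x) -> (batch, N, N) edge logits; decoder(history, edges) -> next-step predictions."""
    edge_probs = torch.sigmoid(encoder(x))            # soft relaxation of the summary graph G_s
    pred = decoder(x[..., :-1], edge_probs)           # predict x^{t+1} from x^{<=t} and the graph
    recon = F.mse_loss(pred, x[..., 1:])              # ell: one-step prediction error
    reg = edge_probs.sum(dim=(1, 2)).mean()           # r: penalize dense graphs
    loss = recon + sparsity * reg
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# At test time, graph prediction is just a forward pass (Eq. 6):
# G_hat = torch.sigmoid(encoder(x_test)) > 0.5
```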
The authors proposed a framework called Amortized Causal Discovery (ACD) for recovering causal relationships in time series where samples are generated from models with different underlying causal graphs but shared dynamics. This framework is applicable in settings such as modeling neural spike trains where the dynamics of how neurons react to the activities of other neurons remain the same. In the proposed framework, they considered a causal discovery encoder $f_{\phi}$, which tries to extract causal relationships and map them to a latent space. Moreover, there is a dynamics decoder $f_{\theta}$, which provides one-step predictions. The proposed architecture is similar to a variational auto-encoder and the encoder part is based on graph neural networks.
SP:3cda613e93b67aa8b562cd2564f4d3583fe9f2e8
The paper observes that the signal dynamics shared across a class of causal systems carry enough information to let the encoder of Neural Relational Inference (NRI, 2018) extract (Granger) causal graphs. It minimally extends the NRI model with an explicit no-edge type and demonstrates that the observation holds on several dynamical systems. Additionally, to handle unobserved common causes, the paper modifies the encoder to model them directly and shows improvements in ROC-AUC.
SP:3cda613e93b67aa8b562cd2564f4d3583fe9f2e8
Physics-aware, probabilistic model order reduction with guaranteed stability
Given ( small amounts of ) time-series ’ data from a high-dimensional , fine-grained , multiscale dynamical system , we propose a generative framework for learning an effective , lower-dimensional , coarse-grained dynamical model that is predictive of the fine-grained system ’ s long-term evolution but also of its behavior under different initial conditions . We target fine-grained models as they arise in physical applications ( e.g . molecular dynamics , agent-based models ) , the dynamics of which are strongly non-stationary but their transition to equilibrium is governed by unknown slow processes which are largely inaccessible by brute-force simulations . Approaches based on domain knowledge heavily rely on physical insight in identifying temporally slow features and fail to enforce the long-term stability of the learned dynamics . On the other hand , purely statistical frameworks lack interpretability and rely on large amounts of expensive simulation data ( long and multiple trajectories ) as they can not infuse domain knowledge . The generative framework proposed achieves the aforementioned desiderata by employing a flexible prior on the complex plane for the latent , slow processes , and an intermediate layer of physics-motivated latent variables that reduces reliance on data and imbues inductive bias . In contrast to existing schemes , it does not require the a priori definition of projection operators or encoders and addresses simultaneously the tasks of dimensionality reduction and model estimation . We demonstrate its efficacy and accuracy in multiscale physical systems of particle dynamics where probabilistic , long-term predictions of phenomena not contained in the training data are produced . 1 INTRODUCTION . High-dimensional , nonlinear systems are ubiquitous in engineering and computational physics . Their nature is in general multi-scale1 . E.g . in materials , defects and cracks occur on scales of millimeters to centimeters whereas the atomic processes responsible for such defects take place at much finer scales ( Belytschko & Song , 2010 ) . Local oscillations due to bonded interactions of atoms ( Smit , 1996 ) take place at time scales of femtoseconds ( 10−15s ) , whereas protein folding processes which can be relevant for e.g . drug discovery happen at time scales larger than milliseconds ( 10−3s ) . In Fluid Mechanics , turbulence phenomena are characterized by fine-scale spatiotemporal fluctuations which affect the coarse-scale response ( Laizet & Vassilicos , 2009 ) . In all of these cases , macroscopic observables are the result of microscopic phenomena and a better understanding of the interactions between the different scales would be highly beneficial for predicting the system ’ s evolution ( Givon et al. , 2004 ) . The identification of the different scales , their dynamics and connections however is a non-trivial task and is challenging from the perspective of statistical as well as physical modeling . 1With the term multiscale we refer to systems whose behavior arises from the synergy of two or more processes occurring at different ( spatio ) temporal scales . Very often these processes involve different physical descriptions and models ( i.e . they are also multi-physics ) . We refer to the description/model at the finer scale as fine-grained and to the description/model at the coarser scale as coarse-grained . 
In this paper we propose a novel physics-aware , probabilistic model order reduction framework with guaranteed stability that combines recent advances in statistical learning with a hierarchical architecture that promotes the discovery of interpretable , low-dimensional representations . We employ a generative state-space model with two layers of latent variables . The first describes the latent dynamics using a novel prior on the complex plane that guarantees stability and yields a clear distinction between fast and slow processes , the latter being responsible for the system ’ s long-term evolution . The second layer involves physically-motivated latent variables which infuse inductive bias , enable connections with the very high-dimensional observables and reduce the data requirements for training . The probabilistic formulation adopted enables the quantification of a crucial , and often neglected , component in any model compression process , i.e . the predictive uncertainty due to information loss . We finally want to emphasize that the problems of interest are Small Data ones due to the computational expense of the physical simulators . Hence the number of time-steps as well as the number of time-series used for training is small as compared to the dimension of the system and to the time-horizon over which predictions are sought . 2 PHYSICS-AWARE , PROBABILISTIC MODEL ORDER REDUCTION . Our data consists of N times-series { x ( i ) 0 : T } Ni=1 over T time-steps generated by a computational physics simulator . This can represent positions and velocities of each particle in a fluid or those of atoms in molecular dynamics . Their dimension is generally very high i.e . xt ∈M ⊂ Rf ( f > > 1 ) . In the context of state-space models , the goal is to find a lower-dimensional set of collective variables or latent generators zt and their associated dynamics . Given the difficulties associated with these tasks and the solutions that have been proposed in statistics and computational physics literature , we advocate the use of an intermediate layer of physically-motivated , lower-dimensional variables Xt ( e.g . density or velocity fields ) , the meaning of which will become precise in the next sections . These variables provide a coarse-grained description of the high-dimensional observables and imbue interpretability in the learned dynamics . Using Xt alone ( without zt ) would make it extremely difficult to enforce long-term stability ( see Appendix H.2 ) while ensuring sufficient complexity in the learned dynamics ( Felsberger & Koutsourelakis , 2019 ; Champion et al. , 2019 ) . Furthermore and even if the dynamics of xt are first-order Markovian , this is not necessarily the case forXt ( Chorin & Stinis , 2007 ) . The latent variables zt therefore effectively correspond to a nonlinear coordinate transformation that yields not only Markovian but also stable dynamics ( Gin et al. , 2019 ) . The general framework is summarized in Figure 1 and we provide details in the next section . 2.1 MODEL STRUCTURE . Our model consists of three levels . At the first level , we have the latent variables zt which are connected with Xt in the second layer through a probabilistic map G. The physical variables Xt are finally connected to the high-dimensional observables through another probabilistic map F . We parametrize F , G with deep neural networks and denote by θ1 and θ2 the corresponding parameters ( see Appendix D ) . 
In particular , we postulate the following relations :

z_{t,j} = z_{t-1,j} exp(λ_j) + σ_j ε_{t,j} ,   λ_j ∈ C ,  ε_{t,j} ∼ CN(0, 1) ,  j = 1, 2, . . . , h   (1)
X_t = G(z_t, θ_1)   (2)
x_t = F(X_t, θ_2)   (3)

We assume that the latent variables z_t are complex-valued and a priori independent . Complex variables were chosen as their evolution includes a harmonic component , which is observed in many physical systems . In Appendix H.1 we present results with real-valued latent variables z_{t,j} and illustrate their limitations . We model their dynamics with a discretized Ornstein-Uhlenbeck process on the complex plane with initial conditions z_{0,j} ∼ CN(0, σ²_{0,j})² . The parameters associated with this level are denoted summarily by θ_0 = { σ²_{0,j} , σ²_j , λ_j }_{j=1}^h . These , along with θ_1 , θ_2 mentioned earlier , and the state variables X_t and z_t have to be inferred from the data x_t . We explain each of the aforementioned components in the sequel .

2.1.1 STABLE LOW-DIMENSIONAL DYNAMICS . While the physical systems ( e.g . particle dynamics ) of interest are highly non-stationary , they generally converge to equilibrium in the long term . We enforce long-term stability here by ensuring that the real part of the λ_j 's in Equation ( 1 ) is negative , i.e . :

λ_j = Re(λ_j) + i Im(λ_j)  with  Re(λ_j) < 0   (4)

which guarantees first- and second-order stability , i.e . the mean as well as the variance are bounded at all time steps . The transition density of each process z_{t,j} is given by :

p(z_{t,j} | z_{t-1,j}) = N( [Re(z_{t,j}), Im(z_{t,j})]ᵀ | s_j R_j [Re(z_{t-1,j}), Im(z_{t-1,j})]ᵀ , (σ²_j / 2) I )   (5)

where the orthogonal matrix R_j depends on the imaginary part of λ_j :

R_j = [ cos(Im(λ_j))  −sin(Im(λ_j)) ; sin(Im(λ_j))  cos(Im(λ_j)) ]   (6)

and the decay rate s_j depends on the real part of λ_j :

s_j = exp(Re(λ_j))   (7)

i.e . the closer to zero the latter is , the " slower " the evolution of the corresponding process is . As in probabilistic Slow Feature Analysis ( SFA ) ( Turner & Sahani , 2007 ; Zafeiriou et al. , 2015 ) , we set σ²_j = 1 − exp(2 Re(λ_j)) = 1 − s²_j and σ²_{0,j} = 1 . As a consequence , a priori , the latent dynamics are stationary³ and an ordering of the processes z_{t,j} is possible on the basis of Re(λ_j) . Hence the only independent parameters are the λ_j , the imaginary part of which can account for periodic effects in the latent dynamics ( see Appendix B ) . The joint density of z_t can finally be expressed as :

p(z_{0:T}) = ∏_{j=1}^h ( ∏_{t=1}^T p(z_{t,j} | z_{t-1,j}, θ_0) · p(z_{0,j} | θ_0) )   (8)

The transition density between states at non-neighbouring time instants is also available analytically and is useful for training on longer trajectories or in cases of missing data . Details can be found in Appendix B .

²A short review of complex normal distributions , denoted by CN , can be found in Appendix A . ³More details can be found in Appendix B .

2.1.2 PROBABILISTIC GENERATIVE MAPPING . We employ fully probabilistic maps between the different layers which involve two conditional densities based on Equations ( 2 ) and ( 3 ) , i.e . :

p(x_t | X_t, θ_2)  and  p(X_t | z_t, θ_1)   (9)

In contrast to the majority of physics-motivated papers ( Chorin & Stinis , 2007 ; Champion et al. , 2019 ) as well as those based on transfer operators
( Klus et al. , 2018 ) , we note that the generative structure adopted does not require the prescription of a restriction operator ( or encoder ) , and the reduced variables need not be selected a priori but rather are adapted to best reconstruct the observables . The splitting of the generative mapping into two parts through the introduction of the intermediate variables X_t has several advantages . Firstly , known physical dependencies between the data x and the physical variables X can be taken into account , which reduces the complexity of the associated maps and the total number of parameters . For instance , in the case of particle simulations where X represents a density or velocity field , i.e . it provides a coarsened or averaged description of the fine-scale observables , it can be used to ( probabilistically ) reconstruct the positions or velocities of the particles . This physical information can be used to compensate for the lack of data when only a few training sequences are available ( Small Data ) and can be seen as a strong prior for the model order reduction framework . Due to the lower dimension of the associated variables , the generative map between z_t and X_t can be more easily learned even with few training samples . Lastly , the inferred physical variables X can provide insight and interpretability to the analysis of the physical system .
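As an illustration of the prior in Eqs. (1) and (4)-(7) and of the two-level decoding in Eqs. (2)-(3) and (9), the following sketch simulates the stable complex OU latent paths and pushes them through placeholder networks standing in for G and F; all architectures and sizes are assumptions, not the paper's, and no inference is performed here.

```python
import math
import torch
import torch.nn as nn

def sample_latent_paths(lam, T):
    """lam: complex tensor (h,) with Re(lam) < 0; returns latent paths z of shape (T + 1, h)."""
    h = lam.shape[0]
    s = torch.exp(lam.real)                          # decay rates s_j = exp(Re(lambda_j)), Eq. (7)
    sigma = torch.sqrt(1.0 - s ** 2)                 # SFA-style choice: sigma_j^2 = 1 - s_j^2
    z = torch.zeros(T + 1, h, dtype=torch.cfloat)
    z[0] = (torch.randn(h) + 1j * torch.randn(h)) / math.sqrt(2)     # z_0 ~ CN(0, 1)
    for t in range(T):
        eps = (torch.randn(h) + 1j * torch.randn(h)) / math.sqrt(2)  # eps_{t,j} ~ CN(0, 1)
        z[t + 1] = z[t] * torch.exp(lam) + sigma * eps               # Eq. (1)
    return z

# Placeholder decoders standing in for X_t = G(z_t, theta_1) and x_t = F(X_t, theta_2).
h, dim_X, dim_x = 4, 16, 256
G = nn.Sequential(nn.Linear(2 * h, 64), nn.ReLU(), nn.Linear(64, dim_X))
F = nn.Sequential(nn.Linear(dim_X, 128), nn.ReLU(), nn.Linear(128, dim_x))

lam = torch.complex(-torch.rand(h), 2 * math.pi * torch.rand(h))     # Re(lambda_j) < 0 => stable
z = sample_latent_paths(lam, T=100)
z_feats = torch.cat([z.real, z.imag], dim=-1)        # real/imaginary parts fed to the network
X = G(z_feats)                                       # coarse-grained physical variables
x = F(X)                                             # fine-grained observables
```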
The paper presents a generative approach to modeling high-dimensional, nonlinear dynamical systems such as those found in fluid mechanics. The authors provide a physics-motivated hierarchical model for high-dimensional time series and a variational inference method for inferring the latent variables and dynamical-system parameters. They demonstrate its performance on simulated fluid-mechanics prediction tasks.
SP:7f210a3382b6840f84b182f0c72b9de2e89b0fe0
The paper proposes a generative model for learning a low-dimensional representation of a dynamical system from high-dimensional observations. The novelty of the approach is to introduce two latent spaces, one representing the standard physics-agnostic latent space learned from the data and one representing physics-motivated variables. The goal is to learn the dynamics of the first layer, denoted by z_t, which is a coarse-grained representation of the dynamical system and mostly captures the slow processes that drive the system.
SP:7f210a3382b6840f84b182f0c72b9de2e89b0fe0
Learning Disentangled Representations for Image Translation
1 INTRODUCTION . Learning disentangled representations from a set of observations is a fundamental problem in machine learning . Such representations can facilitate generalization to downstream discriminative and generative tasks as well as improving interpretability ( Hsu et al. , 2017 ) , reasoning ( van Steenkiste et al. , 2019 ) and fairness ( Creager et al. , 2019 ) . Recent advances have contributed to various tasks such as novel image synthesis ( Zhu et al. , 2018 ) and person re-identification ( Eom & Ham , 2019 ) . Image translation is an extensively researched task that benefits from disentanglement . Its goal is to generate an analogous image in a target domain ( e.g . cats ) given an input image in a source domain ( e.g . dogs ) . Although this task is generally poorly specified , it is often satisfied under the assumption that images in different domains share common attributes ( e.g . head pose ) which can be transferred during translation - we name those content . In many cases , the class ( domain ) and common attributes do not uniquely specify the target image e.g . there are many dog breeds with the same head pose . This multi-modal translation motivates the specification of the particular classspecific attributes that we wish the target image to have - we name those style . The ability to transfer the content of a source image to a target class and style has been the objective of several methods e.g . MUNIT ( Huang et al. , 2018 ) , FUNIT ( Liu et al. , 2019 ) and StartGAN-v2 ( Choi et al. , 2020 ) . Unfortunately , we show that despite their visually pleasing results , the translated images still retain many class-specific attributes of the original image resulting in limited translation quality . For example , when translating dogs to wild animals , current methods are prone to transfer facial shapes which are unique to dogs and should not be transferred precisely to wild animals . As demonstrated in Fig . 1 , our model transfers the semantic head pose more reliably . In this work , we analyze the class-supervised setting and present a principled objective for disentangling image class and attributes . We explain why LORD , introduced by ( Gabbay & Hoshen , 2020 ) , can not be applied for multi-modal translation . We then show that introducing an additional style representation overcomes this issue and propose a practical method for high-fidelity image translation by learning disentangled representations . Our method achieves this in two stages ; i ) Disentanglement : disentangled representation learning in a non-adversarial framework , leveraging latent optimization and well-motivated content and style bottlenecks . ii ) Synthesis : the disentangled representations learned in the previous stage are used to ” supervise ” a synthesis network that generalizes to unseen images and classes . As synthesis network training is well-conditioned , we can effectively incorporate a GAN loss resulting in a high-fidelity image translation model . Our approach illustrates that adversarial optimization , which is typically used for domain translation , is not necessary for disentanglement , and its main utility lies in generating perceptually pleasing images . Our model learns disentangled representations and achieves better translation quality and output diversity than current methods . Our contributions are : i ) Introducing a non-adversarial disentanglement method that enables multimodal solutions . ii ) Learning statistically disentangled representations . 
iii ) Extending domain translation methods to cases with many ( 10k ) domains . iv ) State-of-the-art results in image-translation . 1.1 RELATED WORK . Image Translation Translating the content of images across different domains has attracted much attention . In the unsupervised setting , CycleGAN ( Zhu et al. , 2017 ) introduces a cycle consistency loss to encourage translated images preserves the domain-invariant attributes ( e.g . pose ) of the source image . MUNIT ( Huang et al. , 2018 ) recognized that a given content image could be transferred to many different styles ( e.g . colors and textures ) in a target domain and extends UNIT ( Huang & Belongie , 2017 ) to learn multi-modal mappings by learning style representations . DRIT ( Lee et al. , 2018 ) tackles the same setting using an adversarial constraint at the representation level . MSGAN ( Mao et al. , 2019 ) added a regularization term to prevent mode collapse . StarGAN-v2 ( Choi et al. , 2020 ) and DMIT ( Yu et al. , 2019 ) extend previous frameworks to translation across more than two domains . FUNIT ( Liu et al. , 2019 ) further allows translation to novel domains . Class-Supervised Disentanglement In this parallel line of work , the goal is to anchor the semantics of all the images within each class into a separate class representation while modeling all the remaining class-independent attributes by a content representation . Several methods encourage disentanglement by adversarial constraints ( Denton et al. , 2017 ; Szabó et al. , 2018 ; Mathieu et al. , 2016 ) while other rely on cycle consistency ( Harsh Jha et al. , 2018 ) or group accumulation ( Bouchacourt et al. , 2018 ) . LORD ( Gabbay & Hoshen , 2020 ) takes a non-adversarial approach and trains a generative model while directly optimizing over class and content codes . Most works in this area demonstrate domain translation results on simple datasets but not in the multi-modal ( many-tomany ) settings . Moreover , their focus is to achieve disentanglement at the representation level rather than designing architectures for high-resolution image translation resulting in weak performance on competitive benchmarks . In this work , we draw inspiration from LORD in relying on the inductive bias conferred by latent optimization to learn a disentangled content representation . In contrast to LORD , we tackle the multi-modal image translation setting by modeling style . Moreover , we add an adversarial term in the synthesis network that increases the image quality and resolution . 2 BACKGROUND : REPRESENTATION LEARNING IN IMAGE-TRANSLATION . Image translation takes as input a set of N images and corresponding class labels ( x1 , y1 ) , ( x2 , y2 ) , ... , ( xN , yN ) . Let us assume that an image xi is fully specified by its class yi and attributes ai . As a motivational example , let us consider the images xi to be of animals , and the class label yi denotes the species . Attributes ai , may include attributes aci common to all classes such as head angle , and class-specific attributes asi such as dog or cat breed . An unknown function G∗ maps yi , aci and a s i to the image xi : xi = G ∗ ( yi , a s i , a c i ) ( 1 ) The goal of image translation is to replace the common attributes aci of target xi by a c j of source xj : xij = G ∗ ( yi , a s i , a c j ) ( 2 ) The main challenge is that during training , only the class yi of each image is given , but the attributes aci and a s i are unknown . 
We define three representations corresponding to each of the physical properties above - the class embedding eyi represents the class yi , the content code ci represents the common attributes aci and the style code si represents the class-specific attributes a s i . The computational task is to learn the representations eyi , si , ci such that they faithfully encode their corresponding physical properties of image xi and are sufficient to reconstruct the image using a generator G : xi = G ( eyi , si , ci ) ( 3 ) This constraint however is insufficient to specify the representations uniquely . A popular constraint , utilizes the fact that common attributes aci are independent of the class yi . In the motivating animals example , we assume that head pose acts similarly on cats or dogs . It therefore requires independence between the content code ci and class yi . Adversarial methods : Adversarial methods implement this constraint by training a domain confusion discriminator on the translated image xij = G ( eyi , si , cj ) . The discriminator attempts to distinguish translated images from original images of the target class . Unfortunately , we empirically show that state-of-the-art methods do not learn content codes ci that are disentangled from the class yi . The consequence is that ci transfers not only common attributes but also class-dependent attributes . We hypothesize that this failure is due to the challenging adversarial optimization . In order to constrain the style si , current methods ( e.g . StarGAN-v2 , FUNIT ) rely on localitypreserving architectures that bias the content code ci to represent the structure , while the style operates in a global manner per channel and is applied at a relatively deep level within the generator ( typically from 16×16 spatial resolution ) . Unfortunately , such style is unable to model class-specific attributes asi of spatial nature e.g . the spatial facial shape of different dog breeds . Non-adversarial methods : LORD is a non-adversarial approach for disentangled representation learning . It assumes that all attributes are common to all classes I ( ac ; y ) = 0 and H ( as ) = 0 - where I denotes mutual information and H denotes entropy . It therefore learns only class and content representations eyi and ci but not style codes ( si = 0 ) . In order to learn disentangled class and content representations it uses a simple but effective constraint of minimal information in the content codes ci . After learned disentangled representations eyi and ci , LORD learns a synthesis network which generalizes to unseen images and classes . When the assumptions of LORD are satisfied ( H ( as ) = 0 ) , the content codes ci are indeed shown to recover the information of the common attributes aci . However , when class-specific attributes exist , LORD does not achieve its goals . The reason is that the content code ci , contains the information of both the common aci and class specific a s i attributes ( I ( a s ; c ) > 0 ) . This has two issues : i ) as the content code now contains class-specific information ( e.g . cat breed ) , it is no longer disentangled from the class ( we can predict if the content is of a cat ) ; ii ) when transferring content codes across classes ( e.g . cats to dogs ) , we are missing the required class specific information in the target class ( i.e . given the class code of edog and content of a cat ccat , the generator is not provided with the information of what dog breed to generate ) . 
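To make the roles of the three codes concrete, the sketch below sets up hypothetical containers for e_y, s_i and c_i and shows how translation amounts to recombining them: the class embedding and style of the target image together with the content code of the source. The generator, style encoder and all dimensions are illustrative stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn

n_classes, n_images, d_class, d_style, d_content = 10, 5000, 128, 64, 128
labels = torch.randint(0, n_classes, (n_images,))           # class y_i of each training image

class_emb = nn.Embedding(n_classes, d_class)                # e_y, one vector per class
content_codes = nn.Parameter(0.01 * torch.randn(n_images, d_content))  # c_i, optimized directly
style_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, d_style))   # E_s (toy)
generator = nn.Sequential(nn.Linear(d_class + d_style + d_content, 512),
                          nn.ReLU(), nn.Linear(512, 3 * 64 * 64))              # G (toy)

def generate(y, s, c):                                      # x = G(e_y, s, c), Eq. (3)
    return generator(torch.cat([class_emb(y), s, c], dim=-1))

def translate(i, j, x_i):
    """Keep the class and style (class-specific attributes) of image i, take the content of image j."""
    s_i = style_encoder(x_i)                                # class-specific attributes of the target
    c_j = content_codes[j].unsqueeze(0)                     # common attributes of the source
    return generate(labels[i].unsqueeze(0), s_i, c_j)
```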
Due to its effective non-adversarial approach , LORD learns disentangled representations and achieves state-of-the-art results on uni-modal synthetic and low-res datasets . However , due to i ) ignoring class-specific attributes as highlighted above ; ii ) using a non-adversarial synthesis network , it does not produce high fidelity images and can not scale to high-resolution . 3 OVERLORD : PRINCIPLED DISENTANGLEMENT IN IMAGE TRANSLATION . Based on the above analysis , we present our disentangled image-translation method , OverLORD . 3.1 LEARNING A DISENTANGLEMENT MODEL . In order to learn disentangled representations for class , content and style , we propose a nonadversarial framework with several principles . Reconstruction : As the image should be fully specified by the representations eyi , si , ci , we employ a reconstruction term : Lrec = ` ( G ( eyi , si , ci ) , xi ) - where ` is a measure of similarity between images ( we use the VGG perceptual loss ) . Content bottleneck : In order to constrain the amount of information in each content variable ci , we parameterize it as a noisy channel consisting of a vector c′i and an additive Normal noise z ∼ N ( 0 , I ) , ci = c′i + z . Using the Shannon-Hartley theorem , the information capacity of the channel is a function of the signal-to-noise ratio . For the noisy channel ci , the SNR ratio is given by ‖c′i‖2 . We therefore define the content bottleneck loss as Lcb = ∑ i ‖c′i‖2 . Style : We train a style encoderEs : X −→ S to infer the class-specific attributes as . To encourage invariance to the common attributes ac , the input image first undergoes a random transformation . This creates a transformed version xtransi of the input image that shares the same class specific attributes as , but exhibits random common attributes ac ( as style should be invariant to these attributes ) . The nature of the transformation is setting dependent e.g . if common attributes include spatial locations , then crop and rotation transformations will remove spatial attributes from the transformed image and make the style code si invariant to them . si = Es ( x trans i ) ( 4 ) Our complete objective can be summarized as follows : min c′i , eyi , Es , G Ldisent = ∑ i ` ( G ( eyi , Es ( x trans i ) , c ′ i + z ) , xi ) + λcb‖c′i‖2 z ∼ N ( 0 , I ) ( 5 ) There are several fundamental differences from the cVAE objective : i ) the noise variance is a fixed hyper-parameter and is not learned ; ii ) there is no content encoder , instead the latent content code for each variable is learned directly , in an unamortized way often referred to as latent optimization ; iii ) the input to the style encoder is a transformed version of the image . Gabbay & Hoshen ( 2020 ) discovered that latent optimization improves disentanglement significantly over encoder-based methods . The difference lies in the initialization as latent optimization starts with no class-content correlation ( code initialization is IID ) while a content encoder starts with near perfect correlation ( class can be predicted from the output of a randomly initialized content encoder ) . We further elaborate on latent optimization and its inductive bias in Appendix A.3 .
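Continuing the toy modules above, a single optimization step of the disentanglement objective in Eq. (5) could look as follows. The VGG perceptual loss is replaced by a plain MSE and the random transformation by a fixed crop-and-resize, both purely for brevity; the bottleneck weight and all other settings are assumptions rather than the paper's values.

```python
import torch
import torch.nn.functional as F_nn

lambda_cb = 0.001
params = list(generator.parameters()) + list(style_encoder.parameters()) \
         + list(class_emb.parameters()) + [content_codes]
opt = torch.optim.Adam(params, lr=1e-3)

def random_transform(x):                          # stand-in for the setting-dependent augmentation
    x = x.view(-1, 3, 64, 64)
    crop = x[:, :, 8:56, 8:56]                    # crop removes spatial/common attributes (fixed here)
    return F_nn.interpolate(crop, size=(64, 64)).flatten(1)

def train_step(idx, x, y):                        # idx: image indices, x: images, y: class labels
    opt.zero_grad()
    c = content_codes[idx] + torch.randn_like(content_codes[idx])    # noisy channel c = c' + z
    s = style_encoder(random_transform(x))        # style inferred from a transformed view, Eq. (4)
    x_hat = generate(y, s, c)
    loss = F_nn.mse_loss(x_hat, x.flatten(1)) \
           + lambda_cb * content_codes[idx].norm(dim=-1).sum()        # content bottleneck term
    loss.backward()
    opt.step()
    return loss.item()
```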
The paper presents a principled approach to style transfer by disentangling class-specific attributes from common (i.e., class-independent) attributes. To do so, the paper leverages the formulation of a recently proposed disentangling approach called "LORD". The proposed approach, called OverLORD, includes two main augmentations to LORD. The first is a style encoder that learns a latent code for class-specific attributes, and the second is adversarial learning in the second stage for high-quality generation of style-transferred images. Results are shown on three datasets: AFHQ (dog, cat, wildlife), CelebA (human faces), and CelebA-HQ (high-resolution human faces), and are compelling in both qualitative and quantitative comparisons.
SP:e1b66646c8acdfa00bdfc0ec8740458d7e8b2d83
This paper proposes a novel approach named OverLORD to learn disentangled representations for image class and attributes. To tackle the problem of previous methods that the learned content and class are often entangled, the authors propose to disentangle image representations to class and attributes, and further disentangle attributes to common attributes among all classes (content), and class-specific attributes (style). It uses the idea from LORD to disentangle the image representations, and extends LORD to not only common attributes (content), but also class-specific attributes (style). In this way, it is able to transfer the common attributes while preserving the class and class-specific attributes. Experiments are conducted on animal faces and human faces datasets, and the proposed approach is able to preserve the class-specific attributes (e.g., shape or identity of the face) better than previous image-to-image translation methods that only disentangle style and content.
SP:e1b66646c8acdfa00bdfc0ec8740458d7e8b2d83
Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies
1 INTRODUCTION . Most machine learning models , including neural networks , operate on vector spaces . Therefore , when working with discrete objects such as text , we must define a method of converting objects into vectors . The standard way to map objects to continuous representations involves : 1 ) defining the vocabulary V = { v1 , ... , v∣V ∣ } as the set of all objects , and 2 ) learning a ∣V ∣ × d embedding matrix that defines a d dimensional continuous representation for each object . This method has two main shortcomings . Firstly , when ∣V ∣ is large ( e.g. , million of words/users/URLs ) , this embedding matrix does not scale elegantly and may constitute up to 80 % of all trainable parameters ( Jozefowicz et al. , 2016 ) . Secondly , despite being discrete , these objects usually have underlying structures such as natural groupings and similarities among them . Assigning each object to an individual vector assumes independence and foregoes opportunities for statistical strength sharing . As a result , there has been a large amount of interest in learning sparse interdependent representations for large vocabularies rather than the full embedding matrix for cheaper training , storage , and inference . In this paper , we propose a simple method to learn sparse representations that uses a global set of vectors , which we call the anchors , and expresses the embeddings of discrete objects as a sparse linear combination of these anchors , as shown in Figure 1 . One can consider these anchors to represent latent topics or concepts . Therefore , we call the resulting method ANCHOR & TRANSFORM ( ANT ) . The approach is reminiscent of low-rank and sparse coding approaches , however , surprisingly in the literature these methods were not elegantly integrated with deep networks . Competitive attempts are often complex ( e.g. , optimized with RL ( Joglekar et al. , 2019 ) ) , involve multiple training stages ( Ginart et al. , 2019 ; Liu et al. , 2017 ) , or require post-processing ( Svenstrup et al. , 2017 ; Guo et al. , 2017 ; Aharon et al. , 2006 ; Awasthi & Vijayaraghavan , 2018 ) . We derive a simple optimization objective which learns these anchors and sparse transformations in an end-to-end manner . ANT is ∗work done during an internship at Google . scalable , flexible , and allows the user flexibility in defining these anchors and adding more constraints on the transformations , possibly in a domain/task specific manner . We find that our proposed method demonstrates stronger performance with fewer parameters ( up to 40× compression ) on multiple tasks ( text classification , language modeling , and recommendation ) as compared to existing baselines . We further provide a statistical interpretation of our algorithm as a Bayesian nonparametric ( BNP ) prior for neural embeddings that encourages sparsity and leverages natural groupings among objects . Specifically , we show its equivalence to Indian Buffet Process ( IBP ; Griffiths & Ghahramani ( 2005 ) ) prior for embedding matrices . While such BNP priors have proven to be a flexible tools in graphical models to encourage hierarchies ( Teh & Jordan , 2010 ) , sparsity ( Knowles & Ghahramani , 2011 ) , and other structural constraints ( Roy et al. , 2016 ) , these inference methods are usually complex , hand designed for each setup , and non-differentiable . Our proposed method opens the door towards integrating priors ( e.g. , IBP ) with neural representation learning . 
These theoretical connections lead to practical insights: by asymptotically analyzing the likelihood of our model in the small variance limit using Small Variance Asymptotics (SVA; Roweis (1998)), we obtain a natural extension, NBANT, that automatically learns the optimal number of anchors to balance performance and compression, instead of having to tune it as a hyperparameter. 2 RELATED WORK. Prior work in learning sparse embeddings of discrete structures falls into three categories. Matrix compression techniques such as low-rank approximations (Acharya et al., 2019; Grachev et al., 2019; Markovsky, 2011), quantizing (Han et al., 2016), pruning (Anwar et al., 2017; Dong et al., 2017; Wen et al., 2016), or hashing (Chen et al., 2015; Guo et al., 2017; Qi et al., 2017) have been applied to embedding matrices. However, it is not trivial to learn sparse low-rank representations of large matrices, especially in conjunction with neural networks. To the best of our knowledge, we are the first to present the integration of sparse low-rank representations with deep networks, together with their non-parametric extension, and to demonstrate its effectiveness on many tasks in balancing the tradeoff between performance and sparsity. We also outperform many baselines based on low-rank compression (Grachev et al., 2019), sparse coding (Chen et al., 2016b), and pruning (Liu et al., 2017). Reducing representation size: these methods reduce the dimension d for different objects. Chen et al. (2016a) divides the embedding into buckets which are assigned to objects in order of importance, Joglekar et al. (2019) learns d by solving a discrete optimization problem with RL, and Baevski & Auli (2019) reduces dimensions for rarer words. These methods resort to RL or are difficult to tune with many hyperparameters. Each object is also modeled independently without information sharing. Task-specific methods include learning embeddings of only common words for language modeling (Chen et al., 2016b; Luong et al., 2015) and vocabulary selection for text classification (Chen et al., 2019). Other methods reconstruct pre-trained embeddings using codebook learning (Chen et al., 2018; Shu & Nakayama, 2018) or low-rank tensors (Sedov & Yang, 2018). However, these methods cannot be applied to general tasks. For example, methods that only model a subset of objects cannot be used for retrieval because they would never retrieve the dropped objects. Rare objects might be highly relevant to a few users, so it might not be ideal to ignore them completely. Similarly, task-specific methods such as subword (Bojanowski et al., 2017) and wordpiece (Wu et al., 2016) embeddings, while useful for text, do not generalize to applications such as item and query retrieval. 3 ANCHOR & TRANSFORM. Suppose we are presented with data X ∈ V^N, Y ∈ R^{N×c} drawn from some joint distribution p(x, y), where the support of x is over a discrete set V (the vocabulary) and N is the size of the training set. The entries in Y can be either discrete (classification) or continuous (regression). The goal is to learn a d-dimensional representation {e1, ..., e|V|} for each object by learning an embedding matrix E ∈ R^{|V|×d} where row i is the representation e_i of object i. A model f_θ with parameters θ is then used to predict y, i.e., ŷ_i = f_θ(x_i; E) = f_θ(E[x_i]).
At a high level, to encourage statistical sharing between objects, we assume that the embedding of each object is obtained by linearly superimposing a small set of anchor objects. For example, when the objects considered are words, the anchors may represent latent abstract concepts (of unknown cardinality) and each word is a weighted mixture of different concepts. More generally, the model assumes that there are some unknown number of anchors, A = {a_1, ..., a_|A|}. The embedding e_i for object i is generated by first choosing whether the object possesses each anchor a_k ∈ R^d. The selected anchors then each contribute some weight to the representation of object i. Therefore, instead of learning the large embedding matrix E directly, ANT consists of two components: 1) ANCHOR: learn embeddings A ∈ R^{|A|×d} of a small set of anchor objects A = {a_1, ..., a_|A|}, |A| ≪ |V|, that are representative of all discrete objects. 2) TRANSFORM: learn a sparse transformation T from A to E; each of the discrete objects is induced by some transformation from (a few) anchor objects. To ensure sparsity, we want nnz(T) ≪ |V| × d. A and T are trained end-to-end for task-specific representations. The full procedure is summarized in Algorithm 1.

Algorithm 1: ANCHOR & TRANSFORM algorithm for learning sparse representations of discrete objects.
1: Anchor: initialize anchor embeddings A.
2: Transform: initialize T as a sparse matrix.
3: Optionally + domain info: initialize domain sparsity matrix S(G) as a sparse matrix (see Appendix F).
4: for each batch (X, Y) do
5:   Compute loss L = Σ_i D_φ(y_i, f_θ(x_i; TA))
6:   A, T, θ = UPDATE(∇L, η)
7:   T = max{(T − ηλ_2) ⊙ S(G) + T ⊙ (1 − S(G)), 0}
8: end for
9: return anchor embeddings A and transformations T.

To enforce sparsity, we use an ℓ1 penalty on T and constrain its domain to be non-negative to reduce redundancy in transformations (positive and negative entries canceling out):

$$\min_{\mathbf{T} \ge 0, \mathbf{A}, \theta} \; \sum_i D_\phi\big(y_i, f_\theta(x_i; \mathbf{T}\mathbf{A})\big) + \lambda_2 \|\mathbf{T}\|_1, \qquad (1)$$

where D_φ is a suitable Bregman divergence between predicted and true labels, and ‖T‖_1 denotes the sum of absolute values. Most deep learning frameworks directly use subgradient descent to solve eq (1), but unfortunately, such an approach will not yield sparsity. Instead, we perform optimization by proximal gradient descent (rather than approximate subgradient methods, which have poorer convergence around non-smooth regions, e.g., sparse regions) to ensure exact zero entries in T:

$$\mathbf{A}^{t+1}, \mathbf{T}^{t+1}, \theta^{t+1} = \mathrm{UPDATE}\Big(\nabla \sum_i D_\phi\big(y_i, f_\theta(x_i; \mathbf{T}^t\mathbf{A}^t)\big), \eta\Big), \qquad (2)$$
$$\mathbf{T}^{t+1} = \mathrm{PROX}_{\eta\lambda_2}\big(\mathbf{T}^{t+1}\big) = \max\big(\mathbf{T}^{t+1} - \eta\lambda_2, 0\big), \qquad (3)$$

where η is the learning rate, and UPDATE is a gradient update rule (e.g., SGD (Lecun et al., 1998), ADAM (Kingma & Ba, 2015), YOGI (Zaheer et al., 2018)). PROX_{ηλ_2} is a composition of two proximal operators: 1) soft-thresholding (Beck & Teboulle, 2009) at ηλ_2, which results from subgradient descent on λ_2‖T‖_1, and 2) max(·, 0) due to the non-negative domain for T. We implement this proximal operator on top of the YOGI optimizer for our experiments. Together, equations (2) and (3) give us an iterative process for end-to-end learning of A and T along with θ for specific tasks (Algorithm 1). T is implemented as a sparse matrix by only storing its non-zero entries and indices.
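To make the training loop concrete, below is a minimal PyTorch-style sketch of one step of Algorithm 1; it is an illustration under assumptions, not the authors' released code. The smooth loss is handled by a standard optimizer (the UPDATE of eq. (2)), and the ℓ1 penalty plus non-negativity are applied through the proximal step of eq. (3). For clarity the sketch keeps T dense, uses a made-up mean-pooling classifier as f_θ, uses Adam instead of YOGI, and applies a fixed threshold ηλ_2 even though Adam's effective step size is adaptive.

```python
import torch

# Hypothetical sizes and task: |V| = 5000 objects, d = 64, 32 anchors,
# binary classification with a mean-pooled bag-of-objects model as f_theta.
V, d, num_anchors, num_classes = 5000, 64, 32, 2
A = torch.randn(num_anchors, d, requires_grad=True)          # anchor embeddings
T = torch.rand(V, num_anchors, requires_grad=True)           # transform (dense here for clarity)
clf = torch.nn.Linear(d, num_classes)
opt = torch.optim.Adam([A, T, *clf.parameters()], lr=1e-3)   # UPDATE rule of eq. (2)
lam, eta = 1e-4, 1e-3                                         # lambda_2 and learning rate

def train_step(x, y):
    """x: LongTensor (batch, seq_len) of object ids; y: LongTensor (batch,) of labels."""
    E = T @ A                                  # implicit |V| x d embedding matrix
    feats = E[x].mean(dim=1)                   # look up and mean-pool object embeddings
    loss = torch.nn.functional.cross_entropy(clf(feats), y)   # D_phi(y_i, f_theta(x_i; TA))
    opt.zero_grad()
    loss.backward()
    opt.step()                                 # gradient step on A, T, theta (eq. (2))
    with torch.no_grad():                      # proximal step (eq. (3)):
        T.copy_(torch.clamp(T - eta * lam, min=0.0))  # soft-threshold, then clamp at zero
    return loss.item()

x = torch.randint(0, V, (8, 20))
y = torch.randint(0, num_classes, (8,))
print(train_step(x, y))
```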
Since nnz(T) ≪ |V| × d, this makes storage of T extremely efficient as compared to traditional approaches that compute the entire |V| × d embedding matrix. We also provide implementation tips to further speed up training, and ways to combine ANT with existing speedup techniques like softmax sampling (Mikolov et al., 2013) or noise-contrastive estimation (Mnih & Teh, 2012), in Appendix H. After training, we only store |A| × d + nnz(T) ≪ |V| × d entries that define the complete embedding matrix, thereby using far fewer parameters than the traditional |V| × d matrix. General-purpose matrix compression techniques such as hashing (Qi et al., 2017), pruning (Dong et al., 2017), and quantizing (Han et al., 2016) are compatible with our method: the matrices A and T can be further compressed before storage. We first discuss practical methods for anchor selection (§3.1). In Appendix F we describe several ways to incorporate domain knowledge into the anchor selection and transform process. We also provide a statistical interpretation of ANT as a sparsity-promoting generative process using an IBP prior and derive approximate inference based on SVA (§3.2). This gives rise to a nonparametric version of ANT that automatically learns the optimal number of anchors.
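A back-of-the-envelope count (with assumed, illustrative sizes) shows where the savings come from; index storage for the sparse T adds some overhead on top of this raw entry count.

```python
# Hypothetical sizes: |V| = 100,000 objects, d = 300, |A| = 500 anchors,
# and on average 5 non-zero transform weights per object.
V, d, A, avg_nnz_per_obj = 100_000, 300, 500, 5

dense_entries = V * d                          # traditional |V| x d embedding table
nnz_T = V * avg_nnz_per_obj
ant_entries = A * d + nnz_T                    # |A| x d  +  nnz(T)

print(dense_entries)                           # 30,000,000
print(ant_entries)                             # 650,000
print(round(dense_entries / ant_entries, 1))   # ~46.2x fewer stored entries
```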
In this paper, the authors proposed a method to learn efficient representations of discrete tokens. They took a two-step approach: in step 1, they learn "full-fledged" embeddings for a subset of anchor tokens. In step 2, they learn a sparse matrix that relates all tokens to the set of chosen anchors. This two-step approach reduces the overall number of parameters. The sparse matrix T can also encode domain knowledge (e.g. knowledge graphs). In the experiment section, the authors showed that their approach achieves good performance on several language tasks, with far fewer parameters.
SP:111b19c01327c2eb1211e8ce7861378e76a64877
This paper proposes ANT to solve the problem of learning sparse embeddings instead of dense counterparts for tasks like text classification, language modeling and recommendation systems. When the vocabulary size |V| runs into several 100Ks or millions, it is impractical to store one dense vector per label. Hence the paper proposes to only store a few anchor/latent vectors (the matrix is A with |A| << |V|). All label vectors are expressed as linear combinations of a 'few' anchor vectors. To train this end-to-end, we need a transformation matrix T such that T*A = E (E is the |V| × d embedding matrix). T has to be structured, i.e., each row of T has to be sparse and positive only (although negative weights are also fine, I'm not sure if weight redundancy is that important).
SP:111b19c01327c2eb1211e8ce7861378e76a64877
A Maximum Mutual Information Framework for Multi-Agent Reinforcement Learning
1 INTRODUCTION . With the success of RL in the single-agent domain ( Mnih et al . ( 2015 ) ; Lillicrap et al . ( 2015 ) ) , MARL is being actively studied and applied to real-world problems such as traffic control systems and connected self-driving cars , which can be modeled as multi-agent systems requiring coordinated control ( Li et al . ( 2019 ) ; Andriotis & Papakonstantinou ( 2019 ) ) . The simplest approach to MARL is independent learning , which trains each agent independently while treating other agents as a part of the environment . One such example is independent Q-learning ( IQL ) ( Tan ( 1993 ) ) , which is an extension of Q-learning to multi-agent setting . However , this approach suffers from the problem of non-stationarity of the environment . A common solution to this problem is to use fully-centralized critic in the framework of centralized training with decentralized execution ( CTDE ) ( OroojlooyJadid & Hajinezhad ( 2019 ) ; Rashid et al . ( 2018 ) ) . For example , MADDPG ( Lowe et al . ( 2017 ) ) uses a centralized critic to train a decentralized policy for each agent , and COMA ( Foerster et al . ( 2018 ) ) uses a common centralized critic to train all decentralized policies . However , these approaches assume that decentralized policies are independent and hence the joint policy is the product of each agent ’ s policy . Such non-correlated factorization of the joint policy limits the agents to learn coordinated behavior due to negligence of the influence of other agents ( Wen et al . ( 2019 ) ; de Witt et al . ( 2019 ) ) . Thus , learning coordinated behavior is one of the fundamental problems in MARL ( Wen et al . ( 2019 ) ; Liu et al . ( 2020 ) ) . In this paper , we introduce a new framework for MARL to learn coordinated behavior under CTDE . Our framework is based on regularizing the expected cumulative reward with mutual information among agents ’ actions induced by injecting a latent variable . The intuition behind the proposed framework is that agents can coordinate with other agents if they know what other agents will do with high probability , and the dependence between action policies can be captured by the mutual information . High mutual information among actions means low uncertainty of other agents ’ actions . Hence , by regularizing the objective of the expected cumulative reward with mutual information among agents ’ actions , we can coordinate the behaviors of agents implicitly without explicit dependence enforcement . However , the optimization problem with the proposed objective function has several difficulties since we consider decentralized policies without explicit dependence or communication in the execution phase . In addition , optimizing mutual information is difficult because of the intractable conditional distribution . We circumvent these difficulties by exploiting the property of the latent variable injected to induce mutual information , and applying variational lower bound on the mutual information . With the proposed framework , we apply policy iteration by redefining value functions to propose the VM3-AC algorithm for MARL with coordinated behavior under CTDE . 2 RELATED WORK . Learning coordinated behavior in multi-agent systems is studied extensively in the MARL community . To promote coordination , some previous works used communication among agents ( Zhang & Lesser ( 2013 ) ; Foerster et al . ( 2016 ) ; Pesce & Montana ( 2019 ) ) . For example , Foerster et al . 
(2016) proposed the DIAL algorithm to learn a communication protocol that enables the agents to coordinate their behaviors. Instead of relying on communication, Jaques et al. (2018) proposed a social influence intrinsic reward, which is related to the mutual information between actions, to achieve coordination. The purpose of the social influence approach is similar to ours, and social influence yields good performance in social dilemma environments. The difference between our algorithm and the social influence approach will be explained in detail, and the effectiveness of our approach over the social influence approach will be shown in Section 6. Wang et al. (2019) proposed an intrinsic reward capturing influence based on the mutual information between an agent's current actions/states and other agents' next states. In addition, they proposed an intrinsic reward based on a decision-theoretic measure. Whereas they used mutual information to enhance exploration, our approach focuses on the mutual information between simultaneous actions, capturing policy correlation rather than influence. Besides, they considered independent policies, whereas policies are correlated in our approach. Some previous works considered correlated policies instead of independent policies. For example, Liu et al. (2020) proposed explicit modeling of correlated policies for multi-agent imitation learning, and Wen et al. (2019) proposed a recursive reasoning framework for MARL to maximize the expected return by decomposing the joint policy into the agent's own policy and the opponents' policies. Going beyond adopting correlated policies, our approach maximizes the mutual information between actions, which is a measure of correlation. Our framework can be interpreted as enhancing correlated exploration by increasing the entropy of the agent's own policy while decreasing the uncertainty about other agents' actions. Some previous works proposed other techniques to enhance correlated exploration (Zheng & Yue (2018); Mahajan et al. (2019)). For example, MAVEN addressed the poor exploration problem of QMIX by maximizing the mutual information between a latent variable and the observed trajectories (Mahajan et al. (2019)). However, MAVEN does not consider the correlation among policies. 3 BACKGROUND. We consider a Markov game (Littman (1994)), which is an extension of the Markov Decision Process (MDP) to the multi-agent setting. An $N$-agent Markov game is defined by an environment state space $S$, action spaces $A_1, \cdots, A_N$ for the $N$ agents, a state transition probability $T : S \times A \times S \to [0, 1]$, where $A = \prod_{i=1}^N A_i$ is the joint action space, and a reward function $R : S \times A \to \mathbb{R}$. At each time step $t$, agent $i$ executes action $a_t^i \in A_i$ based on state $s_t \in S$. The actions of all agents $a_t = (a_t^1, \cdots, a_t^N)$ yield the next state $s_{t+1}$ according to $T$ and a shared common reward $r_t$ according to $R$, under the assumption of fully cooperative MARL. The discounted return is defined as $R_t = \sum_{\tau=t}^{\infty} \gamma^{\tau} r_{\tau}$, where $\gamma \in [0, 1)$ is the discount factor. We assume CTDE incorporating the resource asymmetry between training and execution phases, widely considered in MARL (Lowe et al. (2017); Iqbal & Sha (2018); Foerster et al. (2018)).
Under CTDE, each agent can access all information, including the environment state and the observations and actions of other agents, in the training phase, whereas the policy of each agent can be conditioned only on its own action-observation history $\tau_t^i$ or observation $o_t^i$ in the execution phase. For a given joint policy $\pi = (\pi^1, \cdots, \pi^N)$, the goal of fully cooperative MARL is to find the optimal joint policy $\pi^*$ that maximizes the objective $J(\pi) = \mathbb{E}_\pi[R_0]$. Maximum Entropy RL. The goal of maximum entropy RL is to find an optimal policy that maximizes the entropy-regularized objective function, given by

$$J(\pi) = \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty} \gamma^t \big(r_t(s_t, a_t) + \alpha H(\pi(\cdot|s_t))\big)\Big]. \qquad (1)$$

It is known that this objective encourages the policy to enhance exploration in the state and action spaces and helps the policy avoid converging to a local minimum. Soft actor-critic (SAC), which is based on the maximum entropy RL principle, approximates soft policy iteration within the actor-critic method. SAC outperforms other deep RL algorithms in many continuous-action tasks (Haarnoja et al. (2018)). We can simply extend SAC to the multi-agent setting in the manner of independent learning: each agent trains its decentralized policy using a decentralized critic to maximize the weighted sum of the cumulative return and the entropy of its policy. We refer to this method as Independent SAC (I-SAC). Adopting the framework of CTDE, we can replace the decentralized critic with a centralized critic which incorporates the observations and actions of all agents. We refer to this method as multi-agent soft actor-critic (MA-SAC). Both I-SAC and MA-SAC are considered as baselines in the experiment section. 4 THE PROPOSED MAXIMUM MUTUAL INFORMATION FRAMEWORK. We assume that the environment is fully observable, i.e., each agent can observe the environment state $s_t$, for the theoretical development in this section, and will consider the partially observable environment for practical algorithm construction under CTDE in the next section. Under the proposed MMI framework, we aim to find the policy that maximizes the mutual information between actions in addition to the cumulative return. Thus, the MMI-regularized objective function for joint policy $\pi$ is given by

$$J(\pi) = \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty} \gamma^t \big(r_t(s_t, a_t) + \alpha \sum_{(i,j)} I(\pi^i(\cdot|s_t); \pi^j(\cdot|s_t))\big)\Big], \qquad (2)$$

where $a_t^i \sim \pi^i(\cdot|s_t)$ and $\alpha$ is the temperature parameter that controls the relative importance of the mutual information against the reward. As aforementioned, we assume decentralized policies and want the decentralized policies to exhibit coordinated behavior. Through the mutual information regularization in the proposed objective function (2), the policy of each agent is implicitly encouraged to coordinate with other agents' policies, without explicit dependency, by reducing the uncertainty about other agents' policies. This can be seen as follows: mutual information is expressed in terms of entropy and conditional entropy as

$$I(\pi^i(\cdot|s_t); \pi^j(\cdot|s_t)) = H(\pi^j(\cdot|s_t)) - H(\pi^j(\cdot|s_t) \,|\, \pi^i(\cdot|s_t)). \qquad (3)$$

If the knowledge of $\pi^i(\cdot|s_t)$ does not provide any information about $\pi^j(\cdot|s_t)$, the conditional entropy reduces to the unconditional entropy, i.e., $H(\pi^j(\cdot|s_t)|\pi^i(\cdot|s_t)) = H(\pi^j(\cdot|s_t))$, and the mutual information becomes zero.
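For a small discrete action space, the mutual-information term in (2) can be evaluated exactly from the joint action distribution using the decomposition (3). The NumPy sketch below (illustrative numbers only; the actual algorithm instead maximizes a variational lower bound because the conditional distribution is intractable) computes I(π¹; π²) for a perfectly correlated and an independent pair of binary-action policies.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def mutual_information(joint):
    """I(a1; a2) for a joint distribution over two discrete action sets,
    computed as H(a2) - H(a2 | a1), following the decomposition in eq. (3)."""
    p1 = joint.sum(axis=1)                      # marginal of agent 1's action
    p2 = joint.sum(axis=0)                      # marginal of agent 2's action
    h2 = entropy(p2)
    h2_given_1 = sum(p1[i] * entropy(joint[i] / p1[i])
                     for i in range(len(p1)) if p1[i] > 0)
    return h2 - h2_given_1

# Perfectly correlated actions: maximal MI (= log 2 for binary actions).
joint_corr = np.array([[0.5, 0.0],
                       [0.0, 0.5]])
# Independent actions: zero MI.
joint_indep = np.outer([0.5, 0.5], [0.5, 0.5])

print(mutual_information(joint_corr))    # ~0.693
print(mutual_information(joint_indep))   # ~0.0
```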
Maximizing mutual information is equivalent to minimizing the uncertainty about other agents' policies conditioned on the agent's own policy, which can lead the agent to learn coordinated behavior based on the reduced uncertainty about other agents' policies. However, direct optimization of the objective function (2) is not easy. Fig. 1(a) shows the causal diagram of the considered system model described in Section 3 in the case of two agents with decentralized policies. Since we consider the case of no explicit dependency, the two policy distributions can be expressed as $\pi^1(a_t^1|s_t)$ and $\pi^2(a_t^2|s_t)$. Then, for a given environment state $s_t$ observed by both agents, $\pi^1(a_t^1|s_t)$ and $\pi^2(a_t^2|s_t)$ are conditionally independent and the mutual information $I(\pi^1(\cdot|s_t); \pi^2(\cdot|s_t)) = 0$. Thus, the MMI objective (2) reduces to the standard MARL objective of only the accumulated return. In the following subsections, we present our approach to circumvent this difficulty and implement the MMI framework and its operation under CTDE.
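The difficulty just described, and the remedy sketched in the introduction (injecting a latent variable), can be seen in a toy example. The numbers below are made up: with conditionally independent policies the joint action distribution factorizes and the mutual information is exactly zero, whereas conditioning both policies on a shared latent z correlates the actions and makes the mutual information strictly positive, even though execution remains decentralized.

```python
import numpy as np

def mi(joint):
    """I(a1; a2) = sum p(a1,a2) log( p(a1,a2) / (p(a1) p(a2)) )."""
    p1, p2 = joint.sum(1), joint.sum(0)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / np.outer(p1, p2)[nz])).sum())

# Without a latent variable: pi1(.|s) and pi2(.|s) are conditionally
# independent given the state, so the joint factorizes and I = 0.
pi1, pi2 = np.array([0.7, 0.3]), np.array([0.4, 0.6])
print(mi(np.outer(pi1, pi2)))             # 0.0

# With a shared latent z ~ p(z): each agent's policy depends on (s, z),
# which correlates the actions while keeping execution decentralized.
p_z = np.array([0.5, 0.5])
pi1_z = np.array([[0.9, 0.1],             # pi1(.|s, z=0)
                  [0.1, 0.9]])            # pi1(.|s, z=1)
pi2_z = np.array([[0.8, 0.2],
                  [0.2, 0.8]])
joint = sum(p_z[z] * np.outer(pi1_z[z], pi2_z[z]) for z in range(2))
print(mi(joint))                          # > 0 (about 0.12 here)
```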
This paper proposes a Maximum Mutual Information framework for cooperative MARL. Following the insight that the mutual information of agents' policies is an indicator of coordination, the paper proposes VM3-AC, an MA-AC algorithm that optimizes the long-term reward as well as a variational lower bound of the mutual information in the CTDE paradigm. Experimental results show the superiority of the proposed algorithm in comparison with a few benchmark approaches.
SP:dc4dbc42defdc5f34bdfb2288fb33986ba348f8c
The authors propose to include the mutual information between agents' simultaneous actions in the objective to encourage coordinated behaviour. To induce positive mutual information, the authors relax the assumption that the joint policy decomposes as the product of each agent's policy, independent of the others given the state; they do so by introducing a latent variable that correlates the agents' behaviours. Since the mutual information is difficult to compute, the authors propose to maximise a parametric lower bound. The algorithm is theoretically motivated as a policy iteration variant in its exact tabular form, but experiments are performed with neural network approximations on several environments. Numerical results show improvements over previous similar techniques.
SP:dc4dbc42defdc5f34bdfb2288fb33986ba348f8c
Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies
1 INTRODUCTION. Recurrent neural networks (RNNs) have achieved tremendous success in a variety of tasks involving sequential (time series) inputs and outputs, ranging from speech recognition to computer vision and natural language processing, among others. However, it is well known that training RNNs to process inputs over long time scales (input sequences) is notoriously hard on account of the so-called exploding and vanishing gradient problem (EVGP) (Pascanu et al., 2013), which stems from the fact that the well-established BPTT algorithm for training RNNs requires computing products of gradients (Jacobians) of the underlying hidden states over very long time scales. Consequently, the overall gradient can grow (to infinity) or decay (to zero) exponentially fast with respect to the number of recurrent interactions. A variety of approaches have been suggested to mitigate the exploding and vanishing gradient problem. These include adding gating mechanisms to the RNN in order to control the flow of information in the network, leading to architectures such as long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014), which can overcome the vanishing gradient problem on account of the underlying additive structure. However, the gradients might still explode, and learning very long-term dependencies remains a challenge (Li et al., 2018). Another popular approach for handling the EVGP is to constrain the structure of the underlying recurrent weight matrices by requiring them to be orthogonal (unitary), leading to the so-called orthogonal RNNs (Henaff et al., 2016; Arjovsky et al., 2016; Wisdom et al., 2016; Kerg et al., 2019) and references therein. By construction, the resulting Jacobians have eigen- and singular-spectra with unit norm, alleviating the EVGP. However, as pointed out by Kerg et al. (2019), imposing such constraints on the recurrent matrices may lead to a significant loss of expressivity of the RNN, resulting in inadequate performance on realistic tasks. In this article, we adopt a different approach, based on the observation that coupled networks of controlled non-linear forced and damped oscillators, which arise in many physical, engineering and biological systems, such as networks of biological neurons, do seem to ensure expressive representations while constraining the dynamics of state variables and their gradients. This motivates us to propose a novel architecture for RNNs, based on time-discretizations of second-order systems of non-linear ordinary differential equations (ODEs) (1) that model coupled oscillators. Under verifiable hypotheses, we are able to rigorously prove precise bounds on the hidden states of these RNNs and their gradients, enabling a possible solution of the exploding and vanishing gradient problem, while demonstrating through benchmark numerical experiments that the resulting system still retains sufficient expressivity, i.e., the ability to process complex inputs, with competitive performance with respect to the state of the art on a variety of sequential learning tasks. 2 THE PROPOSED RNN. Our proposed RNN is based on the following second-order system of ODEs,

$$\mathbf{y}'' = \sigma\big(\mathbf{W}\mathbf{y} + \mathcal{W}\mathbf{y}' + \mathbf{V}\mathbf{u} + \mathbf{b}\big) - \gamma\mathbf{y} - \epsilon\mathbf{y}'. \qquad (1)$$
Here, $t \in [0, 1]$ is the (continuous) time variable, $\mathbf{u} = \mathbf{u}(t) \in \mathbb{R}^d$ is the time-dependent input signal, $\mathbf{y} = \mathbf{y}(t) \in \mathbb{R}^m$ is the hidden state of the RNN, $\mathbf{W}, \mathcal{W} \in \mathbb{R}^{m \times m}$ and $\mathbf{V} \in \mathbb{R}^{m \times d}$ are weight matrices, $\mathbf{b} \in \mathbb{R}^m$ is the bias vector, and $0 < \gamma, \epsilon$ are parameters representing the oscillation frequency and the amount of damping (friction) in the system, respectively. $\sigma : \mathbb{R} \mapsto \mathbb{R}$ is the activation function, set to $\sigma(u) = \tanh(u)$ here. By introducing the so-called velocity variable $\mathbf{z} = \mathbf{y}'(t) \in \mathbb{R}^m$, we rewrite (1) as the first-order system:

$$\mathbf{y}' = \mathbf{z}, \qquad \mathbf{z}' = \sigma\big(\mathbf{W}\mathbf{y} + \mathcal{W}\mathbf{z} + \mathbf{V}\mathbf{u} + \mathbf{b}\big) - \gamma\mathbf{y} - \epsilon\mathbf{z}. \qquad (2)$$

We fix a timestep $0 < \Delta t < 1$ and define our proposed RNN hidden states at time $t_n = n\Delta t \in [0, 1]$ (while omitting the affine output state) as the following IMEX (implicit-explicit) discretization of the first-order system (2):

$$\mathbf{y}_n = \mathbf{y}_{n-1} + \Delta t\, \mathbf{z}_n, \qquad \mathbf{z}_n = \mathbf{z}_{n-1} + \Delta t\, \sigma\big(\mathbf{W}\mathbf{y}_{n-1} + \mathcal{W}\mathbf{z}_{n-1} + \mathbf{V}\mathbf{u}_n + \mathbf{b}\big) - \Delta t\, \gamma\mathbf{y}_{n-1} - \Delta t\, \epsilon\mathbf{z}_{\bar{n}}, \qquad (3)$$

with either $\bar{n} = n$ or $\bar{n} = n - 1$. Note that the only difference between the two versions of the RNN (3) lies in the implicit ($\bar{n} = n$) or explicit ($\bar{n} = n - 1$) treatment of the damping term $-\epsilon\mathbf{z}$ in (2), whereas both versions retain the implicit treatment of the first equation in (2). Motivation and background. To see that the underlying ODE (2) models a coupled network of controlled forced and damped nonlinear oscillators, we start with the single-neuron (scalar) case by setting $d = m = 1$ in (1) and assume an identity activation function $\sigma(x) = x$. Setting $\mathbf{W} = \mathcal{W} = \mathbf{V} = \mathbf{b} = \epsilon = 0$ leads to the simple ODE $y'' + \gamma y = 0$, which exactly models simple harmonic motion with frequency $\gamma$, for instance that of a mass attached to a spring (Guckenheimer & Holmes, 1990). Letting $\epsilon > 0$ in (1) adds damping or friction to the system (Guckenheimer & Holmes, 1990). Then, by introducing a non-zero $\mathbf{V}$ in (1), we drive the system with a driving force proportional to the input signal $\mathbf{u}(t)$. The parameters $\mathbf{V}, \mathbf{b}$ modulate the effect of the driving force, $\mathbf{W}$ controls the frequency of oscillations, and $\mathcal{W}$ the amount of damping in the system. Finally, the tanh activation mediates a non-linear response in the oscillator. In the coupled network (2) with $m > 1$, each neuron updates its hidden state based on the input signal as well as information from other neurons. The diagonal entries of $\mathbf{W}$ (and the scalar hyperparameter $\gamma$) control the frequency, whereas the diagonal entries of $\mathcal{W}$ (and the hyperparameter $\epsilon$) determine the amount of damping for each neuron, respectively, and the non-diagonal entries of these matrices modulate interactions between neurons. Hence, given this behavior of the underlying ODE (2), we term the RNN (3) a coupled oscillatory Recurrent Neural Network (coRNN). The dynamics of the ODE (2) (and the RNN (3)) for a single neuron are relatively straightforward. As we illustrate in Fig. 6 of supplementary material SM§C, input signals drive the generation of (superpositions of) oscillatory wave-forms, whose amplitude and (multiple) frequencies are controlled by the tunable parameters $\mathbf{W}, \mathcal{W}, \mathbf{V}, \mathbf{b}$. Adding a tanh activation does not change these dynamics much. This is in contrast to truncating tanh to leading non-linear order by setting $\sigma(x) = x - x^3/3$, which yields a Duffing-type oscillator that is characterized by chaotic behavior (Guckenheimer & Holmes, 1990). Adding interactions between neurons leads to further accentuation of this generation of superposed wave forms (see Fig.
6 in SM§C ) and even with very simple network topologies , one sees the emergence of non-trivial non-oscillatory hidden states from oscillatory inputs . In practice , a network of a large number of neurons is used and can lead to extremely rich global dynamics . Hence , we argue that the ability of a network of ( forced , driven ) oscillators to access a very rich set of output states may lead to high expressivity of the system , allowing it to approximate outputs from complicated sequential inputs . Oscillator networks are ubiquitous in nature and in engineering systems ( Guckenheimer & Holmes , 1990 ; Strogatz , 2015 ) with canonical examples being pendulums ( classical mechanics ) , business cycles ( economics ) , heartbeat ( biology ) for single oscillators and electrical circuits for networks of oscillators . Our motivating examples arise in neurobiology , where individual biological neurons can be viewed as oscillators with periodic spiking and firing of the action potential . Moreover , functional circuits of the brain , such as cortical columns and prefrontal-striatal-hippocampal circuits , are being increasingly interpreted by networks of oscillatory neurons , see Stiefel & Ermentrout ( 2016 ) for an overview . Following well-established paths in machine learning , such as for convolutional neural networks ( LeCun et al. , 2015 ) , our focus here is to abstract the essence of functional brain circuits being networks of oscillators and design an RNN based on much simpler mechanistic systems , such as those modeled by ( 2 ) , while ignoring the complicated biological details of neural function . Related work . There is an increasing trend of basing RNN architectures on ODEs and dynamical systems . These approaches can roughly be classified into two branches , namely RNNs based on discretized ODEs and continuous-time RNNs . Examples of continuous-time approaches include neural ODEs ( Chen et al. , 2018 ) with ODE-RNNs ( Rubanova et al. , 2019 ) as its recurrent extension as well as E ( 2017 ) and references therein , to name just a few . We focus , however , in this article on an ODE-inspired discrete-time RNN , as the proposed coRNN is derived from a discretization of the ODE ( 1 ) . A good example for a discrete-time ODE-based RNNs is the so-called anti-symmetric RNN of Chang et al . ( 2019 ) , where the RNN architecture is based on a stable ODE resulting from a skew-symmetric hidden weight matrix , thus constraining the stable ( gradient ) dynamics of the network . This approach has much in common with previously mentioned unitary/orthogonal/nonnormal RNNs in constraining the structure of the hidden-to-hidden layer weight matrices . However , adding such strong constraints might reduce expressivity of the resulting RNN and might lead to inadequate performance on complex tasks . In contrast to these approaches , our proposed coRNN does not explicitly constrain the weight matrices but relies on the dynamics of the underlying ODE ( and the IMEX discretization ( 3 ) ) , to provide gradient stability . Moreover , no gating mechanisms as in LSTMs/GRUs are used in the current version of coRNN . There is also an increasing interest in designing hybrid methods , which use a discretization of an ODE ( in particular a Hamiltonian system ) in order to learn the continuous representation of the data , see for instance Greydanus et al . ( 2019 ) ; Chen et al . ( 2020 ) . Overall , our approach here differs from these papers in our use of networks of oscillators to build the RNN . 
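Before turning to the analysis, a minimal NumPy sketch of the coRNN recurrence (3) may help fix ideas. This is an illustrative forward pass of the explicit-damping variant (n̄ = n − 1) with random, untrained weights chosen by me; it is not the authors' implementation and omits the affine output layer.

```python
import numpy as np

def cornn_forward(u_seq, W, W_hat, V, b, dt=0.01, gamma=1.0, eps=1.0):
    """Run the coRNN recurrence (3) with explicit damping (n_bar = n-1) over an
    input sequence u_seq of shape (T, d). Returns the hidden states (T, m)."""
    m = W.shape[0]
    y = np.zeros(m)
    z = np.zeros(m)
    ys = []
    for u in u_seq:
        # z update uses the previous y and z only (explicit damping).
        z = z + dt * (np.tanh(W @ y + W_hat @ z + V @ u + b)
                      - gamma * y - eps * z)
        # y update is implicit in z: it uses the freshly computed z.
        y = y + dt * z
        ys.append(y.copy())
    return np.stack(ys)

rng = np.random.default_rng(0)
m, d, T = 16, 4, 100
states = cornn_forward(rng.normal(size=(T, d)),
                       W=rng.normal(size=(m, m)) / np.sqrt(m),
                       W_hat=rng.normal(size=(m, m)) / np.sqrt(m),
                       V=rng.normal(size=(m, d)) / np.sqrt(d),
                       b=np.zeros(m))
print(states.shape)                         # (100, 16)
```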
3 RIGOROUS ANALYSIS OF THE PROPOSED RNN. An attractive feature of the underlying ODE system (2) lies in the fact that the resulting hidden states (and their gradients) are bounded (see SM§D for precise statements and proofs). Hence, one can expect that a suitable discretization of the ODE (2) that preserves these bounds will not have exploding gradients. We claim that one such structure-preserving discretization is given by the IMEX discretization that results in the RNN (3), and we proceed to derive bounds on this RNN below. Following standard practice we set $\mathbf{y}(0) = \mathbf{z}(0) = 0$ and, purely for the simplicity of exposition, we set the control parameters $\epsilon = \gamma = 1$ and $\bar{n} = n$ in (3), leading to

$$\mathbf{y}_n = \mathbf{y}_{n-1} + \Delta t\, \mathbf{z}_n, \qquad \mathbf{z}_n = \frac{\mathbf{z}_{n-1}}{1 + \Delta t} + \frac{\Delta t}{1 + \Delta t}\sigma(\mathbf{A}_{n-1}) - \frac{\Delta t}{1 + \Delta t}\mathbf{y}_{n-1}, \qquad \mathbf{A}_{n-1} := \mathbf{W}\mathbf{y}_{n-1} + \mathcal{W}\mathbf{z}_{n-1} + \mathbf{V}\mathbf{u}_n + \mathbf{b}. \qquad (4)$$

Analogous results and proofs for the case where $\bar{n} = n - 1$ and for general values of $\epsilon, \gamma$ are provided in SM§F. Bounds on the hidden states. As with the underlying ODE (2), the hidden states of the RNN (3) are bounded, i.e., Proposition 3.1 Let $\mathbf{y}_n, \mathbf{z}_n$ be the hidden states of the RNN (4) for $1 \le n \le N$; then the hidden states satisfy the following (energy) bounds:

$$\mathbf{y}_n^\top \mathbf{y}_n + \mathbf{z}_n^\top \mathbf{z}_n \le n m \Delta t = m t_n \le m. \qquad (5)$$

The proof of the energy bound (5) is provided in SM§E.1, and a straightforward variant of the proof (see SM§E.2) yields an estimate on the sensitivity of the hidden states to changing inputs. As with the underlying ODE (see SM§D), this bound rules out chaotic behavior of hidden states. Bounds on hidden state gradients. We train the RNN (3) to minimize the loss function

$$\mathcal{E} := \frac{1}{N}\sum_{n=1}^N \mathcal{E}_n, \qquad \mathcal{E}_n = \frac{1}{2}\|\mathbf{y}_n - \bar{\mathbf{y}}_n\|_2^2, \qquad (6)$$

with $\bar{\mathbf{y}}$ being the underlying ground truth (training data). During training, we compute gradients of the loss function (6) with respect to the weights and biases $\Theta = [\mathbf{W}, \mathcal{W}, \mathbf{V}, \mathbf{b}]$, i.e.,

$$\frac{\partial \mathcal{E}}{\partial \theta} = \frac{1}{N}\sum_{n=1}^N \frac{\partial \mathcal{E}_n}{\partial \theta}, \qquad \forall\, \theta \in \Theta. \qquad (7)$$

Proposition 3.2 Let $\mathbf{y}_n, \mathbf{z}_n$ be the hidden states generated by the RNN (4). We assume that the time step $\Delta t \ll 1$ can be chosen such that

$$\max\left\{\frac{\Delta t\,(1 + \|\mathbf{W}\|_\infty)}{1 + \Delta t}, \; \frac{\Delta t\,\|\mathcal{W}\|_\infty}{1 + \Delta t}\right\} = \eta \le \Delta t^r, \qquad \frac{1}{2} \le r \le 1. \qquad (8)$$

Denoting $\delta = \frac{1}{1 + \Delta t}$, the gradient of the loss function $\mathcal{E}$ (6) with respect to any parameter $\theta \in \Theta$ is bounded as

$$\left|\frac{\partial \mathcal{E}}{\partial \theta}\right| \le \frac{3}{2}\left(m + \bar{Y}\sqrt{m}\right), \qquad (9)$$

with $\bar{Y} = \max_{1 \le n \le N} \|\bar{\mathbf{y}}_n\|_\infty$ a bound on the underlying training data. Sketch of the proof. Denoting $\mathbf{X}_n = [\mathbf{y}_n, \mathbf{z}_n]$, we can apply the chain rule repeatedly (for instance as in Pascanu et al. (2013)) to obtain

$$\frac{\partial \mathcal{E}_n}{\partial \theta} = \sum_{1 \le k \le n} \underbrace{\frac{\partial \mathcal{E}_n}{\partial \mathbf{X}_n}\,\frac{\partial \mathbf{X}_n}{\partial \mathbf{X}_k}\,\frac{\partial^+ \mathbf{X}_k}{\partial \theta}}_{\partial \mathcal{E}_n^{(k)} / \partial \theta}. \qquad (10)$$

Here, the notation $\frac{\partial^+ \mathbf{X}_k}{\partial \theta}$ refers to taking the partial derivative of $\mathbf{X}_k$ with respect to the parameter $\theta$ while keeping the other arguments constant. This quantity can be readily calculated from the structure of the RNN (4) and is presented in the detailed proof provided in SM§E.3. From (6), we can directly compute that $\frac{\partial \mathcal{E}_n}{\partial \mathbf{X}_n} = [\mathbf{y}_n - \bar{\mathbf{y}}_n, 0]$. Repeated application of the chain rule and a direct calculation with (4) yields

$$\frac{\partial \mathbf{X}_n}{\partial \mathbf{X}_k} = \prod_{k < i \le n} \frac{\partial \mathbf{X}_i}{\partial \mathbf{X}_{i-1}}, \qquad \frac{\partial \mathbf{X}_i}{\partial \mathbf{X}_{i-1}} = \begin{bmatrix} \mathbf{I} + \Delta t\, \mathbf{B}_{i-1} & \Delta t\, \mathbf{C}_{i-1} \\ \mathbf{B}_{i-1} & \mathbf{C}_{i-1} \end{bmatrix}, \qquad (11)$$

where $\mathbf{I}$ is the identity matrix and

$$\mathbf{B}_{i-1} = \delta\Delta t\left(\mathrm{diag}(\sigma'(\mathbf{A}_{i-1}))\mathbf{W} - \mathbf{I}\right), \qquad \mathbf{C}_{i-1} = \delta\left(\mathbf{I} + \Delta t\, \mathrm{diag}(\sigma'(\mathbf{A}_{i-1}))\mathcal{W}\right). \qquad (12)$$

It is straightforward to calculate, using the assumption (8), that $\|\mathbf{B}_{i-1}\|_\infty < \eta$ and $\|\mathbf{C}_{i-1}\|_\infty \le \eta + \delta$. Using the definitions of matrix norms and (8), we obtain:

$$\left\|\frac{\partial \mathbf{X}_i}{\partial \mathbf{X}_{i-1}}\right\|_\infty \le \max\left(1 + \Delta t\left(\|\mathbf{B}_{i-1}\|_\infty + \|\mathbf{C}_{i-1}\|_\infty\right), \; \|\mathbf{B}_{i-1}\|_\infty + \|\mathbf{C}_{i-1}\|_\infty\right) \le \max\left(1 + \Delta t(\delta + 2\eta), \; \delta + 2\eta\right) \le 1 + 3\Delta t^r. \qquad (13)$$
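As an aside, the energy bound of Proposition 3.1 is easy to check numerically. The NumPy sketch below (my own illustrative check with arbitrary random weights and inputs, not the authors' code) runs the recursion (4) and asserts the bound (5) at every step; the proof sketch of Proposition 3.2 is continued after the snippet.

```python
import numpy as np

def cornn_implicit_step(y, z, u, W, W_hat, V, b, dt):
    """One step of eq. (4) (eps = gamma = 1, implicit damping)."""
    delta = 1.0 / (1.0 + dt)
    A = W @ y + W_hat @ z + V @ u + b
    z_new = delta * z + delta * dt * np.tanh(A) - delta * dt * y
    y_new = y + dt * z_new
    return y_new, z_new

rng = np.random.default_rng(1)
m, d, dt, N = 16, 4, 0.01, 100
W, W_hat = rng.normal(size=(m, m)), rng.normal(size=(m, m))
V, b = rng.normal(size=(m, d)), rng.normal(size=m)

y, z = np.zeros(m), np.zeros(m)
for n in range(1, N + 1):
    y, z = cornn_implicit_step(y, z, rng.normal(size=d), W, W_hat, V, b, dt)
    energy = y @ y + z @ z
    assert energy <= n * m * dt + 1e-9, (n, energy)   # energy bound (5)
print("energy bound (5) holds for", N, "steps")
```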
Therefore, using (11), we have
$$\left\|\frac{\partial X_n}{\partial X_k}\right\|_\infty \le \prod_{k < i \le n}\left\|\frac{\partial X_i}{\partial X_{i-1}}\right\|_\infty \le \left(1 + 3\Delta t^{r}\right)^{n-k} \approx 1 + 3(n-k)\Delta t^{r}. \quad (14)$$
Note that we have used an expansion around 1 and neglected terms of $\mathcal{O}(\Delta t^{2r})$ as $\Delta t \ll 1$. We remark that the bound (13) is the crux of our argument about gradient control, as we see from the structure of the RNN that the recurrent matrices have close to unit norm. The detailed proof is presented in SM§E.3. As the entire gradient of the loss function (6), with respect to the weights and biases of the network, is bounded above in (9), the exploding gradient problem is mitigated for this RNN.
On the vanishing gradient problem. The vanishing gradient problem (Pascanu et al., 2013) arises if $\big|\frac{\partial E_n^{(k)}}{\partial \theta}\big|$, defined in (10), $\to 0$ exponentially fast in $k$, for $k \ll n$ (long-term dependencies). In that case, the RNN does not have long-term memory, as the contribution of the $k$-th hidden state to the error at time step $t_n$ is infinitesimally small. We already see from (14) that $\big\|\frac{\partial X_n}{\partial X_k}\big\|_\infty \approx 1$ (independently of $k$). Thus, we should not expect the products in (10) to decay fast. In fact, we will provide a much more precise characterization of this gradient. To this end, we introduce the following order notation:
$$\beta = \mathcal{O}(\alpha) \text{ for } \alpha, \beta \in \mathbb{R}_+ \text{ if there exist constants } \underline{C}, \overline{C} \text{ such that } \underline{C}\alpha \le \beta \le \overline{C}\alpha; \qquad M = \mathcal{O}(\alpha) \text{ for } M \in \mathbb{R}^{d_1 \times d_2},\ \alpha \in \mathbb{R}_+ \text{ if there exists a constant } C \text{ such that } \|M\| \le C\alpha. \quad (15)$$
For simplicity of notation, we will also set $\bar{y}_n = u_n \equiv 0$ for all $n$, $b = 0$ and $r = 1$ in (8), and we will only consider $\theta = W_{i,j}$ for some $1 \le i, j \le m$ in the following proposition.
Proposition 3.3 Let $y_n$ be the hidden states generated by the RNN (4). Under the assumption that $y_n^i = \mathcal{O}(\sqrt{t_n})$ for all $1 \le i \le m$, and (8), the gradient for long-term dependencies satisfies
$$\frac{\partial E_n^{(k)}}{\partial \theta} = \mathcal{O}\big(\hat{c}\,\delta\,\Delta t^{\frac{3}{2}}\big) + \mathcal{O}\big(\hat{c}\,\delta(1+\delta)\,\Delta t^{\frac{5}{2}}\big) + \mathcal{O}\big(\Delta t^{3}\big), \qquad \hat{c} = \operatorname{sech}^2\!\big(\sqrt{k\Delta t}\,(1+\Delta t)\big), \qquad k \ll n. \quad (16)$$
This precise bound (16) on the gradient shows that although the gradient can be small, i.e., $\mathcal{O}(\Delta t^{\frac{3}{2}})$, it is in fact independent of $k$, ensuring that long-term dependencies contribute to gradients at much later steps and mitigating the vanishing gradient problem. The detailed proof is presented in SM§E.5. Summarizing, we see that the RNN (3) indeed satisfies bounds similar to those of the underlying ODE (2), which resulted in upper bounds on the hidden states and their gradients. However, the lower bound on the gradient (16) is due to the specific choice of this discretization and does not appear to have a continuous analogue, making the specific choice of discretization of (2) crucial for mitigating the vanishing gradient problem.
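To make the recurrence (4) concrete, below is a minimal NumPy sketch of a coRNN rollout for the implicit variant ($\bar{n} = n$) with general $\epsilon, \gamma$; the weight initialization, the input signal, the function name `cornn_rollout`, and the final energy check are our own illustrative assumptions rather than the authors' reference implementation (which would be written against a deep learning framework and include the affine output layer). Here `W_hat` plays the role of the second hidden-to-hidden matrix $\mathcal{W}$.

```python
import numpy as np

def cornn_rollout(u, W, W_hat, V, b, dt=0.01, gamma=1.0, eps=1.0):
    """Roll out the coRNN recurrence with implicit treatment of the damping term.

    u: (N, d) input sequence; W, W_hat: (m, m) weight matrices; V: (m, d); b: (m,).
    Returns hidden states y, z, each of shape (N, m).
    """
    m = W.shape[0]
    y, z = np.zeros(m), np.zeros(m)
    ys, zs = [], []
    for n in range(u.shape[0]):
        A = W @ y + W_hat @ z + V @ u[n] + b
        # z_n = (z_{n-1} + dt*sigma(A_{n-1}) - dt*gamma*y_{n-1}) / (1 + dt*eps)
        z = (z + dt * np.tanh(A) - dt * gamma * y) / (1.0 + dt * eps)
        # y_n = y_{n-1} + dt*z_n  (z_n already updated: the IMEX step)
        y = y + dt * z
        ys.append(y.copy())
        zs.append(z.copy())
    return np.stack(ys), np.stack(zs)

# Tiny usage example with random weights (illustrative only).
rng = np.random.default_rng(0)
m, d, N = 8, 2, 100
W, W_hat = 0.5 * rng.standard_normal((m, m)), 0.5 * rng.standard_normal((m, m))
V, b = rng.standard_normal((m, d)), np.zeros(m)
u = np.sin(np.linspace(0.0, 4.0 * np.pi, N))[:, None] * np.ones((1, d))
y, z = cornn_rollout(u, W, W_hat, V, b, dt=0.01)
# Energy-style check in the spirit of bound (5): ||y_n||^2 + ||z_n||^2 stays O(1).
print(float((y[-1] ** 2 + z[-1] ** 2).sum()))
```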
This paper proposes a new continuous-time formulation for modeling recurrent units. The particular form of the recurrent unit is motivated by a system of coupled oscillators. These systems are well studied and widely used in the physical, engineering and biological sciences. Establishing this connection has the potential to motivate interesting future works. The performance of the proposed recurrent unit is state of the art.
SP:0a51115327ce08990aa3517ae1d20e88e80d6d65
Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies
First, this paper presents a rigorous analysis of the coRNN, deriving explicit bounds on its hidden states and gradients. The coRNN is then proved to mitigate the exploding and vanishing gradient problem, and this is also validated in a series of experiments. In addition, the performance of the coRNN is comparable to or better than that of state-of-the-art models. The paper provides a new idea for addressing the exploding and vanishing gradient problem, which greatly hinders the training of deeper neural networks. In my opinion, the coRNN model is meaningful for practical applications, especially as a basis for extensions to more complicated neural networks.
SP:0a51115327ce08990aa3517ae1d20e88e80d6d65
Shortest-Path Constrained Reinforcement Learning for Sparse Reward Tasks
1 INTRODUCTION. Recently, deep reinforcement learning (RL) has achieved a large number of breakthroughs in many domains including video games (Mnih et al., 2015; Vinyals et al., 2019) and board games (Silver et al., 2017). Nonetheless, a central challenge in reinforcement learning (RL) is sample efficiency (Kakade et al., 2003); it has been shown that RL algorithms require a large number of samples for successful learning in MDPs with large state and action spaces. Moreover, the success of an RL algorithm heavily hinges on the quality of the collected samples; the RL algorithm tends to fail if the collected trajectory does not contain enough evaluative feedback (e.g., sparse or delayed reward). To circumvent this challenge, planning-based methods utilize the environment's model to improve or create a policy instead of interacting with the environment. Recently, combining the planning method with an efficient path search algorithm, such as Monte-Carlo tree search (MCTS) (Norvig, 2002; Coulom, 2006), has demonstrated successful results (Guo et al., 2016; Vodopivec et al., 2017; Silver et al., 2017). However, such tree search methods require an accurate model of the MDP, and the complexity of planning may grow intractably large for complex domains. Model-based RL methods attempt to learn a model instead of assuming that the model is given, but learning an accurate model also requires a large number of samples, which is often even harder to achieve than solving the given task. Model-free RL methods can be learned solely from the environment reward, without the need of a (learned) model. However, both value-based and policy-based methods suffer from poor sample efficiency, especially in sparse-reward tasks. To tackle sparse reward problems, researchers have proposed to learn an intrinsic bonus function that measures the novelty of the states the agent visits (Schmidhuber, 1991; Oudeyer & Kaplan, 2009; Pathak et al., 2017; Savinov et al., 2018b; Choi et al., 2018; Burda et al., 2018). However, when such an intrinsic bonus is added to the reward, it often requires a careful balancing between the environment reward and the bonus, and scheduling of the bonus scale, in order to guarantee convergence to an optimal solution. To tackle the aforementioned challenge of sample efficiency in sparse reward tasks, we introduce a constrained-RL framework that improves the sample efficiency of any model-free RL algorithm in sparse-reward tasks, under mild assumptions on the MDP (see Appendix G). Of note, though our framework will be formulated for policy-based methods, our final form of the cost function (Eq. (10) in Section 4) is applicable to both policy-based and value-based methods. We propose a novel k-shortest-path (k-SP) constraint (Definition 7) that improves the sample efficiency of policy learning (see Figure 1). The k-SP constraint is applied to a trajectory rolled out by a policy; every sub-path of length k is required to be a shortest path under the π-distance metric, which we define in Section 3.1. We prove that applying our constraint preserves optimality for any MDP (Theorem 3), except for stochastic and multi-goal MDPs, which require additional assumptions. We relax the hard constraint into a soft cost formulation (Tessler et al., 2019), and use a reachability network (Savinov et al., 2018b) (RNet) to efficiently learn the cost function in an off-policy manner.
We summarize our contributions as follows: (1) We propose a novel constraint that can improve the sample efficiency of any model-free RL method in sparse reward tasks. (2) We present several theoretical results, including the proof that our proposed constraint preserves the optimal policy of the given MDP. (3) We present a numerical result in the tabular RL setting to precisely evaluate the effectiveness of the proposed method. (4) We propose a practical way to implement our proposed constraint, and demonstrate that it provides a significant improvement on two complex deep RL domains. (5) We demonstrate that our method significantly improves the sample efficiency of PPO, and outperforms existing novelty-seeking methods on two complex domains in the sparse reward setting.
2 PRELIMINARIES. Markov Decision Process (MDP). We model a task as an MDP tuple $M = (S, A, P, R, \rho, \gamma)$, where $S$ is a state set, $A$ is an action set, $P$ is a transition probability, $R$ is a reward function, $\rho$ is an initial state distribution, and $\gamma \in [0, 1)$ is a discount factor. For each state $s$, the value of a policy $\pi$ is denoted by $V^\pi(s) = \mathbb{E}_\pi\big[\sum_t \gamma^t r_t \mid s_0 = s\big]$. Then, the goal is to find the optimal policy $\pi^*$ that maximizes the expected return:
$$\pi^* = \arg\max_\pi\ \mathbb{E}^\pi_{s\sim\rho}\Big[\sum_t \gamma^t r_t \,\Big|\, s_0 = s\Big] = \arg\max_\pi\ \mathbb{E}_{s\sim\rho}\big[V^\pi(s)\big]. \quad (1)$$
Constrained MDP. A constrained Markov Decision Process (CMDP) is an MDP with extra constraints that restrict the domain of allowed policies (Altman, 1999). Specifically, a CMDP introduces a constraint function $C(\pi)$ that maps a policy to a scalar, and a threshold $\alpha \in \mathbb{R}$. The objective of a CMDP is to maximize the expected return $R(\tau) = \sum_t \gamma^t r_t$ of a trajectory $\tau = \{s_0, a_0, r_1, s_1, a_1, r_2, s_2, \ldots\}$ subject to a constraint: $\pi^* = \arg\max_\pi \mathbb{E}_{\tau\sim\pi}[R(\tau)]$, s.t. $C(\pi) \le \alpha$. A popular choice of constraint is based on the transition cost function (Tessler et al., 2019) $c(s, a, r, s') \in \mathbb{R}$, which assigns a scalar-valued cost to each transition. Then the constraint function for a policy $\pi$ is defined as the discounted sum of the cost under the policy: $C(\pi) = \mathbb{E}_{\tau\sim\pi}\big[\sum_t \gamma^t c(s_t, a_t, r_{t+1}, s_{t+1})\big]$. In this work, we propose a shortest-path constraint that provably preserves the optimal policy of the original unconstrained MDP, while reducing the trajectory space. We will use a cost function-based formulation to implement our constraint (see Sections 3 and 4).
3 FORMULATION: k-SHORTEST PATH CONSTRAINT We define the k-shortest-path (k-SP) constraint to remove redundant transitions (e.g., unnecessarily going back and forth), leading to faster policy learning. We show two important properties of our constraint: (1) the optimal policy is preserved, and (2) the policy search space is reduced. In this work, we limit our focus to MDPs satisfying $R(s) + \gamma V^*(s) > 0$ for all initial states $s$ with $\rho(s) > 0$ and all rewarding states that the optimal policy visits with non-zero probability, $s \in \{s \mid r(s) \neq 0,\ \pi^*(s) > 0\}$. We exploit this mild assumption to prove that our constraint preserves optimality. Intuitively, we exclude the case when the optimal strategy for the agent is at best choosing a "lesser of evils" (i.e., the largest but negative value), which often still means a failure.
We note that this is often caused by unnatural reward function design; in principle, we can avoid this by simply offsetting the reward function by a constant $-\big|\min_{s\in\{s \mid \pi^*(s) > 0\}} V^*(s)\big|$ for every transition, assuming the policy is proper.¹ Goal-conditioned RL (Nachum et al., 2018) and most of the well-known domains such as Atari (Bellemare et al., 2013), DeepMind Lab (Beattie et al., 2016), MiniGrid (Chevalier-Boisvert et al., 2018), etc., satisfy this assumption. Also, for general settings with stochastic MDPs and multiple goals, we require additional assumptions to prove the optimality guarantee (see Appendix G for details).
3.1 SHORTEST-PATH POLICY AND SHORTEST-PATH CONSTRAINT. Let $\tau$ be a path defined by a sequence of states: $\tau = \{s_0, \ldots, s_{\ell(\tau)}\}$, where $\ell(\tau)$ is the length of a path $\tau$ (i.e., $\ell(\tau) = |\tau| - 1$). We denote the set of all paths from $s$ to $s'$ by $\mathcal{T}_{s,s'}$. A path $\tau^*$ from $s$ to $s'$ is called a shortest path from $s$ to $s'$ if $\ell(\tau)$ is minimum, i.e., $\ell(\tau^*) = \min_{\tau\in\mathcal{T}_{s,s'}} \ell(\tau)$. Now we will define similar concepts (length, shortest path, etc.) with respect to a policy. Intuitively, a policy that rolls out shortest paths (up to some stochasticity) to a goal state or between any state pairs should be a counterpart. We consider the set of all admissible paths from $s$ to $s'$ under a policy $\pi$:
Definition 1 (Path set). $\mathcal{T}^\pi_{s,s'} = \{\tau \mid s_0 = s,\ s_{\ell(\tau)} = s',\ p_\pi(\tau) > 0,\ \{s_t\}_{t<\ell(\tau)} \neq s'\}$.
That is, $\mathcal{T}^\pi_{s,s'}$ is the set of all paths that policy $\pi$ may roll out from $s$, terminated once $s'$ is visited. If the MDP is a single-goal task, i.e., there exists a unique (rewarding) goal state $s_g \in S$ such that $s_g$ is a terminal state, and $R(s) > 0$ if and only if $s = s_g$, any shortest path from an initial state to the goal state is the optimal path with the highest return $R(\tau)$, and a policy that rolls out a shortest path is therefore optimal (see Lemma 4).² This is because all states except for $s_g$ are non-rewarding states, but in general MDPs this is not necessarily true. However, this motivates us to limit the domain of the shortest path to among non-rewarding states. We define non-rewarding paths from $s$ to $s'$ as follows:
Definition 2 (Non-rewarding path set). $\mathcal{T}^\pi_{s,s',\mathrm{nr}} = \{\tau \mid \tau \in \mathcal{T}^\pi_{s,s'},\ \{r_t\}_{t<\ell(\tau)} = 0\}$.
In words, $\mathcal{T}^\pi_{s,s',\mathrm{nr}}$ is the set of all non-rewarding paths from $s$ to $s'$ rolled out by policy $\pi$ (i.e., $\tau \in \mathcal{T}^\pi_{s,s'}$) without any associated reward except at the last step (i.e., $\{r_t\}_{t<|\tau|} = 0$). Now we are ready to define a notion of length with respect to a policy, and a shortest-path policy:
Definition 3 (π-distance from $s$ to $s'$). $D^\pi_{\mathrm{nr}}(s,s') = \log_\gamma\Big(\mathbb{E}_{\tau\sim\pi:\,\tau\in\mathcal{T}^\pi_{s,s',\mathrm{nr}}}\big[\gamma^{\ell(\tau)}\big]\Big)$.
Definition 4 (Shortest-path distance from $s$ to $s'$). $D_{\mathrm{nr}}(s,s') = \min_\pi D^\pi_{\mathrm{nr}}(s,s')$.
We define the π-distance to be the log-mean-exponential of the length $\ell(\tau)$ of non-rewarding paths $\tau \in \mathcal{T}^\pi_{s,s',\mathrm{nr}}$. When there exists no admissible path from $s$ to $s'$ under policy $\pi$, the path length is defined to be $\infty$: $D^\pi_{\mathrm{nr}}(s,s') = \infty$ if $\mathcal{T}^\pi_{s,s',\mathrm{nr}} = \emptyset$. We note that when both the MDP and the policy are deterministic, $D^\pi_{\mathrm{nr}}(s,s')$ recovers the natural definition of path length, $D^\pi_{\mathrm{nr}}(s,s') = \ell(\tau)$. We call a policy a shortest-path policy from $s$ to $s'$ if it rolls out a path with the smallest π-distance:
Definition 5 (Shortest-path policy from $s$ to $s'$). $\pi \in \Pi^{\mathrm{SP}}_{s\to s'} = \{\pi \in \Pi \mid D^\pi_{\mathrm{nr}}(s,s') = D_{\mathrm{nr}}(s,s')\}$.
Finally, we will define the shortest-path (SP) constraint. Let $S_{IR} = \{s \mid R(s) > 0 \text{ or } \rho(s) > 0\}$ be the union of all initial and rewarding states, and $\Phi^\pi = \{(s,s') \mid s, s' \in S_{IR},\ \rho(s) > 0,\ \mathcal{T}^\pi_{s,s',\mathrm{nr}} \neq \emptyset\}$ be the subset of $S_{IR}$ that the agent may roll out. Then, the SP constraint is applied to the non-rewarding sub-paths between states in $\Phi^\pi$: $\mathcal{T}^\pi_{\Phi,\mathrm{nr}} = \bigcup_{(s,s')\in\Phi^\pi} \mathcal{T}^\pi_{s,s',\mathrm{nr}}$. We note that these definitions are used in the proofs (Appendix G). Now, we define the shortest-path constraint as follows:
Definition 6 (Shortest-path constraint). A policy $\pi$ satisfies the shortest-path (SP) constraint if $\pi \in \Pi^{\mathrm{SP}}$, where $\Pi^{\mathrm{SP}} = \{\pi \mid \text{for all } s, s' \in \mathcal{T}^\pi_{\Phi,\mathrm{nr}}, \text{ it holds that } \pi \in \Pi^{\mathrm{SP}}_{s\to s'}\}$.
¹ It is an instance of potential-based reward shaping, which has an optimality guarantee (Ng et al., 1999). ² We refer the readers to Appendix F for a more detailed discussion and proofs for single-goal MDPs.
Intuitively, the SP constraint forces a policy to transition between initial and rewarding states via shortest paths. The SP constraint would be particularly effective in sparse-reward settings, where the distance between rewarding states is large. Given these definitions, we can show that an optimal policy indeed satisfies the SP constraint in a general MDP setting. In other words, the shortest-path constraint should not change optimality:
Theorem 1. For any MDP, an optimal policy $\pi^*$ satisfies the shortest-path constraint: $\pi^* \in \Pi^{\mathrm{SP}}$.
Proof. See Appendix G for the proof.
3.2 RELAXATION: k-SHORTEST-PATH CONSTRAINT Implementing the shortest-path constraint is, however, intractable since it requires a distance predictor $D_{\mathrm{nr}}(s,s')$. Note that the distance predictor addresses an optimization problem that might be as difficult as solving the given task. To circumvent this challenge, we consider a more tractable version, namely the k-shortest-path constraint, which reduces the shortest-path problem $D_{\mathrm{nr}}(s,s')$ to a binary decision problem — is the state $s'$ reachable from $s$ within $k$ steps? — also known as k-reachability (Savinov et al., 2018b). The k-shortest-path constraint is defined as follows:
Definition 7 (k-shortest-path constraint). A policy $\pi$ satisfies the k-shortest-path constraint if $\pi \in \Pi^{\mathrm{SP}_k}$, where
$$\Pi^{\mathrm{SP}_k} = \{\pi \mid \text{for all } s, s' \in \mathcal{T}^\pi_{\Phi,\mathrm{nr}} \text{ with } D^\pi_{\mathrm{nr}}(s,s') \le k, \text{ it holds that } \pi \in \Pi^{\mathrm{SP}}_{s\to s'}\}. \quad (2)$$
Note that the SP constraint (Definition 6) is relaxed by adding the condition $D^\pi_{\mathrm{nr}}(s,s') \le k$. In other words, the k-SP constraint is imposed only for $s,s'$-paths whose length is not greater than $k$. From Eq. (2), we can prove an important property and then Theorem 3 (optimality):
Lemma 2. For an MDP $M$, $\Pi^{\mathrm{SP}_m} \subset \Pi^{\mathrm{SP}_k}$ if $k < m$.
Proof. It is true since $\{(s,s') \mid D^\pi_{\mathrm{nr}}(s,s') \le k\} \subset \{(s,s') \mid D^\pi_{\mathrm{nr}}(s,s') \le m\}$ for $k < m$.
Theorem 3. For an MDP $M$ and any $k \in \mathbb{R}$, an optimal policy $\pi^*$ is a k-shortest-path policy.
Proof. Theorem 1 tells us $\pi^* \in \Pi^{\mathrm{SP}}$. Eq. (2) tells us $\Pi^{\mathrm{SP}} = \Pi^{\mathrm{SP}_\infty}$, and Lemma 2 tells us $\Pi^{\mathrm{SP}_\infty} \subset \Pi^{\mathrm{SP}_k}$. Collectively, we have $\pi^* \in \Pi^{\mathrm{SP}} = \Pi^{\mathrm{SP}_\infty} \subset \Pi^{\mathrm{SP}_k}$.
In conclusion, Theorem 3 states that the k-SP constraint does not change the optimality of the policy, and Lemma 2 states that a larger $k$ results in a larger reduction of the policy search space. Thus, it motivates us to apply the k-SP constraint in policy search to more efficiently find an optimal policy. For the numerical experiment measuring the reduction in the space of policy roll-outs, please refer to Section 6.4.
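As a concrete tabular illustration of the definitions above, the sketch below computes the shortest non-rewarding distance with a breadth-first search in a small deterministic MDP and checks the relaxed constraint in the form stated in the introduction (every non-rewarding sub-path of length $k$ must be a shortest path). The graph encoding, the function names, and the window-based check are our own simplifying assumptions; the paper itself replaces the exact distance with a learned reachability network (RNet) and a soft cost rather than this hard check.

```python
from collections import deque

def shortest_nr_distance(adj, reward, src, dst):
    """BFS shortest-path distance from src to dst, allowing only non-rewarding
    intermediate states (the endpoint dst itself may carry reward).
    adj maps a state to its list of successor states in a deterministic MDP."""
    if src == dst:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        s, d = queue.popleft()
        for s_next in adj.get(s, []):
            if s_next == dst:
                return d + 1
            if s_next not in seen and reward.get(s_next, 0) == 0:
                seen.add(s_next)
                queue.append((s_next, d + 1))
    return float("inf")

def satisfies_k_sp(path, adj, reward, k):
    """Return True if every non-rewarding sub-path of length k in the rollout
    is a shortest path, i.e. its endpoints cannot be connected in < k steps."""
    for t in range(len(path) - k):
        window = path[t:t + k + 1]
        if any(reward.get(s, 0) != 0 for s in window[1:-1]):
            continue  # the constraint only applies to non-rewarding sub-paths
        if shortest_nr_distance(adj, reward, window[0], window[-1]) < k:
            return False
    return True

# Toy chain 0 -> 1 -> 2 -> 3 where the agent can also step back; state 3 is the goal.
adj = {0: [1], 1: [2, 0], 2: [3, 1], 3: []}
reward = {3: 1.0}
print(satisfies_k_sp([0, 1, 2, 3], adj, reward, k=2))        # True: no redundant moves
print(satisfies_k_sp([0, 1, 0, 1, 2, 3], adj, reward, k=2))  # False: wasteful back-and-forth
```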
This paper proposes a new constraint for constrained MDPs, based on k-shortest paths, which helps improve sample efficiency for (model-free) RL algorithms in sparse-reward MDPs, while theoretically proving that the constraint retains the same optimal policy as the original MDP. Intuitively, in the sparse (positive) reward setting, the optimal policy should reach the positive-reward states via the shortest path (as it incurs the least discounting). The relaxed form of the constraint only requires checking whether the distance between two states is at most $k$, rather than whether it equals the optimal length (which the optimal policy still also satisfies). The constraint is then converted into its Lagrangian form as a cost term added to the reward (i.e., a type of reward shaping). Practically, this requires a k-reachability network (RNet), a binary discriminator judging whether one state is reachable from another within $k$ steps. This network is trained with a contrastive loss, similar to the prior work SPTM by Savinov et al., 2018. However, SPTM uses the RNet for graph-based planning (i.e., the local distance between states), while this paper uses the RNet as the cost/constraint on the policy objective function. Experiments were conducted in several maze navigation environments (from the 2D grid world MiniGrid to first-person 3D maze environments in DeepMind Lab), showing promising results compared to several baselines that use intrinsic curiosity. Several ablations were performed on the hyperparameters ($k$, and the tolerance $\delta t$ on the constraint), along with some qualitative examples of the learned policies compared to novelty reward shaping.
SP:0f0d7119df7043ccea815c96e8896114210290f0
Shortest-Path Constrained Reinforcement Learning for Sparse Reward Tasks
This paper proposes the k-Shortest-Path (k-SP) constraint, which restricts the agent's trajectory to avoid redundant exploration and thus improves sample efficiency in sparse-reward MDPs. Specifically, the k-SP constraint is applied to a trajectory rolled out by a policy, where every sub-path of length k is required to be a shortest path under the π-distance metric. Instead of a hard constraint, a cost-function-based formulation is proposed to implement the constraint. The method can improve sample efficiency in sparse reward tasks while also preserving the optimality of the given MDP. Numerical results in the paper also demonstrate the effectiveness of k-SP compared with existing methods on two domains, (1) MiniGrid and (2) DeepMind Lab, in sparse-reward settings.
SP:0f0d7119df7043ccea815c96e8896114210290f0
CPT: Efficient Deep Neural Network Training via Cyclic Precision
1 INTRODUCTION. The record-breaking performance of modern deep neural networks (DNNs) comes at a prohibitive training cost due to the required massive training data and parameters, limiting the development of the highly demanded DNN-powered intelligent solutions for numerous applications (Liu et al., 2018; Wu et al., 2018). As an illustration, training ResNet-50 involves $10^{18}$ FLOPs (floating-point operations) and can take 14 days on one state-of-the-art (SOTA) GPU (You et al., 2020b). Meanwhile, the large DNN training costs have raised increasing financial and environmental concerns. For example, it is estimated that training one DNN can cost more than \$10K US dollars and emit as much carbon as a car does over its lifetime. In parallel, recent DNN advances have fueled a tremendous need for intelligent edge devices, many of which require on-device in-situ learning to ensure accuracy under dynamic real-world environments, where there is a mismatch between the devices' limited resources and the prohibitive training costs (Wang et al., 2019b; Li et al., 2020; You et al., 2020a). To address the aforementioned challenges, extensive research efforts have been devoted to developing efficient DNN training techniques. Among them, low-precision training has gained significant attention as it can largely boost the training time/energy efficiency (Jacob et al., 2018; Wang et al., 2018a; Sun et al., 2019). For instance, GPUs can now perform mixed-precision DNN training with 16-bit IEEE half-precision floating-point formats (Micikevicius et al., 2017b). Despite their promise, existing low-precision works have not yet fully explored the opportunity of leveraging recent findings in understanding DNN training. In particular, existing works mostly fix the model precision during the whole training process, i.e., adopt a static quantization strategy, while recent works on DNN training optimization suggest dynamic hyper-parameters along DNNs' training trajectory. For example, (Li et al., 2019) shows that a large initial learning rate helps the model to memorize easier-to-fit and more generalizable patterns, which aligns with the common practice of starting from a large learning rate for exploration and annealing to a small one for final convergence; and (Smith, 2017; Loshchilov & Hutter, 2016) improve DNNs' classification accuracy by adopting cyclical learning rates. In this work, we advocate dynamic precision training, and make the following contributions: • We show that DNNs' precision seems to have a similar effect as the learning rate during DNN training, i.e., low precision with large quantization noise helps DNN training exploration while high precision with more accurate updates aids model convergence, and dynamic precision schedules help DNNs converge to better minima. This finding opens up a design knob for simultaneously improving the optimization and efficiency of DNN training. • We propose Cyclic Precision Training (CPT), which adopts a cyclic precision schedule along DNNs' training trajectory for pushing forward the achievable trade-offs between DNNs' accuracy and training efficiency. Furthermore, we show that the cyclic precision bounds can be automatically identified at the very early stage of training using a simple precision range test, which has a negligible computational overhead.
• Extensive experiments on five datasets and eleven models across a wide spectrum of applications (including classification and language modeling) validate the consistent effectiveness of the proposed CPT technique in boosting the training efficiency while leading to a comparable or even better accuracy. Furthermore, we provide loss surface visualization for better understanding CPT's effectiveness and discuss its connection with recent findings in understanding DNNs' training optimization. 2 RELATED WORKS. Quantized DNNs. DNN quantization (Courbariaux et al., 2015; 2016; Rastegari et al., 2016; Zhu et al., 2016; Li et al., 2016; Jacob et al., 2018; Mishra & Marr, 2017; Mishra et al., 2017; Park et al., 2017; Zhou et al., 2016) has been well explored based on the target accuracy-efficiency trade-offs. For example, (Jacob et al., 2018) proposes quantization-aware training to preserve the post-quantization accuracy; (Jung et al., 2019; Bhalgat et al., 2020; Esser et al., 2019; Park & Yoo, 2020) strive to improve low-precision DNNs' accuracy using learnable quantizers. Mixed-precision DNN quantization (Wang et al., 2019a; Xu et al., 2018; Elthakeb et al., 2020; Zhou et al., 2017) assigns different bitwidths for different layers/filters. While these works all adopt a static quantization strategy, i.e., the assigned precision is fixed post quantization, CPT adopts a dynamic precision schedule during the training process. Low-precision DNN training. Pioneering works (Wang et al., 2018a; Banner et al., 2018; Micikevicius et al., 2017a; Gupta et al., 2015; Sun et al., 2019) have shown that DNNs can be trained with reduced precision. For distributed learning, (Seide et al., 2014; De Sa et al., 2017; Wen et al., 2017; Bernstein et al., 2018) quantize the gradients to reduce the communication costs, where the training computations still adopt full precision; for centralized/on-device learning, the weights, activations, gradients, and errors involved in both the forward and backward computations all adopt reduced precision. Our CPT can be applied on top of these low-precision training techniques, all of which adopt a static precision during the whole training trajectory, to further boost the training efficiency. Dynamic-precision DNNs. There exist some dynamic precision works which aim to derive a quantized DNN for inference after the full-precision training. Specifically, (Zhuang et al., 2018) first trains a full-precision model to reach convergence and then gradually decreases the model precision to the target one for achieving better inference accuracy; (Khoram & Li, 2018) also starts from a full-precision model and then gradually learns the precision of each layer to derive a mixed-precision counterpart; (Yang & Jin, 2020) learns a fractional precision of each layer/filter based on the linear interpolation of two consecutive bitwidths, which doubles the computation and requires an extra fine-tuning step; and (Shen et al., 2020) proposes to adapt the precision of each layer during inference in an input-dependent manner to balance computational cost and accuracy. 3 THE PROPOSED CPT TECHNIQUE. In this section, we first introduce the hypothesis that motivates us to develop CPT using visualization examples in Sec. 3.1, and then present the CPT concept in Sec. 3.2 followed by the Precision Range Test (PRT) method in Sec. 3.3, where PRT aims to automate the precision schedule for CPT.
3.1 CPT : MOTIVATION . Hypothesis 1 : DNN ’ s precision has a similar effect as the learning rate . Existing works ( Grandvalet et al. , 1997 ; Neelakantan et al. , 2015 ) show that noise can help DNN training theoretically or empirically , motivating us to rethink the role of quantization in DNN training . We conjecture that low precision with large quantization noise helps DNN training exploration with an effect similar to a high learning rate , while high precision with more accurate updates aids model convergence , similar to a low learning rate . Validating Hypothesis 1 . Settings : To empirically justify our hypothesis , we train ResNet-38/74 on the CIFAR-100 dataset for 160 epochs following the basic training setting as in Sec . 4.1 . In particular , we divide the training of 160 epochs into three stages : [ 0-th , 80-th ] , [ 80-th,120-th ] , and [ 120-th , 160-th ] : for the first training stage of [ 0-th , 80-th ] , we adopt different learning rates and precisions for the weights and activations , while using full precision for the remaining two stages with a learning rate of 0.01 for the [ 80-th,120-th ] epochs and 0.001 for the [ 120-th , 160-th ] epochs in all the experiments in order to explore the relationship between the learning rate and precision in the first training stage . Results : As shown in Tab . 1 , we can observe that as the learning rate is sufficiently reduced for the first training stage , adopting a lower precision for this stage will lead to a higher accuracy than training with full precision . In particular , with the standard initial learning rate of 0.1 , full precision training achieves a 1.00 % /0.70 % higher accuracy than the 4-bit one on ResNet-38/74 , respectively ; whereas as the initial learning rate decreases , this accuracy gap gradually narrows and then reverses , e.g. , when the initial learning rate becomes 1e-2 , training with [ 0-th , 80-th ] of 4-bit achieves a 1.40 % /1.57 % higher accuracy than the full precision ones . Insights : This set of experiments show that ( 1 ) when the initial learning rate is low , training with lower initial precisions consistently leads to a better accuracy than training with full precision , indicating that lowering the precision introduces a similar effect of favoring exploration as that of a high learning rate ; and ( 2 ) although a low precision can alleviate the accuracy drop caused by a low learning rate , a high learning rate is in general necessary to maximize the accuracy . Hypothesis 2 : Dynamic precision helps DNN generalization . Recent findings in DNN training have motivated us to better utilize DNN precision to achieve a win-win in both DNN accuracy and efficiency . Specifically , it has been discussed that ( 1 ) DNNs learn to fit different patterns at different training stages , e.g. , ( Rahaman et al. , 2019 ; Xu et al. , 2019 ) reveal that DNN training first learns lowerfrequency components and then high-frequency features , with the former being more robust to perturbations and noises ; and ( 2 ) dynamic learning rate schedules help to improve the optimization in DNN training , e.g. , ( Li et al. , 2019 ) points out that a large initial learning rate helps the model to memorize easier-to-fit and more generalizable patterns while ( Smith , 2017 ; Loshchilov & Hutter , 2016 ) show that cyclical learning rate schedules improve DNNs ’ classification accuracy . 
These works inspire us to hypothesize that dynamic precision might help DNNs to reach a better optimum in the optimization landscape , especially considering the similar effect between the learning rate and precision validated in our Hypothesis 1 . Validating Hypothesis 2 . Our Hypothesis 2 has been consistently confirmed by various empirical observations . For example , a recent work ( Fu et al. , 2020 ) proposes to progressively increase the precision during the training process , and we follow their settings to validate our hypothesis . Settings : We train a ResNet-74 on CIFAR-100 using the same training setting as ( Wang et al. , 2018b ) except that we quantize the weights , activations , and gradients during training ; for the progressive precision case we uniformly increase the precision of weights and activations from 3-bit to 8-bit in the first 80 epochs and adopt static 8-bit gradients , while the static precision baseline uses 8-bit for all the weights/activations/gradients . Results : Fig . 1 shows that training with the progressive precision schedule achieves a slightly higher accuracy ( +0.3 % ) than its static counterpart , while the former can reduce training costs . Furthermore , we visualize the loss landscape ( following the method in ( Li et al. , 2018 ) ) in Fig . 2 ( b ) : interestingly the progressive precision schedule helps to converge to a better local minimum with wider contours , indicating a lower generalization error ( Li et al. , 2018 ) over the static 8-bit baseline in Fig . 2 ( a ) . The progressive precision schedule in ( Fu et al. , 2020 ) relies on manual hyper-parameter tuning . As such , a natural following question would be : what kind of dynamic schedules would be effective while being simple to implement for different tasks/models ? In this work , we show that a simple cyclic schedule consistently benefits the training convergence while boosting the training efficiency . 3.2 CPT : THE KEY CONCEPT . The key concept of CPT draws inspiration from ( Li et al. , 2019 ) which demonstrates that a large initial learning rate helps the model to learn more generalizable patterns . We thus hypothesize that a lower precision that leads to a short-term poor accuracy might actually help the DNN exploration during training thanks to its associated larger quantization noise , while it is well known that a higher precision enables the learning of higher-complexity , fine-grained patterns that are critical to better convergence . Together , this combination could improve the achieved accuracy as it might better balance coarse-grained exploration and fine-grained optimization during DNN training , which leads to the idea of CPT . Specifically , as shown in Fig . 3 , CPT varies the precision cyclically between two bounds instead of fixing the precision during training , letting the models explore the optimization landscape with different granularities . While CPT can be implemented using different cyclic scheduling methods , here we present as an example an implementation of CPT in a cosine manner : $B^n_t = \big\lceil B^n_{\min} + \frac{1}{2}\,( B^n_{\max} - B^n_{\min} )\,\big( 1 - \cos\big( \frac{t \,\%\, T_n}{T_n}\,\pi \big) \big) \big\rfloor$ ( 1 ) where $B^n_{\min}$ and $B^n_{\max}$ are the lower and upper precision bound , respectively , in the n-th cycle of the precision schedule , $\lceil \cdot \rfloor$ and % denote the rounding operation and the remainder operation , respectively , and $B^n_t$ is the precision at the t-th global epoch which falls into the n-th cycle with a cycle length of $T_n$ .
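The schedule in Eq . ( 1 ) is only a few lines of code . The following is a minimal sketch of the cosine cyclic precision schedule ; the function name , the epoch-indexed interface , and the equal-length-cycle assumption are ours for illustration rather than the authors ' released implementation .

```python
import math

def cyclic_precision(epoch: int, total_epochs: int, num_cycles: int,
                     b_min: int, b_max: int) -> int:
    """Cosine cyclic precision schedule following Eq. (1): the bit-width rises
    from b_min to b_max within each cycle of length total_epochs // num_cycles."""
    cycle_len = total_epochs // num_cycles        # T_n, assumed equal for all cycles
    t = epoch % cycle_len                         # position inside the current cycle
    b = b_min + 0.5 * (b_max - b_min) * (1.0 - math.cos(math.pi * t / cycle_len))
    return int(round(b))                          # round to the nearest integer bit-width

# Example: a 160-epoch run with two cycles between 3 and 8 bits.
schedule = [cyclic_precision(e, 160, 2, 3, 8) for e in range(160)]
```

Each epoch , the weights and activations would then be fake-quantized to `cyclic_precision(epoch, ...)` bits before the forward and backward computations .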
Note that the cycle length Tn is equal to the total number of training epochs divided by the total number of cycles denoted as N , where N is a hyper-parameter of CPT . For example , if N = 2 , then a DNN training with CPT will experience two cycles of cyclic precision schedule during training . As shown in Sec . 4.3 , we find that the benefits of CPT are maintained when adopting different total number of cyclic precision schedule cycles during training , i.e. , CPT is not sensitive to N . A visualization example for the precision schedule can be found in Appendix A. Additionally , we find that CPT is generally effective when using different dynamic precision schedule patterns ( i.e. , not necessarily the cosine schedule in Eq . ( 1 ) . We implement CPT following Eq . ( 1 ) in this work and discuss the potential variants in Sec . 4.3 . We visualize the training curve of CPT on ResNet-74 with CIFAR-100 in Fig . 1 and find that it achieves a 0.91 % higher accuracy paired with a 36.7 % reduction in the required training BitOPs ( bit operations ) , as compared to its static fixed precision counterpart . In addition , Fig . 2 ( c ) visualizes the corresponding loss landscape , showing the effectiveness of CPT , i.e. , such a simple and automated precision schedule leads to a better convergence with lower sharpness . 3.3 CPT : PRECISION RANGE TEST The concept of CPT is simple enough to be plugged into any model or task to boost the training efficiency . One remaining question is how to determine the precision bounds , i.e. , Bimin and B i max in Eq . ( 1 ) , which we find can be automatically decided in the first cycle ( i.e. , Ti = T0 ) of the precision schedule using a simple PRT at a negligible computational cost . Specifically , PRT starts from the lowest possible precision , e.g. , 2-bit , and gradually increases the precision while monitoring the difference in the training accuracy magnitude averaged over several consecutive iterations ; once this training accuracy difference is larger than a preset threshold , indicating that the training can at least partially converge , PRT would claim that the lower bound is identified . While the upper bound can be sim- ilarly determined , there exists an alternative which suggests simply adopting the precision of CPT ’ s static precision counterpart . The remaining cycles use the same precision bounds . Fig . 4 visualizes the PRT for ResNet-152/MobileNetV2 trained on CIFAR-100 . We can see that the lower precision bound identified when the model experiences a notable training accuracy improvement for ResNet-152 is 3-bit while that for MobileNetV2 is 4-bit , aligning with the common observation that ResNet-152 is more robust to quantization than the more compact model MobileNetV2 .
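Read literally , the PRT boils down to a short search loop . Below is one hedged reading of the procedure ; the `train_one_iter` and `train_accuracy` helpers , the window size , and the threshold value are illustrative assumptions , not the authors ' exact settings .

```python
def precision_range_test(train_one_iter, train_accuracy,
                         start_bits=2, max_bits=8, window=50, threshold=0.01):
    """Identify CPT's lower precision bound: return the first bit-width at which
    the training accuracy, averaged over consecutive iterations, shows a notable
    improvement (i.e. the model can at least partially converge)."""
    for bits in range(start_bits, max_bits + 1):
        accs = []
        for _ in range(2 * window):
            train_one_iter(bits)              # one optimization step at this precision
            accs.append(train_accuracy())     # running training accuracy
        early = sum(accs[:window]) / window
        late = sum(accs[window:]) / window
        if late - early > threshold:          # accuracy clearly improving -> lower bound found
            return bits
    return max_bits                           # fallback if no tested precision converged
```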
The authors proposed an interesting low-precision training method using a dynamic precision schedule. Their proposed Cyclic Precision Training (CPT) cyclically varies the precision during the training and the boundary of precision values is determined by a precision range test (PRT). As shown in their empirical results, CPT largely reduces time/energy costs during training while maintaining comparable accuracy. While CPT does show a promising empirical performance, the motivation and reason behind the proposed method require further explanations.
SP:df423ffa99360482fdfa31ad2c7e2ecedfa8bf5a
CPT: Efficient Deep Neural Network Training via Cyclic Precision
1 INTRODUCTION . The record-breaking performance of modern deep neural networks ( DNNs ) comes at a prohibitive training cost due to the required massive training data and parameters , limiting the development of the highly demanded DNN-powered intelligent solutions for numerous applications ( Liu et al. , 2018 ; Wu et al. , 2018 ) . As an illustration , training ResNet-50 involves 1018 FLOPs ( floating-point operations ) and can take 14 days on one state-of-the-art ( SOTA ) GPU ( You et al. , 2020b ) . Meanwhile , the large DNN training costs have raised increasing financial and environmental concerns . For example , it is estimated that training one DNN can cost more than $ 10K US dollars and emit carbon as high as a car ’ s lifetime emissions . In parallel , recent DNN advances have fueled a tremendous need for intelligent edge devices , many of which require on-device in-situ learning to ensure the accuracy under dynamic real-world environments , where there is a mismatch between the devices ’ limited resources and the prohibitive training costs ( Wang et al. , 2019b ; Li et al. , 2020 ; You et al. , 2020a ) . To address the aforementioned challenges , extensive research efforts have been devoted to developing efficient DNN training techniques . Among them , low-precision training has gained significant attention as it can largely boost the training time/energy efficiency ( Jacob et al. , 2018 ; Wang et al. , 2018a ; Sun et al. , 2019 ) . For instance , GPUs can now perform mixed-precision DNN training with 16-bit IEEE Half-Precision floating-point formats ( Micikevicius et al. , 2017b ) . Despite their promise , existing low-precision works have not yet fully explored the opportunity of leveraging recent findings in understanding DNN training . In particular , existing works mostly fix the model precision during the whole training process , i.e. , adopt a static quantization strategy , while recent works in DNN training optimization suggest dynamic hyper-parameters along DNNs ’ training trajectory . For example , ( Li et al. , 2019 ) shows that a large initial learning rate helps the model to memorize easier-to-fit and more generalizable patterns , which aligns with the common practice to start from a large learning rate for exploration and anneal to a small one for final convergence ; and ( Smith , 2017 ; Loshchilov & Hutter , 2016 ) improve DNNs ’ classification accuracy by adopting cyclical learning rates . In this work , we advocate dynamic precision training , and make the following contributions : • We show that DNNs ’ precision seems to have a similar effect as the learning rate during DNN training , i.e. , low precision with large quantization noise helps DNN training exploration while high precision with more accurate updates aids model convergence , and dynamic precision schedules help DNNs converge to a better minima . This finding opens up a design knob for simultaneously improving the optimization and efficiency of DNN training . • We propose Cyclic Precision Training ( CPT ) which adopts a cyclic precision schedule along DNNs ’ training trajectory for pushing forward the achievable trade-offs between DNNs ’ accuracy and training efficiency . Furthermore , we show that the cyclic precision bounds can be automatically identified at the very early stage of training using a simple precision range test , which has a negligible computational overhead . 
The authors propose cyclic precision training (CPT), a method to train integer-quantized neural networks with high precision while saving bit operations. CPT alternates the numerical precision of the network during training between low (2-3 bits) and high (the final desired precision, e.g. 8 bits). A total of 32 cycles of low to high are used, and this is only applied to the weights and activations; the backward pass is always in high precision. CPT is able to achieve improved accuracy across a variety of models on CIFAR-10/100 as well as transformers on PTB and WikiText. On ImageNet, CPT achieves accuracy on par with regular quantized training but saves bit ops.
Interpreting and Boosting Dropout from a Game-Theoretic View
1 INTRODUCTION . Deep neural networks ( DNNs ) have exhibited significant success in various tasks , but the overfitting problem is still a considerable challenge for deep learning . Dropout is usually considered as an effective operation to alleviate the over-fitting problem of DNNs . Hinton et al . ( 2012 ) ; Srivastava et al . ( 2014 ) thought that dropout could encourage each unit in an intermediate-layer feature to model useful information without much dependence on other units . Konda et al . ( 2016 ) considered dropout as a specific method of data augmentation . Gal & Ghahramani ( 2016 ) proved that dropout was equivalent to the Bayesian approximation in a Gaussian process . Our research group led by Dr. Quanshi Zhang has proposed game-theoretic interactions , including interactions of different orders ( Zhang et al. , 2020 ) and multivariate interactions ( Zhang et al. , 2021b ) . As a basic metric , the interaction can be used to explain signal-processing behaviors in trained DNNs from different perspectives . For example , we have built up a tree structure to explain hierarchical interactions between words encoded in NLP models ( Zhang et al. , 2021a ) . We also prove a close relationship between the interaction and the adversarial robustness ( Ren et al. , 2021 ) and transferability ( Wang et al. , 2020 ) . Many previous methods of boosting adversarial transferability can be explained as the reduction of interactions , and the interaction can also explain the utility of the adversarial training ( Ren et al. , 2021 ) . As an extension of the system of game-theoretic interactions , in this paper , we aim to explain , model , and improve the utility of dropout from the following perspectives . First , we prove that the dropout operation suppresses interactions between input units encoded by DNNs . ( ∗Correspondence : This study is conducted under the supervision of Dr. Quanshi Zhang , zqs1022 @ sjtu.edu.cn . Quanshi Zhang is with the John Hopcroft Center and the MoE Key Lab of Artificial Intelligence , AI Institute , at the Shanghai Jiao Tong University , China . ) This is also verified by various experiments . To this end , the interaction is defined in game theory , as follows . Let x denote the input , and let f ( x ) denote the output of the DNN . For the i-th input variable , we can compute its importance value φ ( i ) , which measures the numerical contribution of the i-th variable to the output f ( x ) . We notice that the importance value of the i-th variable would be different when we mask the j-th variable w.r.t . the case when we do not mask the j-th variable . Thus , the interaction between input variables i and j is measured as the difference $\phi_{w/\,j}(i) - \phi_{w/o\,j}(i)$ . Second , we also discover a strong correlation between interactions of input variables and the overfitting problem of the DNN . Specifically , the over-fitted samples usually exhibit much stronger interactions than ordinary samples . Therefore , we consider that the utility of dropout is to alleviate over-fitting by decreasing the strength of interactions encoded by the DNN . Based on this understanding , we propose an interaction loss to further improve the utility of dropout . The interaction loss directly penalizes the interaction strength , in order to improve the performance of DNNs . The interaction loss exhibits the following two distinct advantages over the dropout operation .
( 1 ) The interaction loss explicitly controls the penalty of the interaction strength , which enables people to trade off between over-fitting and under-fitting . ( 2 ) Unlike dropout which is incompatible with the batch normalization operation ( Li et al. , 2019 ) , the interaction loss can work in harmony with batch normalization . Various experimental results show that the interaction loss can boost the performance of DNNs . Furthermore , we analyze interactions encoded by DNNs from the following three perspectives . ( 1 ) First , we discover the consistency between the sampling process in dropout ( when the dropout rate p = 0.5 ) and the sampling in the computation of the Banzhaf value . The Banzhaf value ( Banzhaf III , 1964 ) is another metric to measure the importance of each input variable in game theory . Unlike the Shapley value , the Banzhaf value is computed under the assumption that each input variable independently participates in the game with the probability 0.5 . We find that the frequent inference patterns in Banzhaf interactions ( Grabisch & Roubens , 1999 ) are also prone to be frequently sampled by dropout , thereby being stably learned . This ensures the DNN to encode smooth Banzhaf interactions . We also prove that the Banzhaf interaction is close to the aforementioned interaction , which also relates to the dropout operation with the interaction used in this paper . ( 2 ) Besides , we find that the interaction loss is better to be applied to low layers than being applied to high layers . ( 3 ) Furthermore , we decompose the overall interaction into interaction components of different orders . We visualize the strongly interacted regions within each input sample . We find out that interaction components of low orders take the main part of interactions and are suppressed by the dropout operation and the interaction loss . Contributions of this paper can be summarized as follows . ( 1 ) We mathematically represent the dependence of feature variables using as the game-theoretic interactions , and prove that dropout can suppress the strength of interactions encoded by a DNN . In comparison , previous studies ( Hinton et al. , 2012 ; Krizhevsky et al. , 2012 ; Srivastava et al. , 2014 ) did not mathematically model the the dependence of feature variables or theoretically proved its relationship with the dropout . ( 2 ) We find that the over-fitted samples usually contain stronger interactions than other samples . ( 3 ) Based on this , we consider the utility of dropout is to alleviate over-fitting by decreasing the interaction . We design a novel loss function to penalize the strength of interactions , which improves the performance of DNNs . ( 4 ) We analyze the properties of interactions encoded by DNNs , and conduct comparative studies to obtain new insights into interactions encoded by DNNs . 2 RELATED WORK . The dropout operation . Dropout is an effective operation to alleviate the over-fitting problem and improve the performance of DNNs ( Hinton et al. , 2012 ) . Several studies have been proposed to explain the inherent mechanism of dropout . According to ( Hinton et al. , 2012 ; Krizhevsky et al. , 2012 ; Srivastava et al. , 2014 ) , dropout could prevent complex co-adaptation between units in intermediate layers , and could encourage each unit to encode useful representations itself . However , these studies only qualitatively analyzed the utility of dropout , instead of providing quantitative results . Wager et al . 
( 2013 ) showed that dropout performed as an adaptive regularization , and established a connection to the algorithm AdaGrad . Konda et al . ( 2016 ) interpreted dropout as a kind of data augmentation in the input space , and Gal & Ghahramani ( 2016 ) proved that dropout was equiva- lent to a Bayesian approximation in the Gaussian process . Gao et al . ( 2019 ) disentangled the dropout operation into the forward dropout and the backward dropout , and improved the performance by setting different dropout rates for the forward dropout and the backward dropout , respectively . Gomez et al . ( 2018 ) proposed the targeted dropout , which only randomly dropped variables with low activation values , and kept variables with high activation values . In comparison , we aim to explain the utility of dropout from the view of game theory . Our interaction metric quantified the interaction between all pairs of variables considering all potential contexts , which were randomly sampled from all variables . Furthermore , we propose a method to improve the utility of dropout . Interaction . Previous studies have explored interactions between input variables . Bien et al . ( 2013 ) developed an algorithm to learn hierarchical pairwise interactions inside an additive model . Sorokina et al . ( 2008 ) detected the statistical interaction using an additive model-based ensemble of regression trees . Murdoch et al . ( 2018 ) ; Singh et al . ( 2018 ) ; Jin et al . ( 2019 ) proposed and extended the contextual decomposition to measure the interaction encoded by DNNs in NLP tasks . Tsang et al . ( 2018 ) measured the pairwise interaction based on the learned weights of the DNN . Tsang et al . ( 2020 ) proposed a method , namely GLIDER , to detect the feature interaction modeled by a recommender system . Janizek et al . ( 2020 ) proposed the Integrated-Hessian value to measure interactions , based on Integral Gradient ( Sundararajan et al. , 2017 ) . Integral Gradient measures the importance value for each input variable w.r.t . the DNN . Given an input vector x ∈ Rn , Integrated Hessians measures the interaction between input variables ( dimensions ) xi and xj as the numerical impact of xj on the importance of xi . To this end , Janizek et al . ( 2020 ) used Integral Gradient to compute Integrated Hessians . Appendix J further compares Integral Hessians with the interaction proposed in this paper . Besides interactions measured from the above views , game theory is also a typical perspective to analyze the interaction . Several studies explored the interaction based on game theory . Lundberg et al . ( 2018 ) defined the interaction between two variables based on the Shapley value for tree ensembles . Because Shapley value was considered as the unique standard method to estimate contributions of input words to the prediction score with solid theoretic foundations ( Weber , 1988 ) , this definition of interaction can be regarded to objectively reflect the collaborative/adversarial effects between variables w.r.t the prediction score . Furthermore , Grabisch & Roubens ( 1999 ) extended this definition to interactions among different numbers of input variables . Grabisch & Roubens ( 1999 ) also proposed the interaction based on the Banzhaf value . In comparison , the target interaction used in this paper is based on the Shapley value ( Shapley , 1953 ) . Since the Shapley is the unique metric that satisfies the linearity property , the dummy property , the symmetry property , and the efficiency property ( Ancona et al. 
, 2019 ) , the interaction based on the Shapley value is usually considered as a more standard metric than the interaction based on the Banzhaf value . In this paper , we aim to explain the utility of dropout using the interaction defined in game theory . We reveal the close relationship between the strength of interactions and the over-fitting of DNNs . We also design an interaction loss to improve the performance of DNNs . 3 GAME-THEORETIC EXPLANATIONS OF DROPOUT . Preliminaries : Shapley values . The Shapley value was initially proposed by Shapley ( 1953 ) in game theory . It is considered as a unique unbiased metric that fairly allocates the numerical contribution of each player to the total reward . Given a set of players $N = \{1, 2, \cdots, n\}$ , $2^N \overset{\text{def}}{=} \{S \mid S \subseteq N\}$ denotes all possible subsets of N . A game $f : 2^N \to \mathbb{R}$ is a function that maps from a subset to a real number . $f(S)$ is the score obtained by the subset $S \subseteq N$ . Thus , $f(N) - f(\emptyset)$ denotes the reward obtained by all players in the game . The Shapley value allocates the overall reward to each player , as its numerical contribution $\phi(i|N)$ , as shown in Equation ( 1 ) . The Shapley value of player i in the game f , $\phi(i|N)$ , is computed as follows : $\sum_{i=1}^{n} \phi(i|N) = f(N) - f(\emptyset)$ , $\phi(i|N) = \sum_{S \subseteq N \setminus \{i\}} P_{\text{Shapley}}(S \mid N \setminus \{i\})\,[\,f(S \cup \{i\}) - f(S)\,]$ ( 1 ) where $P_{\text{Shapley}}(S|M) = \frac{(|M| - |S|)!\,|S|!}{(|M| + 1)!}$ is the likelihood of S being sampled , $S \subseteq M$ . The Shapley value is the unique metric that satisfies the linearity property , the dummy property , the symmetry property , and the efficiency property ( Ancona et al. , 2019 ) . We summarize these properties in Appendix A . Understanding DNNs via game theory . In game theory , some players may form a coalition to compete with other players , and win a reward ( Grabisch & Roubens , 1999 ) . Accordingly , a DNN f can be considered as a game , and the output of the DNN corresponds to the score f ( · ) in Equation ( 1 ) . For example , if the DNN has a scalar output , we can take this output as the score . If the DNN outputs a vector for multi-category classification , we select the classification score corresponding to the true class as the score f ( · ) . Alternatively , f can also be set as the loss value of the DNN . The set of players N corresponds to the set of input variables . We can analyze the interaction and the element-wise contribution at two different levels . ( 1 ) We can consider input variables ( players ) as the input of the entire DNN , e.g . pixels in images and words in sentences . In this case , the game f is considered as the entire DNN . ( 2 ) Alternatively , we can also consider input variables as a set of activation units before the dropout operation . In this case , the game f is considered as consequent modules of the DNN . $S \subseteq N$ in Equation ( 1 ) denotes the context of the input variable i , which consists of a subset of input variables . In order to compute the network output $f(S)$ , we replace variables in $N \setminus S$ with the baseline value ( e.g . mask them ) , and we do not change the variables in S. In particular , when we consider neural activations before dropout as input variables , such activations are usually non-negative after ReLU . Thus , their baseline values are set to 0 . Both ( Ancona et al. , 2019 ) and Appendix G introduce details about the baseline value .
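Evaluating Equation ( 1 ) exactly requires summing over all $2^{n-1}$ contexts , so in practice it is approximated by sampling , e.g . over random permutations of the players ( Castro et al. , 2009 ) . The sketch below illustrates such an estimator ; the scalar game `f` , the variable list `x` , and the `baseline` masking values are assumptions standing in for the DNN wrapper described above .

```python
import random

def shapley_values(f, x, baseline, num_samples=200):
    """Permutation-sampling estimate of the Shapley values phi(i|N) in Eq. (1).
    f(z) returns the scalar game value (e.g. the true-class score), x is the list
    of input variables, and baseline holds the masking value for absent variables
    (0 for post-ReLU activations, as discussed above)."""
    n = len(x)
    phi = [0.0] * n
    for _ in range(num_samples):
        order = list(range(n))
        random.shuffle(order)                       # a random order of adding players
        z = list(baseline)                          # start with every variable masked
        prev = f(z)
        for i in order:                             # reveal players one by one
            z[i] = x[i]
            cur = f(z)
            phi[i] += (cur - prev) / num_samples    # marginal contribution of player i
            prev = cur
    return phi
```

By construction , the marginal contributions along any single permutation telescope to $f(N) - f(\emptyset)$ , so the estimate respects the efficiency constraint in Equation ( 1 ) .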
In this way , $f(\emptyset)$ measures the output score when all input variables are masked , and $f(S) - f(\emptyset)$ measures the entire reward obtained by all input variables in S. Interactions encoded by DNNs . In this section , we introduce how to use the interaction defined in game theory to explain DNNs . Two input variables may interact with each other to contribute to the output of a DNN . Let us suppose input variables i and j have an interaction . In other words , the contribution of i and j when they work jointly is different from the case when they work individually . For example , in the sentence he is a green hand , the word green and the word hand have a strong interaction , because the words green and hand contribute to the person ' s identity jointly , rather than independently . In this case , we can consider these two input variables to form a certain inference pattern as a singleton player $S_{ij} = \{i, j\}$ . Thus , this DNN can be considered to have only ( n−1 ) input variables , $N' = N \setminus \{i, j\} \cup S_{ij}$ , i.e . $S_{ij}$ is supposed to be always absent or present simultaneously as a constituent . In this way , the interaction $I(i, j)$ between input variables i and j is defined by Grabisch & Roubens ( 1999 ) , as the contribution increase of $S_{ij}$ when input variables i and j cooperate with each other w.r.t . the case when i and j work individually , as follows . $I(i, j) \overset{\text{def}}{=} \phi(S_{ij}|N') - [\,\phi(i|N \setminus \{j\}) + \phi(j|N \setminus \{i\})\,] = \sum_{S \subseteq N \setminus \{i, j\}} P_{\text{Shapley}}(S \mid N \setminus \{i, j\})\,\Delta f(S, i, j)$ ( 2 ) where $\Delta f(S, i, j) \overset{\text{def}}{=} f(S \cup \{i, j\}) - f(S \cup \{j\}) - f(S \cup \{i\}) + f(S)$ . $\phi(i|N \setminus \{j\})$ and $\phi(j|N \setminus \{i\})$ correspond to the contribution to the DNN output when i and j work individually . Theoretically , $I(i, j)$ is also equal to the change of the variable i ' s Shapley value when we mask another input variable j w.r.t . the case when we do not mask j . If $I(i, j) > 0$ , input variables i and j cooperate with each other for a higher output value . Whereas , if $I(i, j) < 0$ , i and j have a negative/adversarial effect . The strength of the interaction can be computed as the absolute value of the interaction , i.e . $|I(i, j)|$ . We find that the overall interaction $I(i, j)$ can be decomposed into interaction components with different orders s. We use the multi-order interaction defined in ( Zhang et al. , 2020 ) , as follows . $I(i, j) = \sum_{s=0}^{n-2} \big[\, \frac{I^{(s)}(i, j)}{n - 1} \,\big]$ , $I^{(s)}(i, j) \overset{\text{def}}{=} \mathbb{E}_{S \subseteq N \setminus \{i, j\}, |S| = s}[\,\Delta f(S, i, j)\,]$ ( 3 ) where s denotes the size of the context S for the interaction . We use $I^{(s)}(i, j)$ to represent the s-order interaction between input variables i and j. $I^{(s)}(i, j)$ reflects the average interaction between input variables i and j among all contexts S with s input variables . For example , when s is small , $I^{(s)}(i, j)$ measures the interaction relying on inference patterns consisting of very few input variables , i.e . the interaction depends on a small context . When s is large , $I^{(s)}(i, j)$ corresponds to the interaction relying on inference patterns consisting of a large number of input variables , i.e . the interaction depends on the context of a large scale . Visualization of interactions in each sample , which are encoded by the DNN : There exists a specific interaction between each pair of pixels , which boosts the difficulty of visualization .
In order to simplify the visualization , we divide the original image into 16 × 16 grids , and we only visualize the strength of interactions between each grid g and its neighboring grids g′ as $\text{Color}(g) = \mathbb{E}_{g' \in \text{neighbor}(g)}[\,|I(g, g')|\,]$ . Figure 1 visualizes the interaction strength within images in the CelebA dataset ( Liu et al. , 2015 ) , normalized to the range [ 0 , 1 ] . Grids on the face usually contain more significant interactions with neighboring grids than grids in the background . Proof of the relationship between dropout and the interaction . In this section , we aim to mathematically prove that dropout is an effective method to suppress the interaction strength encoded by DNNs . Given the context S , let us consider its subset T ⊆ S , which forms a coalition to represent a specific inference pattern T ∪ { i , j } . Note that for dropout , the context refers to activation units in the intermediate-layer feature without semantic meanings . Nevertheless , we just consider i , j as pixels as a toy example to illustrate the basic idea , in order to simplify the introduction . For example , let S represent the face , and let T ∪ { i , j } represent pixels of an eye in the face . Let $R_T(i, j)$ quantify the marginal reward obtained from the inference pattern of an eye . All interaction effects from smaller coalitions $T' \subsetneq T$ are removed from $R_T(i, j)$ . According to the above example , let $T' \subsetneq T$ correspond to the pupil inside the eye . Then , $R_{T'}(i, j)$ measures the marginal reward benefited from the existence of the pupil T ′ ∪ { i , j } , while $R_T(i, j)$ represents the marginal benefit from the existence of the entire eye , in which the reward from the pupil has been removed . I.e . the inference pattern T ∪ { i , j } can be exclusively triggered by the co-occurrence of all pixels in the eye , but can not be triggered by a subset of pixels in the pupil T ′ ∪ { i , j } . The benefit from the pupil pattern has been removed from $R_T(i, j)$ . Thus , the s-order interaction can be decomposed into components w.r.t . all inference patterns T ∪ { i , j } , T ⊆ S. $I^{(s)}(i, j) = \mathbb{E}_{S \subseteq N \setminus \{i, j\}, |S| = s}\big[\,\sum_{T \subseteq S} R_T(i, j)\,\big] = \sum_{0 \le q \le s} \binom{s}{q} J^{(q)}(i, j) = \sum_{0 \le q \le s} \Gamma^{(q)}(i, j|s)$ ( 4 ) where $J^{(q)}(i, j) \overset{\text{def}}{=} \mathbb{E}_{T \subseteq N \setminus \{i, j\}, |T| = q}[\,R_T(i, j)\,]$ denotes the average interaction between i and j given all potential inference patterns T ∪ { i , j } with a fixed inference pattern size $|T| = q$ ; $\Gamma^{(q)}(i, j|s) \overset{\text{def}}{=} \binom{s}{q} J^{(q)}(i, j)$ . The computation of $R_T(i, j)$ and the proof of Equation ( 4 ) are provided in Appendices D and B , respectively . However , when input variables in N are randomly removed by the dropout operation , the computation of $I^{(s)}_{\text{dropout}}(i, j)$ only involves a subset of inference patterns consisting of variables that are not dropped . Let the dropout rate be ( 1 − p ) , p ∈ [ 0 , 1 ] , and let S′ ⊆ S denote the input variables that remain in the context S after the dropout operation . Let us consider cases when r activation units remain after the dropout operation , i.e . |S′| = r. Then , the average interaction $I^{(s)}_{\text{dropout}}(i, j)$ in these cases can be computed as follows .
$I^{(s)}_{\text{dropout}}(i, j) = \mathbb{E}_{S \subseteq N \setminus \{i, j\}, |S| = s}\big[\,\mathbb{E}_{S' \subseteq S, |S'| = r}\big(\sum_{T \subseteq S'} R_T(i, j)\big)\,\big] = \sum_{0 \le q \le r} \Gamma^{(q)}(i, j|r)$ ( 5 ) The interaction only comprises the marginal reward from the inference patterns consisting of at most $r \sim B(s, p)$ variables , where $B(s, p)$ is the binomial distribution with the sample number s and the sample rate p. Since $r = |S'| \le s$ , we have $1 \ge \frac{\Gamma^{(1)}(i, j|r)}{\Gamma^{(1)}(i, j|s)} \ge \cdots \ge \frac{\Gamma^{(r)}(i, j|r)}{\Gamma^{(r)}(i, j|s)} \ge 0$ , $\frac{I^{(s)}_{\text{dropout}}(i, j)}{I^{(s)}(i, j)} = \frac{\sum_{0 \le q \le r} \Gamma^{(q)}(i, j|r)}{\sum_{0 \le q \le s} \Gamma^{(q)}(i, j|s)} \le 1$ . ( 6 ) We assume that most $\Gamma^{(q)}(i, j|s)$ , $0 \le q \le s$ share the same sign . In this way , we can get Equation ( 6 , right ) based on the law of large numbers , which shows that the number of inference patterns usually significantly decreases when we use dropout to remove $s - r$ , $r \sim B(s, p)$ , activation units . Please see Appendix C for the proof . Experimental verification 1 : Inference patterns with more activation units are less likely to be sampled when we use dropout , thereby being more vulnerable to the dropout operation , according to Equation ( 6 , left ) . Inspired by this , we conducted experiments to explore the following two questions : ( 1 ) which component $\{I^{(s)}(i, j)\}$ ( s = 0 , ... , n−2 ) took the main part of the overall interaction strength among all interaction components with different orders ; ( 2 ) which interaction component was mainly penalized by dropout . To this end , the strength of the s-order interaction component was averaged over images , i.e . $I^{(s)} = \mathbb{E}_{\text{image}}[\,\mathbb{E}_{(i, j) \in \text{image}}(|I^{(s)}(i, j)|)\,]$ . Specifically , when we analyzed the strength of interaction components , we selected the following orders , s = 0.1n , 0.3n , ... , 0.9n . Note that we randomly sampled the value of s from [ 0.0 , 0.2n ] for each context S to approximate the interaction component with the order s = 0.1n . We also used a similar approximation for s = 0.3n , ... , 0.9n . Figure 2 shows curves of different interaction components1 within VGG-11/19 ( Simonyan & Zisserman , 2015 ) learned on the CIFAR-10 ( Krizhevsky & Hinton , 2009 ) dataset with and without the dropout operation . We found that the interaction components with low orders took the main part of the interaction . Experimental verification 2 : We also conducted experiments to illustrate how dropout suppressed the interaction modeled by DNNs , which was a verification of Equation ( 6 , right ) . In experiments , we trained AlexNet ( Krizhevsky et al. , 2012 ) , and VGG-11/16/19 on CIFAR-10 with and without the dropout . Figure 3 ( left ) compares the strength of interactions1 encoded by DNNs , which were learned with or without the dropout operation . When we learned DNNs with dropout , we set the dropout rate as 0.5 . Please see Appendix H.1 for experiments on different dropout rates . We averaged the strength of interactions over images , i.e . $I = \mathbb{E}_{\text{image}}[\,\mathbb{E}_{(i, j) \in \text{image}}(|I(i, j)|)\,]$ , where $I(i, j)$ was obtained according to Equation ( 2 ) . Note that accurately computing the interaction of two input variables was an NP-hard problem . Thus , we applied a sampling-based method ( Castro et al. , 2009 ) to approximate the strength of interactions . Furthermore , we conducted an experiment to explore the accuracy of the interactions approximated via the sampling-based method . Please see Appendix K for details . Castro et al .
( 2009 ) proposed a method to approximate the Shapley value , which can be extended to the approximation of the interaction . We found that dropout could effectively suppress the strength of the interaction , which verified the above proof . Property : The sampling process in dropout is the same as the computation in the Banzhaf value . In this section , we aim to show that the sampling process in dropout ( when the dropout rate is 0.5 ) is similar to the sampling in the computation of the Banzhaf value . Just like the Shapley value , the Banzhaf value ( Banzhaf III , 1964 ) is another typical metric to measure the importance of each input variable in game theory . Unlike the Shapley value , the Banzhaf value is computed under the assumption that each input variable independently participates in the game with the probability 0.5 . The Banzhaf value is computed as $\psi(i|N) = \sum_{S \subseteq N \setminus \{i\}} P_{\text{Banzhaf}}(S|N \setminus \{i\})\,[\,f(S \cup \{i\}) - f(S)\,]$ , where $P_{\text{Banzhaf}}(S|N \setminus \{i\}) = 0.5^{\,n-1}$ is the likelihood of S being sampled . The form of the Banzhaf value is similar to that of the Shapley value in Equation ( 1 ) , but the sampling weight of the Banzhaf value $P_{\text{Banzhaf}}(S|N \setminus \{i\})$ is different from $P_{\text{Shapley}}(S|N \setminus \{i\})$ of the Shapley value . ( 1For fair comparison , we normalize the value of interaction using the range of output scores of DNNs . Please see Appendix E for more details . ) For dropout with the dropout rate 0.5 , let n be the number of input variables , and let S be the units not dropped in $N \setminus \{i\}$ . Then , the likelihood of S not being dropped is given as $P_{\text{dropout}}(S|N \setminus \{i\}) = 0.5^{|S|}\,0.5^{\,n-|S|-1} = P_{\text{Banzhaf}}(S|N \setminus \{i\})$ . In this way , dropout usually generates activation units S following $P_{\text{Banzhaf}}(S|N \setminus \{i\})$ . Therefore , the frequent inference patterns in the computation of the Banzhaf value are also frequently generated by dropout , thereby being reliably learned by the DNN . This ensures that $f(S \cup \{i\}) - f(S)$ in the computation of the Banzhaf value can be modeled smoothly , without many outlier values . Thus , the dropout can be considered as a smooth factor in terms of the Banzhaf value . Grabisch & Roubens ( 1999 ) defined the interaction based on the Banzhaf value as $I_{\text{Banzhaf}}(i, j) = \sum_{S \subseteq N \setminus \{i, j\}} P_{\text{Banzhaf}}(S \mid N \setminus \{i, j\})\,\Delta f(S, i, j)$ , just like Equation ( 2 ) . Thus , dropout can also be considered as a smooth factor in the computation of the Banzhaf interaction . Figure 3 ( right ) shows that the target interaction used in this study is closely related to the Banzhaf interaction , which potentially connects the target interaction based on the Shapley value to the dropout operation .
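To make the sampling-based approximation of the interaction concrete , the sketch below estimates $I(i, j)$ of Equation ( 2 ) by sampling contexts S ; switching the context sampler to independently keeping each remaining variable with probability 0.5 gives the Banzhaf interaction , which is exactly the distribution induced by dropout with rate 0.5 . The helper names and the masking convention are our assumptions , not the authors ' code .

```python
import random

def delta_f(f, x, baseline, S, i, j):
    """Delta f(S, i, j) = f(S∪{i,j}) − f(S∪{i}) − f(S∪{j}) + f(S)."""
    def value(keep):
        z = [x[k] if k in keep else baseline[k] for k in range(len(x))]
        return f(z)
    S = set(S)
    return value(S | {i, j}) - value(S | {i}) - value(S | {j}) + value(S)

def interaction(f, x, baseline, i, j, num_samples=200, banzhaf=False):
    """Sampling-based estimate of the pairwise interaction I(i, j)."""
    others = [k for k in range(len(x)) if k not in (i, j)]
    total = 0.0
    for _ in range(num_samples):
        if banzhaf:
            # Banzhaf / dropout-style context: keep each variable with prob. 0.5,
            # i.e. P(S) = 0.5 ** len(others), the distribution dropout induces.
            S = [k for k in others if random.random() < 0.5]
        else:
            # Shapley-style context: draw the context size uniformly first, then
            # a uniform subset of that size, matching P_Shapley in Eq. (2).
            s = random.randint(0, len(others))
            S = random.sample(others, s)
        total += delta_f(f, x, baseline, S, i, j)
    return total / num_samples
```

Fixing the context size to a constant s instead of drawing it at random turns the same estimator into the order-s component $I^{(s)}(i, j)$ of Equation ( 3 ) .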
This paper analyzes the effect of dropout on interaction between units in a neural network. The strength of the interaction is measured using a metric that is used in game theory to quantify interaction between players in a co-operative game. The paper shows that dropout reduces high-order interaction (as measured by this metric), and that reduction in interaction is correlated with better generalization. The paper introduces a new regularizer that explicitly minimizes the metric and claims that using this regularizer instead of dropout has some advantages.
SP:5f5615d414a232aeaec93033053471ce6bb09fc4
Interpreting and Boosting Dropout from a Game-Theoretic View
1 INTRODUCTION . Deep neural networks ( DNNs ) have exhibited significant success in various tasks , but the overfitting problem is still a considerable challenge for deep learning . Dropout is usually considered as an effective operation to alleviate the over-fitting problem of DNNs . Hinton et al . ( 2012 ) ; Srivastava et al . ( 2014 ) thought that dropout could encourage each unit in an intermediate-layer feature to model useful information without much dependence on other units . Konda et al . ( 2016 ) considered dropout as a specific method of data augmentation . Gal & Ghahramani ( 2016 ) proved that dropout was equivalent to the Bayesian approximation in a Gaussian process . Our research group led by Dr. Quanshi Zhang has proposed game-theoretic interactions , including interactions of different orders ( Zhang et al. , 2020 ) and multivariate interactions ( Zhang et al. , 2021b ) . As a basic metric , the interaction can be used to explain signal-processing behaviors in trained DNNs from different perspectives . For example , we have built up a tree structure to explain hierarchical interactions between words encoded in NLP models ( Zhang et al. , 2021a ) . We also prove a close relationship between the interaction and the adversarial robustness ( Ren et al. , 2021 ) and transferability ( Wang et al. , 2020 ) . Many previous methods of boosting adversarial transferability can be explained as the reduction of interactions , and the interaction can also explain the utility of the adversarial training ( Ren et al. , 2021 ) . As an extension of the system of game-theoretic interactions , in this paper , we aim to explain , model , and improve the utility of dropout from the following perspectives . First , we prove that the dropout ∗Correspondence . This study is conducted under the supervision of Dr. Quanshi Zhang . zqs1022 @ sjtu.edu.cn . Quanshi Zhang is with the John Hopcroft Center and the MoE Key Lab of Artificial Intelligence , AI Institute , at the Shanghai Jiao Tong University , China . operation suppresses interactions between input units encoded by DNNs . This is also verified by various experiments . To this end , the interaction is defined in game theory , as follows . Let x denote the input , and let f ( x ) denote the output of the DNN . For the i-th input variable , we can compute its importance value φ ( i ) , which measures the numerical contribution of the i-th variable to the output f ( x ) . We notice that the importance value of the i-th variable would be different when we mask the j-th variable w.r.t . the case when we do not mask the j-th variable . Thus , the interaction between input variables i and j is measured as the difference φw/ j ( i ) − φw/o j ( i ) . Second , we also discover a strong correlation between interactions of input variables and the overfitting problem of the DNN . Specifically , the over-fitted samples usually exhibit much stronger interactions than ordinary samples . Therefore , we consider that the utility of dropout is to alleviate the significance of over-fitting by decreasing the strength of interactions encoded by the DNN . Based on this understanding , we propose an interaction loss to further improve the utility of dropout . The interaction loss directly penalizes the interaction strength , in order to improve the performance of DNNs . The interaction loss exhibits the following two distinct advantages over the dropout operation . 
(1) The interaction loss explicitly controls the penalty on the interaction strength, which enables people to trade off between over-fitting and under-fitting. (2) Unlike dropout, which is incompatible with the batch normalization operation (Li et al., 2019), the interaction loss can work in harmony with batch normalization. Various experimental results show that the interaction loss can boost the performance of DNNs. Furthermore, we analyze interactions encoded by DNNs from the following three perspectives. (1) First, we discover the consistency between the sampling process in dropout (when the dropout rate p = 0.5) and the sampling in the computation of the Banzhaf value. The Banzhaf value (Banzhaf III, 1964) is another metric to measure the importance of each input variable in game theory. Unlike the Shapley value, the Banzhaf value is computed under the assumption that each input variable independently participates in the game with probability 0.5. We find that the frequent inference patterns in Banzhaf interactions (Grabisch & Roubens, 1999) are also prone to be frequently sampled by dropout, thereby being stably learned. This ensures that the DNN encodes smooth Banzhaf interactions. We also prove that the Banzhaf interaction is close to the aforementioned interaction, which further connects the dropout operation to the interaction used in this paper. (2) Besides, we find that the interaction loss works better when applied to low layers than when applied to high layers. (3) Furthermore, we decompose the overall interaction into interaction components of different orders. We visualize the strongly interacting regions within each input sample. We find that interaction components of low orders take the main part of interactions and are suppressed by the dropout operation and the interaction loss. Contributions of this paper can be summarized as follows. (1) We mathematically represent the dependence of feature variables using game-theoretic interactions, and prove that dropout can suppress the strength of interactions encoded by a DNN. In comparison, previous studies (Hinton et al., 2012; Krizhevsky et al., 2012; Srivastava et al., 2014) did not mathematically model the dependence of feature variables or theoretically prove its relationship with dropout. (2) We find that over-fitted samples usually contain stronger interactions than other samples. (3) Based on this, we consider that the utility of dropout is to alleviate over-fitting by decreasing the interaction. We design a novel loss function to penalize the strength of interactions, which improves the performance of DNNs. (4) We analyze the properties of interactions encoded by DNNs, and conduct comparative studies to obtain new insights into interactions encoded by DNNs. 2 RELATED WORK . The dropout operation. Dropout is an effective operation to alleviate the over-fitting problem and improve the performance of DNNs (Hinton et al., 2012). Several studies have attempted to explain the inherent mechanism of dropout. According to (Hinton et al., 2012; Krizhevsky et al., 2012; Srivastava et al., 2014), dropout could prevent complex co-adaptation between units in intermediate layers, and could encourage each unit to encode useful representations itself. However, these studies only qualitatively analyzed the utility of dropout, instead of providing quantitative results. Wager et al.
( 2013 ) showed that dropout performed as an adaptive regularization , and established a connection to the algorithm AdaGrad . Konda et al . ( 2016 ) interpreted dropout as a kind of data augmentation in the input space , and Gal & Ghahramani ( 2016 ) proved that dropout was equiva- lent to a Bayesian approximation in the Gaussian process . Gao et al . ( 2019 ) disentangled the dropout operation into the forward dropout and the backward dropout , and improved the performance by setting different dropout rates for the forward dropout and the backward dropout , respectively . Gomez et al . ( 2018 ) proposed the targeted dropout , which only randomly dropped variables with low activation values , and kept variables with high activation values . In comparison , we aim to explain the utility of dropout from the view of game theory . Our interaction metric quantified the interaction between all pairs of variables considering all potential contexts , which were randomly sampled from all variables . Furthermore , we propose a method to improve the utility of dropout . Interaction . Previous studies have explored interactions between input variables . Bien et al . ( 2013 ) developed an algorithm to learn hierarchical pairwise interactions inside an additive model . Sorokina et al . ( 2008 ) detected the statistical interaction using an additive model-based ensemble of regression trees . Murdoch et al . ( 2018 ) ; Singh et al . ( 2018 ) ; Jin et al . ( 2019 ) proposed and extended the contextual decomposition to measure the interaction encoded by DNNs in NLP tasks . Tsang et al . ( 2018 ) measured the pairwise interaction based on the learned weights of the DNN . Tsang et al . ( 2020 ) proposed a method , namely GLIDER , to detect the feature interaction modeled by a recommender system . Janizek et al . ( 2020 ) proposed the Integrated-Hessian value to measure interactions , based on Integral Gradient ( Sundararajan et al. , 2017 ) . Integral Gradient measures the importance value for each input variable w.r.t . the DNN . Given an input vector x ∈ Rn , Integrated Hessians measures the interaction between input variables ( dimensions ) xi and xj as the numerical impact of xj on the importance of xi . To this end , Janizek et al . ( 2020 ) used Integral Gradient to compute Integrated Hessians . Appendix J further compares Integral Hessians with the interaction proposed in this paper . Besides interactions measured from the above views , game theory is also a typical perspective to analyze the interaction . Several studies explored the interaction based on game theory . Lundberg et al . ( 2018 ) defined the interaction between two variables based on the Shapley value for tree ensembles . Because Shapley value was considered as the unique standard method to estimate contributions of input words to the prediction score with solid theoretic foundations ( Weber , 1988 ) , this definition of interaction can be regarded to objectively reflect the collaborative/adversarial effects between variables w.r.t the prediction score . Furthermore , Grabisch & Roubens ( 1999 ) extended this definition to interactions among different numbers of input variables . Grabisch & Roubens ( 1999 ) also proposed the interaction based on the Banzhaf value . In comparison , the target interaction used in this paper is based on the Shapley value ( Shapley , 1953 ) . Since the Shapley is the unique metric that satisfies the linearity property , the dummy property , the symmetry property , and the efficiency property ( Ancona et al. 
, 2019), the interaction based on the Shapley value is usually considered a more standard metric than the interaction based on the Banzhaf value. In this paper, we aim to explain the utility of dropout using the interaction defined in game theory. We reveal the close relationship between the strength of interactions and the over-fitting of DNNs. We also design an interaction loss to improve the performance of DNNs. 3 GAME-THEORETIC EXPLANATIONS OF DROPOUT . Preliminaries: Shapley values. The Shapley value was initially proposed by Shapley (1953) in game theory. It is considered a unique unbiased metric that fairly allocates the numerical contribution of each player to the total reward. Given a set of players $N = \{1, 2, \cdots, n\}$, $2^N \stackrel{\text{def}}{=} \{S \mid S \subseteq N\}$ denotes all possible subsets of $N$. A game $f: 2^N \to \mathbb{R}$ is a function that maps a subset to a real number. $f(S)$ is the score obtained by the subset $S \subseteq N$. Thus, $f(N) - f(\emptyset)$ denotes the reward obtained by all players in the game. The Shapley value allocates the overall reward to each player as its numerical contribution $\phi(i|N)$, as shown in Equation (1). The Shapley value of player $i$ in the game $f$, $\phi(i|N)$, is computed as follows:
$$\sum_{i=1}^{n} \phi(i|N) = f(N) - f(\emptyset), \qquad \phi(i|N) = \sum_{S \subseteq N \setminus \{i\}} P_{\text{Shapley}}(S \mid N \setminus \{i\}) \, \big[ f(S \cup \{i\}) - f(S) \big] \quad (1)$$
where $P_{\text{Shapley}}(S|M) = \frac{(|M| - |S|)! \, |S|!}{(|M| + 1)!}$ is the likelihood of $S$ being sampled, $S \subseteq M$. The Shapley value is the unique metric that satisfies the linearity property, the dummy property, the symmetry property, and the efficiency property (Ancona et al., 2019). We summarize these properties in Appendix A. Understanding DNNs via game theory. In game theory, some players may form a coalition to compete with other players, and win a reward (Grabisch & Roubens, 1999). Accordingly, a DNN $f$ can be considered as a game, and the output of the DNN corresponds to the score $f(\cdot)$ in Equation (1). For example, if the DNN has a scalar output, we can take this output as the score. If the DNN outputs a vector for multi-category classification, we select the classification score corresponding to the true class as the score $f(\cdot)$. Alternatively, $f$ can also be set as the loss value of the DNN. The set of players $N$ corresponds to the set of input variables. We can analyze the interaction and the element-wise contribution at two different levels. (1) We can consider input variables (players) as the input of the entire DNN, e.g. pixels in images and words in sentences. In this case, the game $f$ is considered as the entire DNN. (2) Alternatively, we can also consider input variables as a set of activation units before the dropout operation. In this case, the game $f$ is considered as the consequent modules of the DNN. $S \subseteq N$ in Equation (1) denotes the context of the input variable $i$, which consists of a subset of input variables. In order to compute the network output $f(S)$, we replace the variables in $N \setminus S$ with the baseline value (e.g. mask them), and we do not change the variables in $S$. In particular, when we consider neural activations before dropout as input variables, such activations are usually non-negative after ReLU. Thus, their baseline values are set to 0. Both (Ancona et al., 2019) and Appendix G introduce details about the baseline value.
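As a concrete illustration of Equation (1), the following sketch estimates Shapley values by Monte Carlo sampling of player orderings, masking absent variables with the baseline value described above. The names model_fn, x and baseline are placeholders introduced here for illustration; this is a generic sampling estimator rather than the authors' implementation.

```python
import numpy as np

def shapley_values(model_fn, x, baseline, num_samples=200, rng=None):
    """Monte Carlo estimate of the Shapley values phi(i|N) in Equation (1).

    model_fn : callable mapping a masked copy of x to a scalar score f(S).
    x        : 1-D array of n input variables (e.g. pixels or pre-dropout activations).
    baseline : value used to mask the variables outside the context S.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = x.shape[0]
    phi = np.zeros(n)
    for _ in range(num_samples):
        order = rng.permutation(n)          # a random ordering of the players
        masked = np.full_like(x, baseline)  # start from the empty context
        prev_score = model_fn(masked)       # f(empty set)
        for i in order:
            masked[i] = x[i]                # add player i to the context
            score = model_fn(masked)
            phi[i] += score - prev_score    # marginal contribution of i
            prev_score = score
    return phi / num_samples
```

By the efficiency property, the estimated values should sum approximately to f(N) − f(∅), which gives a simple sanity check for the number of samples used.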
In this way , f ( ∅ ) measures the output score when all input variables are masked , and f ( S ) −f ( ∅ ) measures the entire reward obtained by all input variables in S. Interactions encoded by DNNs . In this section , we introduce how to use the interaction defined in game theory to explain DNNs . Two input variables may interact with each other to contribute to the output of a DNN . Let us suppose input variables i and j have an interaction . In other words , the contribution of i and j when they work jointly is different with the case when they work individually . For example , in the sentence he is a green hand , the word green and the word hand have a strong interaction , because the words green and hand contribute to the person ’ s identity jointly , rather than independently . In this case , we can consider these two input variables to form a certain inference pattern as a singleton player Sij = { i , j } . Thus , this DNN can be considered to have only ( n−1 ) input variables , N ′ = N \ { i , j } ∪Sij , i.e . Sij is supposed to be always absent or present simultaneously as a constituent . In this way , the interaction I ( i , j ) between input variables i and j is defined by Grabisch & Roubens ( 1999 ) , as the contribution increase of Sij when input variables i and j cooperate with each other w.r.t . the case when i and j work individually , as follows . I ( i , j ) def = φ ( Sij |N ′ ) − [ φ ( i|N \ { j } ) + φ ( j|N \ { i } ) ] = ∑ S⊆N\ { i , j } PShapley ( S ∣∣N \ { i , j } ) ∆f ( S , i , j ) ( 2 ) where ∆f ( S , i , j ) def= f ( S ∪ { i , j } ) − f ( S ∪ { j } ) − f ( S ∪ { i } ) + f ( S ) . φ ( i|N \ { j } ) and φ ( j|N \ { i } ) correspond to the contribution to the DNN output when i and j work individually . Theoretically , I ( i , j ) is also equal to the change of the variable i ’ s Shapley value when we mask another input variable j w.r.t . the case when we do not mask j . If I ( i , j ) > 0 , input variables i and j cooperate with each other for a higher output value . Whereas , if I ( i , j ) < 0 , i and j have a negative/adversarial effect . The strength of the interaction can be computed as the absolute value of the interaction , i.e . |I ( i , j ) | . We find that the overall interaction I ( i , j ) can be decomposed into interaction components with different orders s. We use the multi-order interaction defined in ( Zhang et al. , 2020 ) , as follows . I ( i , j ) = n−2∑ s=0 [ I ( s ) ( i , j ) n− 1 ] , I ( s ) ( i , j ) def= ES⊆N\ { i , j } , |S|=s [ ∆f ( S , i , j ) ] ( 3 ) where s denote the size of the context S for the interaction . We use I ( s ) ( i , j ) to represent the s-order interaction between input variables i and j. I ( s ) ( i , j ) reflects the average interaction between input variables i and j among all contexts S with s input variables . For example , when s is small , I ( s ) ( i , j ) measures the interaction relying on inference patterns consisting of very few input variables , i.e . the interaction depends on a small context . When s is large , I ( s ) ( i , j ) corresponds to the interaction relying on inference patterns consisting of a large number of input variables , i.e . the interaction depends on the context of a large scale . Visualization of interactions in each sample , which are encoded by the DNN : There exists a specific interaction between each pair of pixels , which boosts the difficulty of visualization . 
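Before turning to the visualization, the following sketch shows how the pairwise interaction of Equation (2) and its order-s components in Equation (3) could be approximated by sampling contexts S. Again, model_fn, x and baseline are hypothetical names; the normalisation I(i,j) = Σ_s I^(s)(i,j)/(n−1) follows the multi-order decomposition cited from Zhang et al. (2020).

```python
import numpy as np

def delta_f(model_fn, x, baseline, S, i, j):
    """Delta f(S, i, j) = f(S + {i,j}) - f(S + {i}) - f(S + {j}) + f(S)."""
    def score(keep):
        masked = np.full_like(x, baseline)
        masked[list(keep)] = x[list(keep)]
        return model_fn(masked)
    S = list(S)
    return score(S + [i, j]) - score(S + [i]) - score(S + [j]) + score(S)

def order_s_interaction(model_fn, x, baseline, i, j, s, num_samples=100, rng=None):
    """Monte Carlo estimate of I^(s)(i, j) from Equation (3): the average of
    Delta f(S, i, j) over contexts S of size s drawn from N \\ {i, j}."""
    rng = np.random.default_rng() if rng is None else rng
    others = [k for k in range(x.shape[0]) if k not in (i, j)]
    vals = [delta_f(model_fn, x, baseline,
                    rng.choice(others, size=s, replace=False), i, j)
            for _ in range(num_samples)]
    return float(np.mean(vals))

def pairwise_interaction(model_fn, x, baseline, i, j, samples_per_order=20):
    """I(i, j) recovered by averaging the order components, per Equation (3)."""
    n = x.shape[0]
    comps = [order_s_interaction(model_fn, x, baseline, i, j, s, samples_per_order)
             for s in range(n - 1)]
    return sum(comps) / (n - 1)
```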
In order to simplify the visualization, we divide the original image into 16 × 16 grids, and we only visualize the strength of interactions between each grid $g$ and its neighboring grids $g'$ as $\text{Color}(g) = \mathbb{E}_{g' \in \text{neighbor}(g)} [\, |I(g, g')| \,]$. Figure 1 visualizes the interaction strength within images from the CelebA dataset (Liu et al., 2015), normalized to the range [0, 1]. Grids on the face usually contain more significant interactions with neighboring grids than grids in the background. Proof of the relationship between dropout and the interaction. In this section, we aim to mathematically prove that dropout is an effective method to suppress the interaction strength encoded by DNNs. Given the context $S$, let us consider its subset $T \subseteq S$, which forms a coalition to represent a specific inference pattern $T \cup \{i, j\}$. Note that for dropout, the context refers to activation units in the intermediate-layer feature without semantic meanings. Nevertheless, we just consider $i, j$ as pixels in a toy example to illustrate the basic idea, in order to simplify the introduction. For example, let $S$ represent the face, and let $T \cup \{i, j\}$ represent pixels of an eye in the face. Let $R_T(i, j)$ quantify the marginal reward obtained from the inference pattern of an eye. All interaction effects from smaller coalitions $T' \subsetneq T$ are removed from $R_T(i, j)$. According to the above example, let $T' \subsetneq T$ correspond to the pupil inside the eye. Then, $R_{T'}(i, j)$ measures the marginal reward benefited from the existence of the pupil $T' \cup \{i, j\}$, while $R_T(i, j)$ represents the marginal benefit from the existence of the entire eye, in which the reward from the pupil has been removed. That is, the inference pattern $T \cup \{i, j\}$ can be exclusively triggered by the co-occurrence of all pixels in the eye, but cannot be triggered by a subset of pixels in the pupil $T' \cup \{i, j\}$. The benefit from the pupil pattern has been removed from $R_T(i, j)$. Thus, the $s$-order interaction can be decomposed into components w.r.t. all inference patterns $T \cup \{i, j\}$, $T \subseteq S$:
$$I^{(s)}(i, j) = \mathbb{E}_{S \subseteq N \setminus \{i, j\}, |S| = s} \Big[ \sum_{T \subseteq S} R_T(i, j) \Big] = \sum_{0 \le q \le s} \binom{s}{q} J^{(q)}(i, j) = \sum_{0 \le q \le s} \Gamma^{(q)}(i, j \mid s) \quad (4)$$
where $J^{(q)}(i, j) \stackrel{\text{def}}{=} \mathbb{E}_{T \subseteq N \setminus \{i, j\}, |T| = q} [ R_T(i, j) ]$ denotes the average interaction between $i$ and $j$ given all potential inference patterns $T \cup \{i, j\}$ with a fixed inference pattern size $|T| = q$; $\Gamma^{(q)}(i, j \mid s) \stackrel{\text{def}}{=} \binom{s}{q} J^{(q)}(i, j)$. The computation of $R_T(i, j)$ and the proof of Equation (4) are provided in Appendices D and B, respectively. However, when input variables in $N$ are randomly removed by the dropout operation, the computation of $I^{(s)}_{\text{dropout}}(i, j)$ only involves the subset of inference patterns consisting of variables that are not dropped. Let the dropout rate be $(1 - p)$, $p \in [0, 1]$, and let $S' \subseteq S$ denote the input variables that remain in the context $S$ after the dropout operation. Let us consider the cases when $r$ activation units remain after the dropout operation, i.e. $|S'| = r$. Then, the average interaction $I^{(s)}_{\text{dropout}}(i, j)$ in these cases can be computed as follows.
$$I^{(s)}_{\text{dropout}}(i, j) = \mathbb{E}_{S \subseteq N \setminus \{i, j\}, |S| = s} \Big[ \mathbb{E}_{S' \subseteq S, |S'| = r} \Big( \sum_{T \subseteq S'} R_T(i, j) \Big) \Big] = \sum_{0 \le q \le r} \Gamma^{(q)}(i, j \mid r) \quad (5)$$
The interaction only comprises the marginal reward from the inference patterns consisting of at most $r \sim B(s, p)$ variables, where $B(s, p)$ is the binomial distribution with sample number $s$ and sample rate $p$. Since $r = |S'| \le s$, we have
$$1 \ge \frac{\Gamma^{(1)}(i, j \mid r)}{\Gamma^{(1)}(i, j \mid s)} \ge \cdots \ge \frac{\Gamma^{(r)}(i, j \mid r)}{\Gamma^{(r)}(i, j \mid s)} \ge 0, \qquad \frac{I^{(s)}_{\text{dropout}}(i, j)}{I^{(s)}(i, j)} = \frac{\sum_{0 \le q \le r} \Gamma^{(q)}(i, j \mid r)}{\sum_{0 \le q \le s} \Gamma^{(q)}(i, j \mid s)} \le 1. \quad (6)$$
We assume that most $\Gamma^{(q)}(i, j \mid s)$, $0 \le q \le s$, share the same sign. In this way, we can obtain Equation (6, right) based on the law of large numbers, which shows that the number of inference patterns usually decreases significantly when we use dropout to remove $s - r$, $r \sim B(s, p)$, activation units. Please see Appendix C for the proof. Experimental verification 1: Inference patterns with more activation units are less likely to be sampled when we use dropout, thereby being more vulnerable to the dropout operation, according to Equation (6, left). Inspired by this, we conducted experiments to explore the following two questions: (1) which component $\{I^{(s)}(i, j)\}$ ($s = 0, \ldots, n - 2$) took the main part of the overall interaction strength among all interaction components with different orders; (2) which interaction component was mainly penalized by dropout. To this end, the strength of the $s$-order interaction component was averaged over images, i.e. $I^{(s)} = \mathbb{E}_{\text{image}} [ \mathbb{E}_{(i, j) \in \text{image}} ( |I^{(s)}(i, j)| ) ]$. Specifically, when we analyzed the strength of interaction components, we selected the following orders, $s = 0.1n, 0.3n, \ldots, 0.9n$. Note that we randomly sampled the value of $s$ from $[0.0, 0.2n]$ for each context $S$ to approximate the interaction component with the order $s = 0.1n$. We used a similar approximation for $s = 0.3n, \ldots, 0.9n$. Figure 2 shows curves of different interaction components1 within VGG-11/19 (Simonyan & Zisserman, 2015) learned on the CIFAR-10 (Krizhevsky & Hinton, 2009) dataset with and without the dropout operation. We found that the interaction components with low orders took the main part of the interaction. Experimental verification 2: We also conducted experiments to illustrate how dropout suppressed the interaction modeled by DNNs, which was a verification of Equation (6, right). In these experiments, we trained AlexNet (Krizhevsky et al., 2012) and VGG-11/16/19 on CIFAR-10 with and without dropout. Figure 3 (left) compares the strength of interactions1 encoded by DNNs that were learned with or without the dropout operation. When we learned DNNs with dropout, we set the dropout rate to 0.5. Please see Appendix H.1 for experiments on different dropout rates. We averaged the strength of interactions over images, i.e. $I = \mathbb{E}_{\text{image}} [ \mathbb{E}_{(i, j) \in \text{image}} ( |I(i, j)| ) ]$, where $I(i, j)$ was obtained according to Equation (2). Note that accurately computing the interaction of two input variables is an NP-hard problem. Thus, we applied a sampling-based method (Castro et al., 2009) to approximate the strength of interactions. Furthermore, we conducted an experiment to explore the accuracy of the interactions approximated via the sampling-based method. Please see Appendix K for details.
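Returning to Equation (6), the following toy simulation illustrates the attenuation effect numerically under the simplifying assumption, stated above, that the components J^(q) share the same sign and, additionally here, have comparable magnitudes. It only counts candidate inference patterns; it is an illustrative sketch, not the proof in Appendix C.

```python
import numpy as np
from math import comb

def surviving_pattern_ratio(s, p=0.5, num_samples=10000, rng=None):
    """Compare the number of candidate inference patterns T (|T| = q <= r) that
    survive dropout, with r ~ Binomial(s, p), against the number available
    without dropout (q <= s), mirroring the sums in Equations (5) and (6)."""
    rng = np.random.default_rng() if rng is None else rng
    total_without_dropout = sum(comb(s, q) for q in range(s + 1))  # equals 2**s
    ratios = []
    for _ in range(num_samples):
        r = rng.binomial(s, p)
        total_with_dropout = sum(comb(r, q) for q in range(r + 1))  # equals 2**r
        ratios.append(total_with_dropout / total_without_dropout)
    return float(np.mean(ratios))

for s in (8, 16, 32):
    print(s, surviving_pattern_ratio(s))  # the ratio shrinks quickly as s grows
```

Under these assumptions the expected ratio behaves like ((1 + p)/2)^s, so larger contexts, i.e. higher-order components, are suppressed far more strongly, consistent with the trend reported for Figure 2.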
Castro et al. (2009) proposed a method to approximate the Shapley value, which can be extended to the approximation of the interaction. We found that dropout could effectively suppress the strength of the interaction, which verified the above proof. Property: The sampling process in dropout is the same as the sampling in the computation of the Banzhaf value. In this section, we aim to show that the sampling process in dropout (when the dropout rate is 0.5) is similar to the sampling in the computation of the Banzhaf value. Just like the Shapley value, the Banzhaf value (Banzhaf III, 1964) is another typical metric to measure the importance of each input variable in game theory. Unlike the Shapley value, the Banzhaf value is computed under the assumption that each input variable independently participates in the game with probability 0.5. The Banzhaf value is computed as $\psi(i|N) = \sum_{S \subseteq N \setminus \{i\}} P_{\text{Banzhaf}}(S \mid N \setminus \{i\}) [ f(S \cup \{i\}) - f(S) ]$, where $P_{\text{Banzhaf}}(S \mid N \setminus \{i\}) = 0.5^{n-1}$ is the likelihood of $S$ being sampled. The form of the Banzhaf value is similar to that of the Shapley value in Equation (1), but the sampling weight $P_{\text{Banzhaf}}(S \mid N \setminus \{i\})$ of the Banzhaf value is different from $P_{\text{Shapley}}(S \mid N \setminus \{i\})$ of the Shapley value. (1For fair comparison, we normalize the value of interaction using the range of output scores of DNNs. Please see Appendix E for more details.) For dropout with the dropout rate 0.5, let $n$ be the number of input variables, and let $S$ be the units not dropped in $N \setminus \{i\}$. Then, the likelihood of $S$ not being dropped is given as $P_{\text{dropout}}(S \mid N \setminus \{i\}) = 0.5^{|S|} \cdot 0.5^{\,n - |S| - 1} = P_{\text{Banzhaf}}(S \mid N \setminus \{i\})$. In this way, dropout usually generates activation units $S$ following $P_{\text{Banzhaf}}(S \mid N \setminus \{i\})$. Therefore, the frequent inference patterns in the computation of the Banzhaf value are also frequently generated by dropout, thereby being reliably learned by the DNN. This means that $f(S \cup \{i\}) - f(S)$ in the computation of the Banzhaf value can be modeled reliably, without many outlier values. Thus, dropout can be considered a smoothing factor in terms of the Banzhaf value. Grabisch & Roubens (1999) defined the interaction based on the Banzhaf value as $I_{\text{Banzhaf}}(i, j) = \sum_{S \subseteq N \setminus \{i, j\}} P_{\text{Banzhaf}}(S \mid N \setminus \{i, j\}) \, \Delta f(S, i, j)$, just like Equation (2). Thus, dropout can also be considered a smoothing factor in the computation of the Banzhaf interaction. Figure 3 (right) shows that the target interaction used in this study is closely related to the Banzhaf interaction, which potentially connects the target interaction based on the Shapley value to the dropout operation.
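The identity between the dropout sampling distribution (at rate 0.5) and the Banzhaf weight, and its difference from the Shapley weight, can be checked directly from the probabilities stated above. The small script below is a minimal numeric verification; it only restates the formulas in this section.

```python
from math import factorial

def p_dropout(n, s, keep_prob=0.5):
    """Probability that a specific subset S (|S| = s) of the n - 1 units in
    N \\ {i} is exactly the set kept by dropout with keep probability keep_prob."""
    return keep_prob ** s * (1.0 - keep_prob) ** (n - 1 - s)

def p_banzhaf(n, s):
    return 0.5 ** (n - 1)

def p_shapley(n, s):
    # P_Shapley(S|M) = (|M| - |S|)! |S|! / (|M| + 1)!  with M = N \ {i}, |M| = n - 1
    m = n - 1
    return factorial(m - s) * factorial(s) / factorial(m + 1)

n = 10
for s in range(n):
    assert abs(p_dropout(n, s) - p_banzhaf(n, s)) < 1e-12
    print(s, p_banzhaf(n, s), p_shapley(n, s))
# The Banzhaf/dropout weight is constant in |S|, whereas the Shapley weight
# concentrates on very small and very large contexts.
```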
This paper aims to explain dropout from the lens of game theoretic interactions. Let x denote the input of a deep neural net (DNN), intuitively, the interaction between two variables x_i and x_j quantifies how much the presence/absence of the j-th variable affects the contribution of the i-th variable to the output of the DNN. With the above definition in place, the authors show theoretically and empirically that dropout reduces the interactions between input variables of DNNs. As this type of interactions turn out to be strongly correlated with overfitting, the authors suggest that dropout alleviates overfitting by reducing interactions between input variables (or activation units) of DNNs. Based on this understanding of dropout, an alternative regularization technique is proposed, which explicitly penalizes pairwise interactions between variables.
SP:5f5615d414a232aeaec93033053471ce6bb09fc4
Once Quantized for All: Progressively Searching for Quantized Compact Models
Automatic search of Quantized Neural Networks ( QNN ) has attracted a lot of attention . However , the existing quantization-aware Neural Architecture Search ( NAS ) approaches inherit a two-stage search-retrain schema , which is not only time-consuming but also adversely affected by the unreliable ranking of architectures during the search . To avoid the undesirable effect of the search-retrain schema , we present Once Quantized for All ( OQA ) , a novel framework that searches for quantized compact models and deploys their quantized weights at the same time without additional post-process . While supporting a huge architecture search space , our OQA can produce a series of quantized compact models under ultra-low bit-widths ( e.g . 4/3/2 bit ) . A progressive bit inheritance procedure is introduced to support ultra-low bit-width . Our searched model family , OQANets , achieves a new state-of-the-art ( SOTA ) on quantized compact models compared with various quantization methods and bit-widths . In particular , OQA2bit-L achieves 64.0 % ImageNet Top-1 accuracy , outperforming its 2 bit counterpart EfficientNet-B0 @ QKD by a large margin of 14 % using 30 % less computation cost . 1 INTRODUCTION . Compact architecture design ( Sandler et al. , 2018 ; Ma et al. , 2018 ) and network quantization methods ( Choi et al. , 2018 ; Kim et al. , 2019 ; Esser et al. , 2019 ) are two promising research directions to deploy deep neural networks on mobile devices . Network quantization aims at reducing the number of bits for representing network parameters and features . On the other hand , Neural Architecture Search ( NAS ) ( Howard et al. , 2019 ; Cai et al. , 2019 ; Yu et al. , 2020 ) is proposed to automatically search for compact architectures , which avoids expert efforts and design trials . In this work , we explore the ability of NAS in finding quantized compact models and thus enjoy merits from two sides . Traditional combination of NAS and quantization methods could either be classified to NASthen-Quantize or Quantization-aware NAS as shown in Figure 1 . Conventional quantization methods merely compress the off-the-shelf networks , regardless of whether it is searched ( EfficientNet ( Tan & Le , 2019 ) ) or handcrafted ( MobileNetV2 ( Sandler et al. , 2018 ) ) . These methods correspond to NAS-then-Quantize approach as shown in Figure 1 ( a ) . However , it is not optimal because the accuracy rank among the searched floating-point models would change after they are quantized . Thus , this traditional routine may fail to get a good quantized model . Directly search with quantized models ’ performance seems to be a solution . Existing quantization-aware NAS methods ( Wang et al. , 2019 ; Shen et al. , 2019 ; Bulat et al. , 2020 ; Guo et al. , 2019 ; Wang et al. , 2020 ) utilize a two-stage search-retrain schema as shown in Figure 1 ( b ) . Specifically , they first search for one architecture under one bit-width setting1 , and then retrain the model under the given bit-width . This two-stage procedure undesirably increases the search and retrain cost if we have multiple deployment constraints and hardware bit-widths . Furthermore , due to the instability brought by quantization-aware training , simply combining quantization and NAS results in unreliable ranking ( Li et al. , 2019a ; Guo et al. , 2019 ) and sub-optimal 1One bit-width setting refers to a specific bit-width for each layer , where different layers could have different bit-widths . quantized models ( Bulat et al. , 2020 ) . 
Moreover , when the quantization bit-width is lower than 3 , the traditional training process is highly unstable and introduces very large accuracy degradation . To alleviate the aforementioned problems , we present Once Quantized for All ( OQA ) , a novel framework that : 1 ) searches for quantized network architectures and deploys their quantized weights immediately without retraining , 2 ) progressively produces a series of quantized models under ultra-low bits ( e.g . 4/3/2 bit ) . Our approach leverages the recent NAS approaches which do not require retraining ( Yu & Huang , 2019 ; Cai et al. , 2019 ; Yu et al. , 2020 ) . We adopt the search for kernel size , depth , width , and resolution in our search space . To provide a better initialization and transfer the knowledge of higher bit-width QNN to the lower bit-width QNN , we propose bit inheritance mechanism , which reduces the bit-width progressively to enable searching for QNN under different quantization bit-widths . Benefiting from the no retraining property and large search space under different bit-widths , we can evaluate the effect of network factors . Extensive experiments show the effectiveness of our approach . Our searched quantized model family , OQANets , achieves state-of-the-art ( SOTA ) results on the ImageNet dataset under 4/3/2 bitwidths . In particular , our OQA2bit-L far exceeds the accuracy of 2 bit Efficient-B0 @ QKD ( Kim et al. , 2019 ) by a large 14 % margin using 30 % less computation budget . Compared with the quantization-aware NAS method APQ ( Wang et al. , 2020 ) , our OQA4bit-L-MBV2 uses 43.7 % less computation cost while maintaining the same accuracy as APQ-B . To summarize , the contributions of our paper are three-fold : • Our OQA is the first quantization-aware NAS framework to search for the architecture of quantized compact models and deploy their quantized weights without retraining . • We present the bit inheritance mechanism to reduce the bit-width progressively so that the higher bit-width models can guide the search and training of lower bit-width models . • We provide some insights into quantization-friendly architecture design . Our systematical analysis reveals that shallow-fat models are more likely to be quantization-friendly than deep-slim models under low bit-widths . 2 RELATED WORK . Network Architecture Search without retraining Slimmable neural networks ( Yu et al. , 2018 ) first propose to train a model to support multiple width multipliers ( e.g . 4 different global width multipliers for MobileNetV2 ) . OFA ( Cai et al. , 2019 ) and BigNAS ( Yu et al. , 2020 ) push the envelope forward in network architecture search ( NAS ) by introducing diverse architecture space ( stage depth , channel width , kernel size , and input resolution ) . These methods propose to train a single over-parameterized supernet from which we can directly sample or slice different candidate architectures for instant inference and deployment . However , all of the aforementioned methods are tailored towards searching for floating-point compact models . Converting the best floating-point architecture to quantization tends to result in sub-optimum quantized models . In the quantization area , recent papers ( Jin et al. , 2019a ; Yu et al. , 2019 ) propose to train a single model that can support different bit-widths . But they only quantize manually design networks ( e.g . ResNet , MobileNetV2 ) under relatively high bit-widths ( e.g . 
4 bit for MobileNetV2 ) , our OQA can search for architectures and produce quantized compact models with lower bit-width ( e.g . 2 bit ) . Quantization-aware Network Architecture Search Recent studies combine network quantization and NAS to automatically search for layer bit-width with given architecture or search for operations with given bit-width . HAQ ( Wang et al. , 2019 ) focuses on searching for different numbers of bits for different layers in a given network structure and shows that some layers , which can be quantized to low bits , are more robust for quantization than others . AutoBNN ( Shen et al. , 2019 ) utilizes the genetic algorithm to search for network channels and BMobi ( Phan et al. , 2020 ) searches for the group number of different convolution layer under a certain 1 bit . SPOS ( Guo et al. , 2019 ) trains a quantized one-shot supernet to search for bit-width and network channels for heavy ResNet ( He et al. , 2016 ) . BATS ( Bulat et al. , 2020 ) devises a binary search space and incorporates it within the DARTS framework ( Liu et al. , 2018a ) . The aforementioned methods concentrate on the quantization of heavy networks , like ResNet ( He et al. , 2016 ) , or replace the depthwise convolution with group convolution . Moreover , they inherit a two-stage search-retrain schema : once the best-quantized architectures have been identified , they need to be retrained for deployment . This procedure significantly increases the computational cost for the search if we have different deployment constraints and hardware bit-widths . Compared with all these methods , our OQA can search for quantized compact models and learn their quantized weights at the same time without additional retraining . Without our bit inheritance mechanism , these approaches also suffer from a significant drop of accuracy when a network is quantized to ultra-low bit-widths like 2 . 3 METHOD . 3.1 OVERVIEW . Our OQA aims to obtain compact quantized models that can be directly sampled from quantized supernet without retraining . As shown in Figure 1 ( c ) , the overall procedure of OQA is as follows : Step 1 , Quantized Supernet Training ( Section 3.3 ) : Train a K-bit supernet by learning the weight parameters and quantization parameters jointly . Step 2 : given a constraint on computational complexity , search for the architecture with the highest quantization performance on the validation dataset . If K = N , the whole process is finished . Step 3 , Bit Inheritance ( Section 3.4 ) : Use the weight and quantization parameters of the K bit supernet to initialize the weight and quantization parameters of the K − 1 bit supernet . Step 4 : K ← K − 1 and Go to step 1 . The starting bit-width K and the ending bit-width N of the bit-inheritance procedure can be arbitrary . In this paper , we focus on quantized compact models under one fixed low bit-width quantization strategy , thus the starting bit-width and ending bit-width is 4 and 2 . 3.2 PRELIMINARIES . Neural Architecture Search without Retraining . Recently , several NAS methods ( Yu et al. , 2018 ; Cai et al. , 2019 ; Yu et al. , 2020 ) are proposed to directly deploy subnets from a well-trained supernet without retraining . Specifically , a supernet with the largest possible depth ( number of blocks ) , width ( number of channels ) , kernel size , and input resolution is trained . Then the subnet with top accuracy is selected as the searched network among the set of subnets satisfying a given computational complexity requirement . 
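The overall procedure in Steps 1-4 can be summarised as a single progressive loop. The sketch below is a hedged outline only: train_supernet, search_subnet and inherit_bit are placeholder callables standing in for the paper's components (Sections 3.3, Step 2, and 3.4), and the budgets are illustrative values, not the ones used in the experiments.

```python
def once_quantized_for_all(train_supernet, search_subnet, inherit_bit,
                           start_bit=4, end_bit=2, flops_budgets=(200, 300, 400)):
    """Hedged sketch of OQA Steps 1-4 (Section 3.1), with the three components
    injected by the caller."""
    supernet, deployed, k = None, {}, start_bit
    while True:
        # Step 1: train the K-bit supernet (weights + quantization scales jointly).
        supernet = train_supernet(bit_width=k, init_from=supernet)
        # Step 2: under each complexity budget, pick the subnet with the best
        # validation accuracy; its quantized weights deploy without retraining.
        deployed[k] = {b: search_subnet(supernet, flops_budget=b) for b in flops_budgets}
        if k == end_bit:
            return deployed
        # Step 3 (bit inheritance): initialise the (K-1)-bit supernet from the K-bit one.
        supernet = inherit_bit(supernet, new_bit_width=k - 1)
        # Step 4: K <- K - 1 and repeat.
        k -= 1
```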
A subnet is obtained from parts of the supernet with depth , width , and kernel size smaller than the supernet . The subnet uses the well-trained parameters of the supernet for direct deployment without further retraining . Quantization Neural Network Learning . To enable the training of quantized supernets , we choose a learnable quantization function following the recent state-of-the-art quantization method LSQ ( Esser et al. , 2019 ) . In the forward pass , the quantization function turns the floating-point weights and activation into integers under the given bit-width . Given the bit-width K , the activation is quantized into unsigned integers in the range of [ 0 , 2K − 1 ] and weights are quantized into signed integers in the range of [ −2K−1 , 2K−1 − 1 ] . Given floating-point weights or activation v , and learnable scale s , the quantization function Q and its corresponding approximate gradient using the straight-through estimator ( Bengio et al. , 2013 ) is defined as follows : Quantization function : vq = Q ( v , s ) = bclip ( v |s| , Qmin , Qmax ) e × |s| , Approximate gradient : ∂Q ( v ) ∂v ≈ I ( v |s| , Qmin , Qmax ) , ( 1 ) where all operations for v are element-wise operations , clip ( z , r1 , r2 ) returns z with values below r1 set to r1 and values above r2 set to r2 , bze rounds z to the nearest integer , Qmin and Qmax are , respectively minimum and maximum integers for the given bit-width K , I ( v|s| , Qmin , Qmax ) means the gradient of v in the range of ( Qmin × |s| , Qmax × |s| ) is approximated by 1 , otherwise 0 . |s| returns the absolute value of s , which ensures that the semantics of scale s is only interval , without inverting the signs of weights or activation . The scale s is learned by back-propagation and initialized as 2√ Qmax × ¯|v| , where ¯|v| denotes the mean of |v| .
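The quantization function of Equation (1) can be written compactly with the straight-through estimator realised via the detach trick. This is a minimal sketch that follows the formulas given in the text (including the scale initialisation); it reproduces the stated gradient w.r.t. v but omits LSQ's exact gradient and gradient scaling for the scale parameter s, so it should not be read as the official LSQ implementation.

```python
import torch

def lsq_quantize(v, s, bit_width, signed):
    """Q(v, s) from Equation (1): clip v/|s| to [Qmin, Qmax], round, rescale by |s|.
    The forward pass uses round(); the backward pass sees the identity inside the
    clipping range (straight-through estimator)."""
    if signed:   # weights: signed integers in [-2^(K-1), 2^(K-1) - 1]
        qmin, qmax = -2 ** (bit_width - 1), 2 ** (bit_width - 1) - 1
    else:        # activations: unsigned integers in [0, 2^K - 1]
        qmin, qmax = 0, 2 ** bit_width - 1
    scale = s.abs()                              # |s|: the interval, sign-free
    v_scaled = torch.clamp(v / scale, qmin, qmax)
    v_int = torch.round(v_scaled)
    v_q = (v_int - v_scaled).detach() + v_scaled  # straight-through rounding
    return v_q * scale

def init_scale(v, bit_width, signed):
    """Initialisation suggested in the text: s = 2 / sqrt(Qmax) * mean(|v|)."""
    qmax = 2 ** (bit_width - 1) - 1 if signed else 2 ** bit_width - 1
    return (2.0 / qmax ** 0.5) * v.abs().mean()
```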
This paper presents a new method to search for quantized neural networks. This method is different from others that it results in quantized weights which can be deployed without post-process such as fine-tuning. Proposed method first trains a 4-bit quantized supernet, and search for the best performance sub-net using the validation dataset. Then, the method initialize the 3-bit supernet using the 4-bit supernet, and trains 3-bit supernet using the knowledge distilation method. Proposed method iterates the initialization, training, and search process until the goal bit resolution is achieved.
SP:51a5349be44696d07c4bb9c6f94f2447022ceca3
Once Quantized for All: Progressively Searching for Quantized Compact Models
Automatic search of Quantized Neural Networks ( QNN ) has attracted a lot of attention . However , the existing quantization-aware Neural Architecture Search ( NAS ) approaches inherit a two-stage search-retrain schema , which is not only time-consuming but also adversely affected by the unreliable ranking of architectures during the search . To avoid the undesirable effect of the search-retrain schema , we present Once Quantized for All ( OQA ) , a novel framework that searches for quantized compact models and deploys their quantized weights at the same time without additional post-process . While supporting a huge architecture search space , our OQA can produce a series of quantized compact models under ultra-low bit-widths ( e.g . 4/3/2 bit ) . A progressive bit inheritance procedure is introduced to support ultra-low bit-width . Our searched model family , OQANets , achieves a new state-of-the-art ( SOTA ) on quantized compact models compared with various quantization methods and bit-widths . In particular , OQA2bit-L achieves 64.0 % ImageNet Top-1 accuracy , outperforming its 2 bit counterpart EfficientNet-B0 @ QKD by a large margin of 14 % using 30 % less computation cost . 1 INTRODUCTION . Compact architecture design ( Sandler et al. , 2018 ; Ma et al. , 2018 ) and network quantization methods ( Choi et al. , 2018 ; Kim et al. , 2019 ; Esser et al. , 2019 ) are two promising research directions to deploy deep neural networks on mobile devices . Network quantization aims at reducing the number of bits for representing network parameters and features . On the other hand , Neural Architecture Search ( NAS ) ( Howard et al. , 2019 ; Cai et al. , 2019 ; Yu et al. , 2020 ) is proposed to automatically search for compact architectures , which avoids expert efforts and design trials . In this work , we explore the ability of NAS in finding quantized compact models and thus enjoy merits from two sides . Traditional combination of NAS and quantization methods could either be classified to NASthen-Quantize or Quantization-aware NAS as shown in Figure 1 . Conventional quantization methods merely compress the off-the-shelf networks , regardless of whether it is searched ( EfficientNet ( Tan & Le , 2019 ) ) or handcrafted ( MobileNetV2 ( Sandler et al. , 2018 ) ) . These methods correspond to NAS-then-Quantize approach as shown in Figure 1 ( a ) . However , it is not optimal because the accuracy rank among the searched floating-point models would change after they are quantized . Thus , this traditional routine may fail to get a good quantized model . Directly search with quantized models ’ performance seems to be a solution . Existing quantization-aware NAS methods ( Wang et al. , 2019 ; Shen et al. , 2019 ; Bulat et al. , 2020 ; Guo et al. , 2019 ; Wang et al. , 2020 ) utilize a two-stage search-retrain schema as shown in Figure 1 ( b ) . Specifically , they first search for one architecture under one bit-width setting1 , and then retrain the model under the given bit-width . This two-stage procedure undesirably increases the search and retrain cost if we have multiple deployment constraints and hardware bit-widths . Furthermore , due to the instability brought by quantization-aware training , simply combining quantization and NAS results in unreliable ranking ( Li et al. , 2019a ; Guo et al. , 2019 ) and sub-optimal 1One bit-width setting refers to a specific bit-width for each layer , where different layers could have different bit-widths . quantized models ( Bulat et al. , 2020 ) . 
Moreover , when the quantization bit-width is lower than 3 , the traditional training process is highly unstable and introduces very large accuracy degradation . To alleviate the aforementioned problems , we present Once Quantized for All ( OQA ) , a novel framework that : 1 ) searches for quantized network architectures and deploys their quantized weights immediately without retraining , 2 ) progressively produces a series of quantized models under ultra-low bits ( e.g . 4/3/2 bit ) . Our approach leverages the recent NAS approaches which do not require retraining ( Yu & Huang , 2019 ; Cai et al. , 2019 ; Yu et al. , 2020 ) . We adopt the search for kernel size , depth , width , and resolution in our search space . To provide a better initialization and transfer the knowledge of higher bit-width QNN to the lower bit-width QNN , we propose bit inheritance mechanism , which reduces the bit-width progressively to enable searching for QNN under different quantization bit-widths . Benefiting from the no retraining property and large search space under different bit-widths , we can evaluate the effect of network factors . Extensive experiments show the effectiveness of our approach . Our searched quantized model family , OQANets , achieves state-of-the-art ( SOTA ) results on the ImageNet dataset under 4/3/2 bitwidths . In particular , our OQA2bit-L far exceeds the accuracy of 2 bit Efficient-B0 @ QKD ( Kim et al. , 2019 ) by a large 14 % margin using 30 % less computation budget . Compared with the quantization-aware NAS method APQ ( Wang et al. , 2020 ) , our OQA4bit-L-MBV2 uses 43.7 % less computation cost while maintaining the same accuracy as APQ-B . To summarize , the contributions of our paper are three-fold : • Our OQA is the first quantization-aware NAS framework to search for the architecture of quantized compact models and deploy their quantized weights without retraining . • We present the bit inheritance mechanism to reduce the bit-width progressively so that the higher bit-width models can guide the search and training of lower bit-width models . • We provide some insights into quantization-friendly architecture design . Our systematical analysis reveals that shallow-fat models are more likely to be quantization-friendly than deep-slim models under low bit-widths . 2 RELATED WORK . Network Architecture Search without retraining Slimmable neural networks ( Yu et al. , 2018 ) first propose to train a model to support multiple width multipliers ( e.g . 4 different global width multipliers for MobileNetV2 ) . OFA ( Cai et al. , 2019 ) and BigNAS ( Yu et al. , 2020 ) push the envelope forward in network architecture search ( NAS ) by introducing diverse architecture space ( stage depth , channel width , kernel size , and input resolution ) . These methods propose to train a single over-parameterized supernet from which we can directly sample or slice different candidate architectures for instant inference and deployment . However , all of the aforementioned methods are tailored towards searching for floating-point compact models . Converting the best floating-point architecture to quantization tends to result in sub-optimum quantized models . In the quantization area , recent papers ( Jin et al. , 2019a ; Yu et al. , 2019 ) propose to train a single model that can support different bit-widths . But they only quantize manually design networks ( e.g . ResNet , MobileNetV2 ) under relatively high bit-widths ( e.g . 
4 bit for MobileNetV2 ) , our OQA can search for architectures and produce quantized compact models with lower bit-width ( e.g . 2 bit ) . Quantization-aware Network Architecture Search Recent studies combine network quantization and NAS to automatically search for layer bit-width with given architecture or search for operations with given bit-width . HAQ ( Wang et al. , 2019 ) focuses on searching for different numbers of bits for different layers in a given network structure and shows that some layers , which can be quantized to low bits , are more robust for quantization than others . AutoBNN ( Shen et al. , 2019 ) utilizes the genetic algorithm to search for network channels and BMobi ( Phan et al. , 2020 ) searches for the group number of different convolution layer under a certain 1 bit . SPOS ( Guo et al. , 2019 ) trains a quantized one-shot supernet to search for bit-width and network channels for heavy ResNet ( He et al. , 2016 ) . BATS ( Bulat et al. , 2020 ) devises a binary search space and incorporates it within the DARTS framework ( Liu et al. , 2018a ) . The aforementioned methods concentrate on the quantization of heavy networks , like ResNet ( He et al. , 2016 ) , or replace the depthwise convolution with group convolution . Moreover , they inherit a two-stage search-retrain schema : once the best-quantized architectures have been identified , they need to be retrained for deployment . This procedure significantly increases the computational cost for the search if we have different deployment constraints and hardware bit-widths . Compared with all these methods , our OQA can search for quantized compact models and learn their quantized weights at the same time without additional retraining . Without our bit inheritance mechanism , these approaches also suffer from a significant drop of accuracy when a network is quantized to ultra-low bit-widths like 2 . 3 METHOD . 3.1 OVERVIEW . Our OQA aims to obtain compact quantized models that can be directly sampled from quantized supernet without retraining . As shown in Figure 1 ( c ) , the overall procedure of OQA is as follows : Step 1 , Quantized Supernet Training ( Section 3.3 ) : Train a K-bit supernet by learning the weight parameters and quantization parameters jointly . Step 2 : given a constraint on computational complexity , search for the architecture with the highest quantization performance on the validation dataset . If K = N , the whole process is finished . Step 3 , Bit Inheritance ( Section 3.4 ) : Use the weight and quantization parameters of the K bit supernet to initialize the weight and quantization parameters of the K − 1 bit supernet . Step 4 : K ← K − 1 and Go to step 1 . The starting bit-width K and the ending bit-width N of the bit-inheritance procedure can be arbitrary . In this paper , we focus on quantized compact models under one fixed low bit-width quantization strategy , thus the starting bit-width and ending bit-width is 4 and 2 . 3.2 PRELIMINARIES . Neural Architecture Search without Retraining . Recently , several NAS methods ( Yu et al. , 2018 ; Cai et al. , 2019 ; Yu et al. , 2020 ) are proposed to directly deploy subnets from a well-trained supernet without retraining . Specifically , a supernet with the largest possible depth ( number of blocks ) , width ( number of channels ) , kernel size , and input resolution is trained . Then the subnet with top accuracy is selected as the searched network among the set of subnets satisfying a given computational complexity requirement . 
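Step 2 of the procedure selects, for a given complexity budget, the best subnet directly from the trained quantized supernet. The text does not commit to a particular search strategy at this point, so the following is a hedged sketch using simple rejection-sampling random search; supernet, search_space, flops_of and evaluate are placeholder objects supplied by the caller.

```python
import random

def search_best_subnet(supernet, search_space, flops_of, evaluate,
                       flops_budget, num_candidates=500, seed=0):
    """Random search for the subnet with the highest validation accuracy under a
    computational-complexity constraint; the subnet is sliced from the supernet
    and evaluated with its inherited quantized weights, without retraining."""
    rng = random.Random(seed)
    best_cfg, best_acc = None, float("-inf")
    for _ in range(num_candidates):
        # sample one architecture: kernel size, depth, width, input resolution
        cfg = {name: rng.choice(choices) for name, choices in search_space.items()}
        if flops_of(cfg) > flops_budget:
            continue                      # reject candidates over the budget
        acc = evaluate(supernet, cfg)     # slice the supernet, no retraining
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc
```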
A subnet is obtained from parts of the supernet with depth , width , and kernel size smaller than the supernet . The subnet uses the well-trained parameters of the supernet for direct deployment without further retraining . Quantization Neural Network Learning . To enable the training of quantized supernets , we choose a learnable quantization function following the recent state-of-the-art quantization method LSQ ( Esser et al. , 2019 ) . In the forward pass , the quantization function turns the floating-point weights and activation into integers under the given bit-width . Given the bit-width K , the activation is quantized into unsigned integers in the range of [ 0 , 2K − 1 ] and weights are quantized into signed integers in the range of [ −2K−1 , 2K−1 − 1 ] . Given floating-point weights or activation v , and learnable scale s , the quantization function Q and its corresponding approximate gradient using the straight-through estimator ( Bengio et al. , 2013 ) is defined as follows : Quantization function : vq = Q ( v , s ) = bclip ( v |s| , Qmin , Qmax ) e × |s| , Approximate gradient : ∂Q ( v ) ∂v ≈ I ( v |s| , Qmin , Qmax ) , ( 1 ) where all operations for v are element-wise operations , clip ( z , r1 , r2 ) returns z with values below r1 set to r1 and values above r2 set to r2 , bze rounds z to the nearest integer , Qmin and Qmax are , respectively minimum and maximum integers for the given bit-width K , I ( v|s| , Qmin , Qmax ) means the gradient of v in the range of ( Qmin × |s| , Qmax × |s| ) is approximated by 1 , otherwise 0 . |s| returns the absolute value of s , which ensures that the semantics of scale s is only interval , without inverting the signs of weights or activation . The scale s is learned by back-propagation and initialized as 2√ Qmax × ¯|v| , where ¯|v| denotes the mean of |v| .
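The integer ranges and the scale initialisation stated above can be made concrete with a small numeric example. This is only an illustration of the quantities defined in Equation (1); the helper names are introduced here for convenience.

```python
import torch

def quant_range(bit_width, signed):
    """Ranges from Equation (1): unsigned [0, 2^K - 1] for activations,
    signed [-2^(K-1), 2^(K-1) - 1] for weights."""
    if signed:
        return -2 ** (bit_width - 1), 2 ** (bit_width - 1) - 1
    return 0, 2 ** bit_width - 1

for k in (4, 3, 2):
    print(k, "weights:", quant_range(k, signed=True),
          "activations:", quant_range(k, signed=False))
# At 2 bit the weights can only take four integer levels {-2, -1, 0, 1}, which is
# why directly training at ultra-low bit-widths is unstable and why the
# progressive 4 -> 3 -> 2 bit inheritance matters.

# Scale initialisation from the text: s = 2 / sqrt(Qmax) * mean(|v|)
v = torch.randn(1000)
qmax = quant_range(4, signed=True)[1]
s_init = (2.0 / qmax ** 0.5) * v.abs().mean()
print(float(s_init))
```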
This paper proposed a method to train quantized supernets which can be directly deployed without retraining. The motivation is to have a supernet with a given quantization bit-width which only train once and can be deployed with different architectures (under different FLOPs budget). This paper made a bunch of experiments showing that the proposed once quantized for all method can find DNN architectures which have SOTA performance with low bit-width. The paper also shows that when training lower-bits supernet, it is helpful to use the weights from the trained higher-bits supernet.
SP:51a5349be44696d07c4bb9c6f94f2447022ceca3
Coping with Label Shift via Distributionally Robust Optimisation
1 INTRODUCTION . Classical supervised learning involves learning a model from a training distribution that generalises well on test samples drawn from the same distribution . While the assumption of identical train and test distributions has given rise to useful methods , it is often violated in many practical settings ( Kouw & Loog , 2018 ) . The label shift problem is one such important setting , wherein the training distribution over the labels does not reflect what is observed during testing ( Saerens et al. , 2002 ) . For example , consider the problem of object detection in self-driving cars : a model trained in one city may see a vastly different distribution of pedestrians and cars when deployed in a different city . Such shifts in label distribution can significantly degrade model performance . As a concrete example , consider the performance of a ResNet-50 model on ImageNet . While the overall error rate is ∼ 24 % , Figure 1 reveals that certain classes suffer an error as high as ∼ 80 % . Consequently , a label shift that increases the prevalence of the more erroneous classes in the test set can significantly degrade performance . Most existing work on label shift operates in the setting where one has an unlabelled test sample that can be used to estimate the shifted label probabilities ( du Plessis & Sugiyama , 2014 ; Lipton et al. , 2018 ; Azizzadenesheli et al. , 2019 ) . Subsequently , one can retrain a classifier using these probabilities in place of the training label probabilities . While such techniques have proven effective , it is not always feasible to access an unlabelled set . Further , one may wish to deploy a learned model in multiple test environments , each one of which has its own label distribution . For example , the label distribution for a vehicle detection camera may change continuously while driving across the city . Instead of simply deploying a separate model for each scenario , deploying a single model that is robust to shifts may be more efficient and practical . Hence , we address the following question in this work : can we learn a single classifier that is robust to a family of arbitrary shifts ? We answer the above question by modeling label shift via distributionally robust optimisation ( DRO ) ( Shapiro et al. , 2014 ; Rahimian & Mehrotra , 2019 ) . DRO offers a convenient way of coping with distribution shift , and have lead to successful applications ( e.g . Faury et al . ( 2020 ) ) . Intuitively , by seeking a model that performs well on all label distributions that are “ close ” to the training data label distribution , this task can be cast as a game between the learner and an adversary , with the latter allowed to pick label distributions that maximise the learner ’ s loss . We remark that while adversarial perspectives have informed popular paradigms such as GANs , these pursue fundamentally different objectives from DRO ( see Appendix A for details ) . Although several previous works have explored DRO for tackling the problem of example shift ( e.g. 
, adversarial examples) (Namkoong & Duchi, 2016; 2017; Duchi & Namkoong, 2018), an application of DRO to the label shift setting poses several challenges: (a) updating the adversary's distribution naïvely requires solving a nontrivial convex optimisation subproblem with limited tractability, and also needs careful parameter tuning; and (b) naïvely estimating gradients under the adversarial distribution on a randomly sampled minibatch can lead to unstable behaviour (see §3.1). We overcome these challenges by proposing the first algorithm that successfully optimises a DRO objective for label shift on a large scale dataset (i.e., ImageNet). Our objective encourages robustness to arbitrary label distribution shifts within a KL-divergence ball of the empirical label distribution. Importantly, we show that this choice of robustness set admits an efficient and stable update step. Summary of contributions (1) We design a gradient descent-proximal mirror ascent algorithm tailored for optimising large-scale problems with minimal computational overhead, and prove its theoretical convergence. (2) With the proposed algorithm, we implement a practical procedure to successfully optimise the robust objective on ImageNet scale for the label shift application. (3) We show through experiments on ImageNet and CIFAR-100 that our technique significantly improves over baselines when the label distribution is adversarially varied. 2 BACKGROUND AND PROBLEM FORMULATION . In this section we formalise the label shift problem and motivate its formulation as an adversarial optimisation problem. Consider a multiclass classification problem with distribution $p_{\text{tr}}$ over instances $\mathcal{X}$ and labels $\mathcal{Y} = [L]$. The goal is to learn a classifier $h_\theta: \mathcal{X} \to \mathcal{Y}$ parameterised by $\theta \in \Theta$, with the aim of ensuring good predictive performance on future samples drawn from $p_{\text{tr}}$. More formally, the goal is to minimise the objective $\min_\theta \mathbb{E}_{(x, y) \sim p_{\text{tr}}} [\ell(x, y, \theta)]$, where $\ell: \mathcal{X} \times \mathcal{Y} \times \Theta \to \mathbb{R}_+$ is a loss function. In practice, we only have access to a finite sample $S = \{(x_i, y_i)\}_{i=1}^{n} \sim p_{\text{tr}}^{n}$, which motivates us to use the empirical distribution $p_{\text{emp}}(x, y) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}(x = x_i, y = y_i)$ in place of $p_{\text{tr}}$. Doing so, we arrive at the objective of minimising the empirical risk:
$$\min_\theta \; \mathbb{E}_{p_{\text{emp}}} [\ell(x, y, \theta)] := \frac{1}{n} \sum_{i=1}^{n} \ell(x_i, y_i, \theta). \quad (1)$$
The assumption underlying the above formulation is that test samples are drawn from the same distribution $p_{\text{tr}}$ that is used during training. However, this assumption is violated in many practical settings. The problem of learning from a training distribution $p_{\text{tr}}$, while attempting to perform well on a test distribution $p_{\text{te}} \neq p_{\text{tr}}$, is referred to as domain adaptation (Ben-David et al., 2007).
Table 1.
Label distribution | Reference
Train distribution | Standard ERM
Specified a-priori (e.g., balanced) | Elkan (2001); Xie & Manski (1989); Cao et al. (2019)
Estimated test distribution | du Plessis & Sugiyama (2014); Lipton et al. (2018); Azizzadenesheli et al. (2019); Garg et al. (2020); Combes et al. (2020)
Worst-performing class | Hashimoto et al. (2018); Mohri et al. (2019); Sagawa et al. (2020)
Worst k-performing classes | Fan et al. (2017); Williamson & Menon (2019); Curi et al. (2019); Duchi et al. (2020)
In the special case of label shift, one posits that $p_{\text{te}}(x \mid y) = p_{\text{tr}}(x \mid y)$, but the label distribution $p_{\text{te}}(y) \neq p_{\text{tr}}(y)$ (Saerens et al., 2002); i.e.
, the test distribution satisfies p_te(x, y) = p_te(y) p_tr(x | y). The label shift problem admits the following three distinct settings (see Table 1 for a summary): (1) Fixed label shift. Here, one assumes a-priori knowledge of p_te(y). One may then adjust the outputs of a probabilistic classifier post-hoc to improve test performance (Elkan, 2001). Even when the precise distribution is unknown, it is common to posit a uniform p_te(y). Minimising the resulting balanced error has been the subject of a large body of work (He & Garcia, 2009), with recent developments including Cui et al. (2019); Cao et al. (2019); Kang et al. (2020); Guo et al. (2020). (2) Estimated label shift. Here, we assume that p_te(y) is unknown, but that we have access to an unlabelled test sample. This sample may be used to estimate p_te(y), e.g., via kernel mean matching (Zhang et al., 2013), minimisation of a suitable KL divergence (du Plessis & Sugiyama, 2014), or using black-box classifier outputs (Lipton et al., 2018; Azizzadenesheli et al., 2019; Garg et al., 2020). One may then use these estimates to minimise a suitably re-weighted empirical risk. (3) Adversarial label shift. Here, we assume that p_te(y) is unknown, and guard against a suitably defined worst-case choice. Observe that an extreme case of label shift involves placing all probability mass on a single y* ∈ Y. This choice can be problematic, as (1) may be rewritten as
\min_\theta \sum_{y \in [L]} p_{\mathrm{emp}}(y) \cdot \Big\{ \frac{1}{n_y}\sum_{i : y_i = y} \ell(x_i, y_i, \theta) \Big\},
where n_y is the number of training samples with label y. The empirical risk is thus a weighted average of the per-class losses. Observe that if some y* ∈ Y has a large per-class loss, then an adversary could degrade performance by choosing a p_te with p_te(y*) being large. One means of guarding against such adversarial label shifts is to minimise the minimax risk (Alaiz-Rodríguez et al., 2007; Davenport et al., 2010; Hashimoto et al., 2018; Mohri et al., 2019; Sagawa et al., 2020)
\min_\theta \max_{\pi \in \Delta_L} \sum_{y \in [L]} \pi(y) \cdot \Big\{ \frac{1}{n_y}\sum_{i : y_i = y} \ell(x_i, y_i, \theta) \Big\}, \quad (2)
where Δ_L denotes the simplex. In (2), we combine the per-label risks according to the worst-case label distribution. In practice, focusing on the worst-case label distribution may be overly pessimistic. One may temper this by instead constraining the label distribution. A popular choice is to enforce that ‖π‖_∞ ≤ 1/k for suitable k, which corresponds to minimising the average of the top-k largest per-class losses for integer k (Williamson & Menon, 2019; Curi et al., 2019; Duchi et al., 2020). We focus on the adversarial label shift setting, as it meets the desiderata of training a single model that is robust to multiple label distributions, and not requiring access to test samples. Adversarial robustness has been widely studied (see Appendix A for more related work), but its application to label shift is much less explored. Amongst techniques in this area, Mohri et al. (2019); Sagawa et al. (2020) are most closely related to our work. These works optimise the worst-case loss over subgroups induced by the labels. However, both works consider settings with a relatively small (≤ 10) number of subgroups; the resultant algorithms face many challenges when trained with many labels (see Section 4). We now detail how a suitably constrained DRO formulation, coupled with optimisation choices, can overcome this limitation.
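The sketch below (an illustrative contrast of the adversary choices just described, not code from the paper) computes the per-class losses and compares the unconstrained worst-case risk, the top-k relaxation, and a KL-tempered adversary. The exponential tilting form follows from the Lagrangian of the KL-constrained inner maximisation; the temperature stands in for the Lagrange multiplier, and its relation to a given radius r is left as an assumed, tunable quantity:

    import numpy as np

    def per_class_losses(losses, labels, num_classes):
        # L(y) = (1 / n_y) * sum of losses over examples with label y
        out = np.zeros(num_classes)
        for y in range(num_classes):
            mask = labels == y
            out[y] = losses[mask].mean() if mask.any() else 0.0
        return out

    def worst_case_risk(class_losses):
        # unconstrained adversary in (2): all mass on the single worst class
        return float(class_losses.max())

    def top_k_risk(class_losses, k):
        # ||pi||_inf <= 1/k: average of the k largest per-class losses
        return float(np.sort(class_losses)[-k:].mean())

    def kl_tilted_adversary(class_losses, p_emp, temperature):
        # Lagrangian form of the KL-ball adversary: pi(y) proportional to
        # p_emp(y) * exp(L(y) / temperature); a smaller temperature corresponds
        # to a larger effective KL radius (more aggressive shift).
        logits = np.log(p_emp) + class_losses / temperature
        pi = np.exp(logits - logits.max())
        return pi / pi.sum()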
Algorithm 1 ADVSHIFT(θ_0, γ_c, λ, NNOpt, p_emp, η_π)
1: Initialise the adversary distribution as π_1 = (1/L, ..., 1/L).
2: for t = 1, ..., T do
3:   Sample a mini-batch of b examples {(x_i, y_i)}_{i=1}^b.
4:   Evaluate the stochastic gradient g_θ = (1/b) Σ_{i=1}^b [π_t(y_i) / p_emp(y_i)] · ∇_θ ℓ(x_i, y_i, θ_t).
5:   Update the neural network parameters θ_{t+1} = NNOpt(g_θ).
6:   Update the Lagrangian variable: α = 0 if r > KL(π_t, p_emp); α = 2 γ_c λ if r < KL(π_t, p_emp).
7:   Evaluate the adversarial gradient g_π(i) = (1/b) Σ_{j=1}^b [1{y_j = i} / p_emp(i)] · ℓ(x_j, y_j, θ_{t+1}).
8:   Update the adversarial distribution π_{t+1} = (π_t · p_emp^α)^{1/(1+α)} · exp(η_π g_π) / C.
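A rough PyTorch-style sketch of one such iteration is given below. It is a simplification, not the authors' implementation: the θ-update uses the importance-weighted mini-batch gradient of step 4 for a classification loss, the π-update applies the multiplicative mirror-ascent rule of step 8 with an explicit renormalisation playing the role of C, and the choice of the Lagrangian variable α (step 6) is assumed to be made outside this function:

    import torch
    import torch.nn.functional as F

    def advshift_step(model, optimizer, x, y, pi, p_emp, eta_pi, alpha):
        # pi, p_emp: tensors of shape [L] holding the adversarial and empirical
        # label distributions; x, y: one mini-batch of inputs and integer labels.
        logits = model(x)
        per_example = F.cross_entropy(logits, y, reduction="none")

        # step 4: importance-weighted stochastic gradient, weight = pi(y)/p_emp(y)
        weights = (pi[y] / p_emp[y]).detach()
        loss = (weights * per_example).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()  # step 5: NNOpt update

        # step 7: mini-batch estimate of the adversary's gradient (per-class losses)
        g_pi = torch.zeros_like(pi)
        with torch.no_grad():
            for c in torch.unique(y):
                g_pi[c] = per_example[y == c].sum() / (len(y) * p_emp[c])

        # step 8: proximal mirror-ascent update, pulled toward p_emp by alpha,
        # then renormalised onto the simplex
        new_pi = (pi * p_emp.clamp_min(1e-12) ** alpha) ** (1.0 / (1.0 + alpha))
        new_pi = new_pi * torch.exp(eta_pi * g_pi)
        return new_pi / new_pi.sum()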
This paper attacks the issue of mismatch in the distribution of labels between train and test samples. The authors propose a DRO-based approach which amounts to solving a modified ERM problem. Compared to classical approaches, the proposed method does not entail fitting many different models: just a single model. The method builds on recent progress on solving nonconvex-concave games for approximate stationary points. The resulting algorithm is a mirror-descent scheme with explicit convergence rates. Setting: train and test label distributions do not match.
Coping with Label Shift via Distributionally Robust Optimisation
This paper tackles label shift in supervised learning via distributionally robust optimization. The main idea is to train by solving a min-max problem, where the max problem searches for the worst-case label shift in a Kullback-Leibler divergence ambiguity set. The KL ambiguity set generates a form of adversarial reweighting of the training points, which gives hope that the learned parameters will perform better on the (shifted) test data. The paper proposes to solve the Lagrangian version instead of the constrained version of the DRO problem, and proposes a gradient descent-ascent type of algorithm.
Beyond COVID-19 Diagnosis: Prognosis with Hierarchical Graph Representation Learning
1 INTRODUCTION. Coronavirus disease 2019 (COVID-19) has resulted in an ongoing worldwide pandemic. To control the sources of infection and cut off the channels of transmission, rapid testing and detection are of vital importance. The reverse transcription polymerase chain reaction (RT-PCR) is a widely used screening technology and is viewed as the standard method for suspected cases. However, this method relies heavily upon lab facilities and diagnostic kits. In addition, the sensitivity of RT-PCR is not high enough for early diagnosis (Ai et al., 2020; Fang et al., 2020). To mitigate the limitations of RT-PCR, computed tomography (CT) has been widely used as an effective complementary method: it provides medical images of the lung area that reveal details of the disease and its prognosis (Huang et al., 2020; Chung et al., 2020), which RT-PCR cannot. Additionally, CT has also proven useful in monitoring COVID-19 disease progression and evaluating therapeutic efficacy (Rodriguez-Morales et al., 2020; Liechti et al., 2020). The chest CT slices of a patient have a sequential and hierarchical data structure. The relationship between slices carries more information than the order of the slices alone. Adjacent slices with the same abnormality can be considered as one lesion, while slices containing the same type of lesion may not be contiguous, as lesions are distributed across various parts of the lung. We propose a diagnosis and prognosis system that combines graph convolutional networks (GCNs) and a distance aware pooling method, which integrates the information from all slices in the chest CT scan for optimal decision making. Our major contributions are three-fold: (1) Owing to the sequential structure of CT images, this is the first work to utilize GCNs to extract node information hierarchically and conduct both diagnosis and prognosis for COVID-19. The prognosis can help allocate medical resources, e.g., ventilators or admission to Intensive Care Units (ICUs), more efficiently by triaging mild or severe patients. (2) A novel pooling method, called distance aware pooling, is proposed to aggregate the graph, i.e., the patient's CT scan, effectively. The new pooling method integrated with GCNs can aggregate a densely connected graph efficiently. (3) The new model can localize the most informative slices within a chest CT scan, which significantly reduces the amount of work for radiologists.

2 RELATED WORK. AI-assisted and CT-based COVID-19 Diagnosis and Prognosis. Although RT-PCR is the standard way to diagnose COVID-19, there are many limitations to using RT-PCR alone, e.g., the time delay in receiving an RT-PCR test, the occurrence of false negatives, and the absence of prognostic information. CT images are often recommended as an alternative for precise lesion detection (Alizadehsani et al., 2020). However, as each CT scan includes a large number of (up to several hundred) image slices, it requires much time and labor from the radiologists (Shoeibi et al., 2020). Furthermore, since the radiological appearances of COVID-19 are similar to other types of pneumonia, radiologists need to go through extensive training before they can achieve high diagnostic accuracy (Shi et al., 2020). Recently, several AI-assisted and CT-based COVID-19 diagnostic systems have been developed. Chen et al. (2020) use Unet++ (Zhou et al.
, 2018) to segment infectious areas in the lung. Butt et al. (2020) develop a deep learning model to detect lesions from CT images, and then use a 3D ResNet to classify the images into COVID-19, influenza-A viral pneumonia, or healthy groups. Song et al. (2020) use the whole lung for diagnosis instead of only extracting the lesions. Wang et al. (2020) propose an AI system that can diagnose COVID-19 patients as well as conduct prognostic analysis. Graph Neural Networks. Various types of graph neural networks (GNNs) have been proposed, which can be divided into spectral or non-spectral domains. In the spectral domain, the Fourier transformation and graph Laplacian define the convolutional filters (Bruna et al., 2013). By utilizing Chebyshev polynomials, Kipf & Welling (2016) simplify the filters of graph convolution, rendering a layer-wise propagation method. However, the generalization of spectral methods may not be ideal due to the variety of graphs (Bronstein et al., 2017). Non-spectral methods focus on the local topology of nodes, working directly on graphs instead of in the Fourier domain. Methods proposed by Hamilton et al. (2017), Monti et al. (2017) and Veličković et al. (2017) aggregate nodes based on adjacent nodes when the next layer is created. This aggregation process, as mentioned by Gilmer et al. (2017), can be regarded as a message-passing process. Pooling Methods. Pooling methods allow GNNs to hierarchically aggregate nodes, obtaining and assembling local information of graphs. The major purpose of a hierarchical pooling method is to use a locally based model to aggregate nodes in each layer, so that a higher level graph representation can be created (Lee et al., 2019). The self-attention based pooling method of Mao et al. (2018) is implemented for video classification, locally obtaining weighted and fused feature sequences. Spectral pooling methods, such as the one proposed by Ma et al. (2019), focus on the application of eigen-decomposition to capture the graph information. However, since spectral pooling methods are computationally demanding, they may not scale to large graphs. Non-spectral pooling methods are scalable to large graphs and pay more attention to the local structures of graphs. Adaptive Structure Aware Pooling (ASAP) (Ranjan et al., 2020) and DiffPool (Ying et al., 2018) resort to clustering techniques, aggregating nodes into different clusters, and then choosing top clusters based on cluster ranking scores (Gao & Ji, 2019). However, the ASAP method partially ignores the edge weight information when using hops to aggregate nodes, resulting in unstable convergence.

3 METHODOLOGY. We propose a GCN-based diagnosis and prognosis method that models the sequential slices of CT scans hierarchically. To downsample and learn a graph-level representation from the input node features, a novel distance aware pooling method is proposed. In this paper, the node features refer to the slices in a CT scan. The model gradually extracts information from the slice level to the patient level by graph convolution and pooling. Eventually, a higher-level representation is learned, and further used for diagnosis, prognosis, and lesion localization. The schema of our model is illustrated in Figure 1; it is composed of GCNs, pooling modules, a multilayer perceptron (MLP) classifier, and a one-drop localization module.
The graph convolution-based method can integrate all slices in the chest CT scan for optimal decision making. Furthermore, we propose the one-drop localization to localize the most informative slices, so that radiologists may focus on those recommended slices with the most suspected lesion areas. Consequently, the proposed model can produce visual explanations for the diagnosis and prognosis, making the decision more transparent and explainable. We argue that this method could effectively assist radiologists by reducing redundancies in the vast amount of CT slices during diagnosis and prognosis in clinical settings.

3.1 PROBLEM STATEMENT. Let G(V, E) be a patient's CT scan graph, with |V| = N nodes and |E| edges, where |·| represents the cardinality of a set. For each v_i ∈ V, x_i is the corresponding d-dimensional feature vector. Let X ∈ R^{N×d} be the node feature matrix, and A^adj ∈ R^{N×N} be the weighted adjacency matrix. Each entry in A^adj is defined based on cosine similarity, that is, A^adj_{i,j} = ⟨x_i, x_j⟩ / (‖x_i‖ · ‖x_j‖). Define the distance matrix A^dis as A^dis = 1 − A^adj, where 1 is the matrix with all entries equal to one. The definition of A^dis means that the more similar two nodes are, the shorter the distance between them. In the construction of A^adj, the diagonal entries are automatically 1's since i = j is allowed. Each graph G has a label y. For diagnosis, the label represents its class among normal, common pneumonia, and COVID-19. For prognosis, the class indicates whether a COVID-19 positive patient develops into severe/critical illness status. Thus, the diagnosis and prognosis of COVID-19 is a task of graph classification under our setting. Given a training dataset T = {(G_1, y_1), ..., (G_M, y_M)}, the goal is to learn a mapping f : G → y, which classifies a graph G into the corresponding class y. Our model is composed of two modules: f_1 includes node convolution and feature pooling, and f_2 involves an MLP classifier which determines the class of each graph. At each layer of f_1, node embeddings and cluster membership are learnt iteratively. The first module can be written as f_1 : G → G_p, where G_p is the pooled graph with fewer nodes and a hierarchical feature representation. The second module is f_2 : G_p → y, which utilizes the learnt graph-level representation for patient diagnosis and prognosis. The two modules are integrated in an end-to-end fashion. That is to say, the cluster assignment is learnt merely based on the graph classification objective.

3.2 NODE CONVOLUTION AND FEATURE POOLING. 3.2.1 NODE CONVOLUTION. Node convolution applies the graph convolutional network to obtain a high-level node feature representation of the feature matrix X. Although several methods exist to construct the convolutional network, the method recommended by Kipf & Welling (2016) is effective for our case, and is given by X^{(l+1)} = σ( √(D^{(l)}) A^{adj,(l)} √(D^{(l)}) X^{(l)} W^{(l)} ), where D^{(l)} is the diagonal degree matrix of A^{adj,(l)} − I, and W^{(l)} ∈ R^{d×k} is a learnable weight matrix at the l-th layer. Due to the application of feature pooling, the topology of the graph changes at each layer, and thus the dimensions of the matrices involved are reduced accordingly.

3.2.2 DISTANCE AWARE POOLING METHOD. We propose an innovative pooling method, which includes graph-based clustering and feature pooling. Below, we outline the pooling module and illustrate how it is integrated into an end-to-end GCN based model.
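The following NumPy sketch (an illustration, not the authors' code; variable names follow the text) builds the cosine-similarity adjacency and the distance matrix for one scan, and applies a single node-convolution step. The text writes the normalisation as √(D^(l)) on both sides; whether the intended quantity is √D or its inverse (as in the usual Kipf & Welling normalisation) is not fully clear from the excerpt, so the code simply follows the formula as stated:

    import numpy as np

    def build_graph(X):
        # X: [N, d] matrix of slice features for one CT scan
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        A_adj = (X @ X.T) / (norms * norms.T)   # cosine similarity; diagonal = 1
        A_dis = 1.0 - A_adj                     # distance matrix
        return A_adj, A_dis

    def gcn_layer(X, A_adj, W, sigma=np.tanh):
        # X^{(l+1)} = sigma( sqrt(D) A^adj sqrt(D) X W ), with D the diagonal
        # degree matrix of (A^adj - I), as written in the text above.
        D = np.diag((A_adj - np.eye(len(A_adj))).sum(axis=1))
        D_sqrt = np.sqrt(np.clip(D, 0.0, None))  # clip: negative similarities could give negative degrees
        return sigma(D_sqrt @ A_adj @ D_sqrt @ X @ W)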
Empirically, the method is shown to be more robust for densely connected graphs. The overall structure of the pooling method is illustrated in Figure 2. Improved receptive field. The concept of the receptive field, RF, used in convolutional neural networks (CNNs) was extended to GNNs by Ranjan et al. (2020). They defined RF_node as the number of hops required to cover the neighborhood of a given node, such that given a chosen node, a cluster can be obtained based on a fixed receptive field h. However, this design may not be applicable to densely connected graphs, because one node may be connected to most of the nodes in the graph. Even for a small value of h, for instance h = 1, which is the default value (Ranjan et al., 2020), clusters formed within hop h = 1 may include most of the nodes of the graph. Hence, the clustering step may be inefficient. In addition, the number of hops can only take integer values, which loses the edge weight information in the clustering process. Therefore, we define an improved receptive field for densely connected graphs, RF^d, denoted by h_d, which can be treated as a radius centered at a given node. The value of h_d is not restricted to integers but can be any positive real number. Define N(v_i) as the local neighborhood of the node v_i, and N_{h_d}(v_i) as the RF^d neighborhood of the node v_i with radius h_d, such that ∀ v_j ∈ N(v_i), v_i and v_j are connected by the edge (v_i, v_j). Node clustering. Inspired by the clustering and ranking ideas in Ranjan et al. (2020) and Gao & Ji (2019), we propose a local node clustering and score ranking method. Each node is considered as the center of a cluster for a given h_d. Then, we score all the clusters and choose the top k proportion of them to represent the next layer's nodes with pooled feature values, where k is a hyperparameter. Clustering ranking. Given a node v_i and a radius h_d, N_{h_d}(v_i) is the corresponding RF^d neighborhood. Let I_1(v_i) be the index set of the nodes in N_{h_d}(v_i). The cluster score is defined as α_i = Σ_{m,n ∈ I_1(v_i), m ≠ n} A^dis_{m,n} / |N_{h_d}(v_i)|. If α_i is small, the nodes in N_{h_d}(v_i) are close to each other. The top k proportion of clusters form the next layer's nodes. Selecting cluster centers. We first introduce some notation: ∀ v_j ∈ N_{h_d}(v_i), define V_j = N[v_j] ∩ N_{h_d}(v_i) as the set of nodes connected to v_j in N_{h_d}(v_i), where N[v_j] is the closed neighborhood of the node v_j. Let I_2(v_j) be the index set of the nodes in V_j. The node score is defined as b_j = Σ_{c ∈ I_2(v_j)} A^dis_{j,c} / |V_j|. The node with the smallest value of b_j is chosen as the center of the cluster, and is used to represent a new node in the next layer's node representation. Center node feature pooling. Based on the node scores, we rank the nodes in V_j and assign a weight w_r to x_r, where r ∈ I_2(v_j). Let b̂ = 1 − b, where 1 is the all-ones vector and b is the vector containing the scores of the nodes in V_j. The weight vector w is defined as w_r = b̂_r / (Σ_{i ∈ I_2(v_j)} b̂_i + ε), where ε is an extremely small positive value used to avoid zeros in the denominator. The value of the pooled center feature is defined as x_p = X_r w, and x_p is then used as the center feature representing N_{h_d}(v_i). Next layer node connectivity. Following the idea in Ying et al. (2018), the connectivity of the nodes in the next layer is preserved as follows.
According to the above ranking and pooling methods, a pooled graph G_p with node set V_p is obtained. The next step is to determine the pooled adjacency matrix A^{adj,p}. Define the matrix S such that the columns of S are the weight vectors w of the top k clusters. The pooled adjacency matrix is then defined as A^{adj,p} = S^T A^adj S.
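Putting the pieces of Sections 3.2.1 and 3.2.2 together, here is a compact NumPy sketch of one distance aware pooling step. It is a simplification rather than the paper's implementation: neighbourhoods are thresholded with h_d on A^dis, node scores are computed over the whole neighbourhood rather than the exact V_j of the text, and S is assembled column by column from the selected clusters so that A^{adj,p} = S^T A^adj S:

    import numpy as np

    def distance_aware_pool(X, A_adj, A_dis, h_d, keep_ratio, eps=1e-8):
        N = len(X)
        neighborhoods = [np.where(A_dis[i] <= h_d)[0] for i in range(N)]  # RF^d balls

        # cluster scores alpha_i: total pairwise distance inside each neighborhood,
        # normalised by the neighborhood size (diagonal distances are zero)
        alpha = np.array([A_dis[np.ix_(nb, nb)].sum() / max(len(nb), 1)
                          for nb in neighborhoods])
        top = np.argsort(alpha)[: max(1, int(keep_ratio * N))]  # tightest clusters

        S = np.zeros((N, len(top)))
        X_pooled = np.zeros((len(top), X.shape[1]))
        for col, i in enumerate(top):
            nb = neighborhoods[i]
            b = A_dis[np.ix_(nb, nb)].mean(axis=1)        # node scores b_j
            b_hat = 1.0 - b
            w = b_hat / (b_hat.sum() + eps)               # pooling weights
            S[nb, col] = w
            X_pooled[col] = w @ X[nb]                     # pooled center feature x_p
        A_pooled = S.T @ A_adj @ S                        # next-layer connectivity
        return X_pooled, A_pooled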
The paper is an application of GCNs, with good features, on chest CT scan images for COVID-19 diagnosis and prognosis. First of all, this is a relevant and appreciated effort at a time when the world is fighting the pandemic, and some bonus points are directed towards that. As a whole, it adds limited research value to the representation learning community apart from being an application of GCNs, which is aligned with the application track of ICLR. The paper claims that with less than 1% of the number of parameters in the baseline 3D ResNet model, their method achieves 94.7% accuracy for diagnosis, which is marginally better than the state of the art; however, whether the model was over-fitted is not clear. Prognosis information is an added claim, though the automation part is not integrated.
The manuscript proposes a distance aware pooling method for use in graph convolutional networks for predicting whether a subject is infected with COVID-19 (diagnosis) and the progression of the disease (prognosis). Experiments were conducted on CT images from three groups: a COVID-19 group, a common pneumonia group, and a healthy group, with about 900 samples in each group. The proposed model achieved 94.7% accuracy.
Imitation with Neural Density Models
1 Introduction. Imitation Learning (IL) algorithms aim to learn optimal behavior by mimicking expert demonstrations. Perhaps the simplest IL method is Behavioral Cloning (BC) (Pomerleau, 1991), which ignores the dynamics of the underlying Markov Decision Process (MDP) that generated the demonstrations, and treats IL as a supervised learning problem of predicting optimal actions given states. Prior work showed that if the learned policy incurs a small BC loss, the worst case performance gap between the expert and imitator grows quadratically with the number of decision steps (Ross & Bagnell, 2010; Ross et al., 2011a). The crux of their argument is that policies that are "close" as measured by BC loss can induce disastrously different distributions over states when deployed in the environment. One family of solutions to mitigating such compounding errors is Interactive IL (Guo et al., 2014; Ross et al., 2011b, 2013), which involves running the imitator's policy and collecting corrective actions from an interactive expert. However, interactive expert queries are expensive and seldom available. Another family of approaches (Fu et al., 2017; Ho & Ermon, 2016; Ke et al., 2020; Kim & Park, 2018; Kostrikov et al., 2020; Wang et al., 2017) that has gained much traction is to directly minimize a statistical distance between the state-action distributions induced by the policies of the expert and the imitator, i.e., the occupancy measures ρ_{π_E} and ρ_{π_θ}. As ρ_{π_θ} is an implicit distribution induced by the policy and environment (we assume only samples can be taken from the environment dynamics and that its density is unknown), distribution matching with ρ_{π_θ} typically requires likelihood-free methods involving sampling. Sampling from ρ_{π_θ} entails running the imitator policy in the environment, which was not required by BC. While distribution matching IL requires additional access to an environment simulator, it has been shown to drastically improve demonstration efficiency, i.e., the number of demonstrations needed to succeed at IL (Ho & Ermon, 2016). A wide suite of distribution matching IL algorithms use adversarial methods to match ρ_{π_θ} and ρ_{π_E}, which requires alternating between reward (discriminator) and policy (generator) updates (Fu et al., 2017; Ho & Ermon, 2016; Ke et al., 2020; Kim et al., 2019; Kostrikov et al., 2020). A key drawback of such Adversarial Imitation Learning (AIL) methods is that they inherit the instability of alternating min-max optimization (Miyato et al., 2018; Salimans et al., 2016), which is generally not guaranteed to converge (Jin et al., 2019). Furthermore, this instability is exacerbated in the IL setting, where generator updates involve high-variance policy optimization, and leads to sub-optimal demonstration efficiency. To alleviate this instability, Brantley et al. (2020); Reddy et al. (2017); Wang et al. (2019) have proposed to do RL with fixed heuristic rewards. Wang et al. (2019), for example, use a heuristic reward that estimates the support of ρ_{π_E}, which discourages the imitator from visiting out-of-support states. While having the merit of simplicity, these approaches have no guarantee of recovering the true expert policy.
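For reference, behavioral cloning itself is only a few lines; the sketch below (an illustration, assuming a discrete action space and a policy network that outputs action logits) is the supervised baseline that the distribution matching approaches above try to improve on:

    import torch
    import torch.nn.functional as F

    def behavioral_cloning_step(policy, optimizer, states, expert_actions):
        # Treat IL as supervised learning: predict the expert's action from the
        # state and minimize a standard classification loss.
        logits = policy(states)
        loss = F.cross_entropy(logits, expert_actions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()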
In this work, we propose a new framework for IL based on obtaining a density estimate q of the expert's occupancy measure ρ_{π_E}, followed by Maximum Occupancy Entropy Reinforcement Learning (MaxOccEntRL) (Islam et al., 2019; Lee et al., 2019). In the MaxOccEntRL step, the density estimate q is used as a fixed reward for RL and the occupancy entropy H(ρ_{π_θ}) is simultaneously maximized, leading to the objective
\max_\theta \; \mathbb{E}_{\rho_{\pi_\theta}}[\log q(s, a)] + H(\rho_{\pi_\theta}).
Intuitively, our approach encourages the imitator to visit high density state-action pairs under ρ_{π_E} while maximally exploring the state-action space. There are two main challenges to this approach. First, we require accurate density estimation of ρ_{π_E}, which is particularly challenging when the state-action space is high dimensional and the number of expert demonstrations is limited. Second, in contrast to Maximum Entropy RL (MaxEntRL), MaxOccEntRL requires maximizing the entropy of an implicit density ρ_{π_θ}. We address the former challenge by leveraging advances in density estimation (Du & Mordatch, 2018; Germain et al., 2015; Song et al., 2019). For the latter challenge, we derive a non-adversarial, model-free RL objective that provably maximizes a lower bound on the occupancy entropy. As a byproduct, we also obtain a model-free RL objective that lower bounds the negative reverse Kullback-Leibler (KL) divergence between ρ_{π_θ} and ρ_{π_E}. The contribution of our work is introducing a novel family of distribution matching IL algorithms, named Neural Density Imitation (NDI), that (1) optimizes a principled lower bound to the additive inverse of the reverse KL, thereby avoiding adversarial optimization, and (2) advances state-of-the-art demonstration efficiency in IL.

2 Imitation Learning via density estimation. We model an agent's decision making process as a discounted infinite-horizon Markov Decision Process (MDP) M = (S, A, P, P_0, r, γ). Here S, A are the state and action spaces, P : S × A → Ω(S) is the transition dynamics, where Ω(S) is the set of probability measures on S, P_0 : S → R is an initial state distribution, r : S × A → R is a reward function, and γ ∈ [0, 1) is a discount factor. A parameterized policy π_θ : S → Ω(A) distills the agent's decision making rule, and {s_t, a_t}_{t=0}^∞ is the stochastic process realized by sampling an initial state s_0 ∼ P_0(s) and then running π_θ in the environment, i.e., a_t ∼ π_θ(·|s_t), s_{t+1} ∼ P(·|s_t, a_t). We denote by p_{θ, t:t+k} the joint distribution of the states {s_t, s_{t+1}, ..., s_{t+k}}, where setting k = 0 (i.e., p_{θ,t}) recovers the marginal of s_t. The (unnormalized) occupancy measure of π_θ is defined as
\rho_{\pi_\theta}(s, a) = \sum_{t=0}^{\infty} \gamma^t \, p_{\theta, t}(s) \, \pi_\theta(a \mid s).
Intuitively, ρ_{π_θ}(s, a) quantifies the frequency of visiting the state-action pair (s, a) when running π_θ for a long time, with more emphasis on earlier states. We denote policy performance as
J(\pi_\theta, \bar r) = \mathbb{E}_{\pi_\theta}\Big[\sum_{t=0}^{\infty} \gamma^t \bar r(s_t, a_t)\Big] = \mathbb{E}_{(s, a) \sim \rho_{\pi_\theta}}[\bar r(s, a)],
where r̄ is a (potentially) augmented reward function and E denotes the generalized expectation operator extended to non-normalized densities p̂ : X → R_+ and functions f : X → Y, so that E_{p̂}[f(x)] = Σ_x p̂(x) f(x). The choice of r̄ depends on the RL framework. In standard RL, we simply have r̄ = r, while in Maximum Entropy RL (MaxEntRL) (Haarnoja et al., 2017), we have r̄(s, a) = r(s, a) − log π_θ(a|s).
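As a simple illustration of these definitions (not code from the paper), the sketch below evaluates the fixed reward r̄(s, a) = log q(s, a) for a fitted density model q and the resulting single-rollout estimate of J(π_θ, r̄); log_q is assumed to be a callable returning log-densities:

    import numpy as np

    def ndi_style_rewards(log_q, states, actions):
        # r_bar(s_t, a_t) = log q(s_t, a_t), with q a density estimate of the
        # expert occupancy measure (e.g., an autoregressive or energy-based model)
        return np.array([log_q(s, a) for s, a in zip(states, actions)])

    def discounted_return(rewards, gamma):
        # single-rollout estimate of J(pi, r_bar) = sum_t gamma^t * r_bar(s_t, a_t)
        discounts = gamma ** np.arange(len(rewards))
        return float(np.sum(discounts * rewards))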
We denote the entropy of ρ_{π_θ}(s, a) as H(ρ_{π_θ}) = −E_{ρ_{π_θ}}[log ρ_{π_θ}(s, a)], and overload notation to denote the γ-discounted causal entropy of the policy π_θ as H(π_θ) = −E_{π_θ}[Σ_{t=0}^∞ γ^t log π_θ(a_t|s_t)] = −E_{ρ_{π_θ}}[log π_θ(a|s)]. Note that we use a generalized notion of entropy whose domain is extended to non-normalized densities. We can then define the Maximum Occupancy Entropy RL (MaxOccEntRL) (Islam et al., 2019; Lee et al., 2019) objective as J(π_θ, r̄ = r) + H(ρ_{π_θ}). Note the key difference between MaxOccEntRL and MaxEntRL: entropy regularization is on the occupancy measure instead of the policy, i.e., it seeks state diversity instead of action diversity. We will later show, in Section 2.2, that a lower bound on this objective reduces to a completely model-free RL objective with an augmented reward r̄. Let π_E, π_θ denote an expert and an imitator policy, respectively. Given only demonstrations D = {(s, a)_i}_{i=1}^k ∼ π_E of state-action pairs sampled from the expert, Imitation Learning (IL) aims to learn a policy π_θ which matches the expert, i.e., π_θ = π_E. Formally, IL can be recast as a distribution matching problem (Ho & Ermon, 2016; Ke et al., 2020) between the occupancy measures ρ_{π_θ} and ρ_{π_E}:
\text{maximize}_\theta \; -d(\rho_{\pi_\theta}, \rho_{\pi_E}), \quad (1)
where d(p̂, q̂) is a generalized statistical distance defined on the extended domain of (potentially) non-normalized probability densities p̂(x), q̂(x) with the same normalization factor Z > 0, i.e., Σ_x p̂(x)/Z = Σ_x q̂(x)/Z = 1. For ρ_{π_θ} and ρ_{π_E}, we have Z = 1/(1 − γ). As we are only able to take samples from the transition kernel and its density is unknown, ρ_{π_θ} is an implicit distribution. Thus, optimizing Eq. 1 typically requires likelihood-free approaches leveraging samples from ρ_{π_θ}, i.e., running π_θ in the environment. Current state-of-the-art IL approaches use likelihood-free adversarial methods to approximately optimize Eq. 1 for various choices of d, such as reverse Kullback-Leibler (KL) divergence (Fu et al., 2017; Kostrikov et al., 2020) and Jensen-Shannon (JS) divergence (Ho & Ermon, 2016). However, adversarial methods are known to suffer from optimization instability, which is exacerbated in the IL setting where one step of the alternating optimization involves RL. We instead derive a non-adversarial objective for IL. In this work, we choose d to be the (generalized) reverse-KL divergence and leave derivations for alternate choices of d to future work.
-D_{\mathrm{KL}}(\rho_{\pi_\theta} \,\|\, \rho_{\pi_E}) = \mathbb{E}_{\rho_{\pi_\theta}}[\log \rho_{\pi_E}(s, a) - \log \rho_{\pi_\theta}(s, a)] = J(\pi_\theta, \bar r = \log \rho_{\pi_E}) + H(\rho_{\pi_\theta}) \quad (2)
We see that maximizing the negative reverse-KL with respect to π_θ is equivalent to Maximum Occupancy Entropy RL (MaxOccEntRL) with log ρ_{π_E} as the fixed reward. Intuitively, this objective drives π_θ to visit states that are most likely under ρ_{π_E} while maximally spreading out probability mass, so that if two state-action pairs are equally likely, the policy visits both. There are two main challenges associated with this approach, which we address in the following sections. 1. log ρ_{π_E} is unknown and must be estimated from the demonstrations D. Density estimation remains a challenging problem, especially when there are a limited number of samples and the data is high dimensional (Liu et al., 2007). Note that simply extracting the conditional π(a|s) from an estimate of the joint ρ_{π_E}(s, a) is an alternate way to do BC and does not resolve the compounding error problem (Ross et al., 2011a). 2.
H(ρ_{π_θ}) is hard to maximize as ρ_{π_θ} is an implicit density. This challenge is similar to the difficulty of entropy-regularizing generators (Belghazi et al., 2018; Dieng et al., 2019; Mohamed & Lakshminarayanan, 2016) for Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), and most existing approaches (Dieng et al., 2019; Lee et al., 2019) use adversarial optimization.
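For completeness, expanding Eq. (2) term by term makes the reduction of reverse-KL matching to MaxOccEntRL explicit; the display below is just a restatement of the identity above with the two terms labelled:

-D_{\mathrm{KL}}(\rho_{\pi_\theta}\,\|\,\rho_{\pi_E})
  = \mathbb{E}_{\rho_{\pi_\theta}}\big[\log \rho_{\pi_E}(s,a) - \log \rho_{\pi_\theta}(s,a)\big]
  = \underbrace{\mathbb{E}_{\rho_{\pi_\theta}}\big[\log \rho_{\pi_E}(s,a)\big]}_{J(\pi_\theta,\ \bar r = \log \rho_{\pi_E})}
    \;+\; \underbrace{\big(-\mathbb{E}_{\rho_{\pi_\theta}}\big[\log \rho_{\pi_\theta}(s,a)\big]\big)}_{H(\rho_{\pi_\theta})}.

Maximizing the right-hand side over θ, with log ρ_{π_E} as a fixed reward and an occupancy-entropy bonus, is exactly the MaxOccEntRL problem described above.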
This paper introduces an approach to imitation learning based on density estimation. The approach uses the previously introduced idea of minimizing a divergence between the policy and expert occupancy measures, i.e., the state-action distributions induced by these policies. The authors propose to first estimate the expert occupancy measure using either an autoregressive density model or an energy-based model. They then use the Donsker-Varadhan representation of the KL divergence to compute a log-ratio between $p(s_{t+1}|s_t)$ and $p(s_t)$, where $s_t$ is the state at time $t$. Finally, the expert occupancy estimate and the KL representation are used as the RL reward for imitation learning.
Imitation with Neural Density Models
1 Introduction. Imitation Learning (IL) algorithms aim to learn optimal behavior by mimicking expert demonstrations. Perhaps the simplest IL method is Behavioral Cloning (BC) (Pomerleau, 1991), which ignores the dynamics of the underlying Markov Decision Process (MDP) that generated the demonstrations and treats IL as a supervised learning problem of predicting optimal actions given states. Prior work showed that if the learned policy incurs a small BC loss, the worst-case performance gap between the expert and imitator grows quadratically with the number of decision steps (Ross & Bagnell, 2010; Ross et al., 2011a). The crux of their argument is that policies that are "close" as measured by BC loss can induce disastrously different distributions over states when deployed in the environment. One family of solutions to mitigating such compounding errors is Interactive IL (Guo et al., 2014; Ross et al., 2011b, 2013), which involves running the imitator's policy and collecting corrective actions from an interactive expert. However, interactive expert queries are expensive and seldom available. Another family of approaches (Fu et al., 2017; Ho & Ermon, 2016; Ke et al., 2020; Kim & Park, 2018; Kostrikov et al., 2020; Wang et al., 2017) that has gained much traction is to directly minimize a statistical distance between state-action distributions induced by policies of the expert and imitator, i.e., the occupancy measures ρ_{π_E} and ρ_{π_θ}. As ρ_{π_θ} is an implicit distribution induced by the policy and environment (we assume only samples can be taken from the environment dynamics and that its density is unknown), distribution matching with ρ_{π_θ} typically requires likelihood-free methods involving sampling. Sampling from ρ_{π_θ} entails running the imitator policy in the environment, which was not required by BC. While distribution matching IL requires additional access to an environment simulator, it has been shown to drastically improve demonstration efficiency, i.e., the number of demonstrations needed to succeed at IL (Ho & Ermon, 2016). A wide suite of distribution matching IL algorithms use adversarial methods to match ρ_{π_θ} and ρ_{π_E}, which requires alternating between reward (discriminator) and policy (generator) updates (Fu et al., 2017; Ho & Ermon, 2016; Ke et al., 2020; Kim et al., 2019; Kostrikov et al., 2020). A key drawback of such Adversarial Imitation Learning (AIL) methods is that they inherit the instability of alternating min-max optimization (Miyato et al., 2018; Salimans et al., 2016), which is generally not guaranteed to converge (Jin et al., 2019). Furthermore, this instability is exacerbated in the IL setting, where generator updates involve high-variance policy optimization, which leads to sub-optimal demonstration efficiency. To alleviate this instability, several works (Brantley et al., 2020; Reddy et al., 2017; Wang et al., 2019) have proposed to do RL with fixed heuristic rewards. Wang et al. (2019), for example, uses a heuristic reward that estimates the support of ρ_{π_E}, which discourages the imitator from visiting out-of-support states. While having the merit of simplicity, these approaches have no guarantee of recovering the true expert policy.
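As a point of reference for the discussion above, here is a minimal behavioral-cloning sketch: a purely supervised regression from states to expert actions, which is exactly the setup that suffers from compounding errors. It assumes continuous actions and uses illustrative shapes and hyperparameters rather than anything from the paper.

import torch
import torch.nn as nn

def behavioral_cloning(states, actions, epochs=100, lr=1e-3):
    # states: (N, state_dim) tensor of expert states
    # actions: (N, action_dim) tensor of expert actions
    policy = nn.Sequential(
        nn.Linear(states.shape[1], 64), nn.Tanh(),
        nn.Linear(64, actions.shape[1]),
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = ((policy(states) - actions) ** 2).mean()  # supervised BC loss
        loss.backward()
        optimizer.step()
    return policy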
In this work, we propose a new framework for IL via obtaining a density estimate q of the expert's occupancy measure ρ_{π_E}, followed by Maximum Occupancy Entropy Reinforcement Learning (MaxOccEntRL) (Islam et al., 2019; Lee et al., 2019). In the MaxOccEntRL step, the density estimate q is used as a fixed reward for RL and the occupancy entropy H(ρ_{π_θ}) is simultaneously maximized, leading to the objective max_θ E_{ρ_{π_θ}}[log q(s, a)] + H(ρ_{π_θ}). Intuitively, our approach encourages the imitator to visit high-density state-action pairs under ρ_{π_E} while maximally exploring the state-action space. There are two main challenges to this approach. First, we require accurate density estimation of ρ_{π_E}, which is particularly challenging when the state-action space is high dimensional and the number of expert demonstrations is limited. Second, in contrast to Maximum Entropy RL (MaxEntRL), MaxOccEntRL requires maximizing the entropy of an implicit density ρ_{π_θ}. We address the former challenge leveraging advances in density estimation (Du & Mordatch, 2018; Germain et al., 2015; Song et al., 2019). For the latter challenge, we derive a non-adversarial model-free RL objective that provably maximizes a lower bound to the occupancy entropy. As a byproduct, we also obtain a model-free RL objective that lower bounds the reverse Kullback-Leibler (KL) divergence between ρ_{π_θ} and ρ_{π_E}. The contribution of our work is introducing a novel family of distribution matching IL algorithms, named Neural Density Imitation (NDI), that (1) optimizes a principled lower bound to the additive inverse of the reverse KL, thereby avoiding adversarial optimization, and (2) advances state-of-the-art demonstration efficiency in IL. 2 Imitation Learning via density estimation. We model an agent's decision making process as a discounted infinite-horizon Markov Decision Process (MDP) M = (S, A, P, P_0, r, γ). Here S, A are state-action spaces, P : S × A → Ω(S) is a transition dynamics where Ω(S) is the set of probability measures on S, P_0 : S → R is an initial state distribution, r : S × A → R is a reward function, and γ ∈ [0, 1) is a discount factor. A parameterized policy π_θ : S → Ω(A) distills the agent's decision making rule, and {s_t, a_t}_{t=0}^∞ is the stochastic process realized by sampling an initial state s_0 ∼ P_0(s) and then running π_θ in the environment, i.e., a_t ∼ π_θ(·|s_t), s_{t+1} ∼ P(·|s_t, a_t). We denote by p_{θ, t:t+k} the joint distribution of the states {s_t, s_{t+1}, ..., s_{t+k}}, where p_{θ, t} recovers the marginal of s_t. The (unnormalized) occupancy measure of π_θ is defined as ρ_{π_θ}(s, a) = Σ_{t=0}^∞ γ^t p_{θ, t}(s) π_θ(a|s). Intuitively, ρ_{π_θ}(s, a) quantifies the frequency of visiting the state-action pair (s, a) when running π_θ for a long time, with more emphasis on earlier states. We denote policy performance as J(π_θ, r̄) = E_{π_θ}[Σ_{t=0}^∞ γ^t r̄(s_t, a_t)] = E_{(s, a)∼ρ_{π_θ}}[r̄(s, a)], where r̄ is a (potentially) augmented reward function and E denotes the generalized expectation operator extended to non-normalized densities p̂ : X → R_+ and functions f : X → Y so that E_{p̂}[f(x)] = Σ_x p̂(x) f(x). The choice of r̄ depends on the RL framework. In standard RL, we simply have r̄ = r, while in Maximum Entropy RL (MaxEntRL) (Haarnoja et al., 2017), we have r̄(s, a) = r(s, a) − log π_θ(a|s).
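A small sketch of how the policy-performance quantity J(π_θ, r̄) defined above is typically estimated in practice, by Monte-Carlo rollouts with discounting. The env, policy, and reward_fn objects are generic placeholders (a Gym-style environment, a callable policy, and an augmented reward function), not components of the paper's codebase.

import numpy as np

def estimate_policy_performance(env, policy, reward_fn, gamma=0.99,
                                num_episodes=10, horizon=1000):
    # Monte-Carlo estimate of J(pi, r_bar) = E[ sum_t gamma^t * r_bar(s_t, a_t) ].
    returns = []
    for _ in range(num_episodes):
        state = env.reset()
        total, discount = 0.0, 1.0
        for _ in range(horizon):
            action = policy(state)
            total += discount * reward_fn(state, action)
            discount *= gamma
            state, _, done, _ = env.step(action)
            if done:
                break
        returns.append(total)
    return float(np.mean(returns))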
This work proposes a novel density matching method for learning from demonstration, which achieves state-of-the-art demonstration efficiency. Prior density matching methods that rely on adversarial training suffer from optimization instability. To overcome this issue, this work proposes to separate the imitation process into an expert density estimation phase and a density matching phase, where a model-free formulation is derived that provably serves as a lower bound for the reverse KL divergence between $\pi_\theta$ and the expert policy $\pi_E$.
Incremental few-shot learning via vector quantization in deep embedded space
The capability of incrementally learning new tasks without forgetting old ones is a challenging problem due to catastrophic forgetting. This challenge becomes greater when novel tasks contain very few labelled training samples. Currently, most methods are dedicated to class-incremental learning and rely on sufficient training data to learn additional weights for newly added classes. Those methods cannot be easily extended to incremental regression tasks and could suffer from severe overfitting when learning few-shot novel tasks. In this study, we propose a nonparametric method in deep embedded space to tackle incremental few-shot learning problems. The knowledge about the learned tasks is compressed into a small number of quantized reference vectors. The proposed method learns new tasks sequentially by adding more reference vectors to the model using few-shot samples in each novel task. For classification problems, we employ the nearest neighbor scheme to classify sparsely available data and incorporate intra-class variation, less-forgetting regularization, and calibration of reference vectors to mitigate catastrophic forgetting. In addition, the proposed learning vector quantization (LVQ) in deep embedded space can be customized as a kernel smoother to handle incremental few-shot regression tasks. Experimental results demonstrate that the proposed method outperforms other state-of-the-art methods in incremental learning. 1 INTRODUCTION. Incremental learning is a learning paradigm that allows the model to continually learn new tasks on novel data, without forgetting how to perform previously learned tasks (Cauwenberghs & Poggio, 2001; Kuzborskij et al., 2013; Mensink et al., 2013). The capability of incremental learning becomes more important in real-world applications, in which the deployed models are exposed to possible out-of-sample data. Typically, hundreds of thousands of labelled samples in new tasks are required to re-train or fine-tune the model (Rebuffi et al., 2017). Unfortunately, it is impractical to gather sufficient samples of new tasks in real applications. In contrast, humans can learn new concepts from just one or a few examples, without losing old knowledge. Therefore, it is desirable to develop algorithms to support incremental learning from very few samples. While a natural approach for incremental few-shot learning is to fine-tune part of the base model using novel training data (Donahue et al., 2014; Girshick et al., 2014), the model could suffer from severe over-fitting on new tasks due to a limited number of training samples. Moreover, simple fine-tuning also leads to a significant performance drop on previously learned tasks, termed catastrophic forgetting (Goodfellow et al., 2014). Recent attempts to mitigate catastrophic forgetting are generally categorized into two streams: memory replay of old training samples (Rebuffi et al., 2017; Shin et al., 2017; Kemker & Kanan, 2018) and regularization on important model parameters (Kirkpatrick et al., 2017; Zenke et al., 2017). However, those incremental learning approaches are developed and tested on unrealistic scenarios where sufficient training samples are available in novel tasks. They may not work well when the training samples in novel tasks are few (Tao et al., 2020b).
To the best of our knowledge, the majority of incremental learning methodologies focus on classification problems and cannot be extended to regression problems easily. In class-incremental learning, the model has to expand output dimensions to learn N′ novel classes while keeping the knowledge of existing N classes. Parametric models estimate additional classification weights for novel classes, while nonparametric methods compute the class centroids for novel classes. In comparison, output dimensions in regression problems do not change in incremental learning, as neither additional weights nor class centroids are applicable to regression problems. Besides, we find that catastrophic forgetting in incremental few-shot classification can be attributed to three reasons. First, the model is biased towards new classes and forgets old classes because the model is fine-tuned on new data only (Hou et al., 2019; Zhao et al., 2020). Meanwhile, the prediction accuracy on novel classes is poor due to over-fitting on few-shot training samples. Second, features of novel samples could overlap with those of old classes in the feature space, leading to ambiguity among classes in the feature space. Finally, features of old classes and classification weights are no longer compatible after the model is fine-tuned with new data. In this paper, we investigate the problem of incremental few-shot learning, where only a few training samples are available in new tasks. A unified model is learned sequentially to jointly recognize all classes or regression targets that have been encountered in previous tasks (Rebuffi et al., 2017; Wu et al., 2019). To tackle the aforementioned problems, we propose a nonparametric method to handle incremental few-shot learning based on learning vector quantization (LVQ) (Sato & Yamada, 1996) in deep embedded space. As such, the adverse effects of imbalanced weights in a parametric classifier can be completely avoided (Mensink et al., 2013; Snell et al., 2017; Yu et al., 2020). Our contributions are threefold. First, a unified framework is developed, termed incremental deep learning vector quantization (IDLVQ), to handle both incremental classification (IDLVQ-C) and regression (IDLVQ-R) problems. Second, we develop intra-class variance regularization, less-forgetting constraints, and calibration factors to mitigate catastrophic forgetting in class-incremental learning. Finally, the proposed methods achieve state-of-the-art performance on incremental few-shot classification and regression datasets. 2 RELATED WORK. Incremental learning: Some incremental learning approaches rely on memory replay of old exemplars to prevent forgetting previously learned knowledge. Old exemplars can be saved in memory (Rebuffi et al., 2017; Castro et al., 2018; Prabhu et al., 2020) or sampled from generative models (Shin et al., 2017; Kemker & Kanan, 2018; van de Ven et al., 2020). However, explicit storage of training samples is not scalable if the number of classes is large. Furthermore, it is difficult to train a reliable generative model for all classes from very few training samples. In parallel, regularization approaches do not require old exemplars and impose regularization on network weights or outputs to minimize the change of parameters that are important to old tasks (Kirkpatrick et al., 2017; Zenke et al., 2017).
To avoid rapid performance deterioration after learning a sequence of novel tasks with regularization approaches, semantic drift compensation (SDC) was developed by learning an embedding network via a triplet loss (Schroff et al., 2015) and compensating the drift of class centroids using novel data only (Yu et al., 2020). In comparison, IDLVQ-C saves only one exemplar per class and uses the saved exemplars to regularize the change in the feature extractor and calibrate the change in the reference vectors. Few-shot learning: Few-shot learning attempts to obtain models for classification or regression tasks with only a few labelled samples. Few-shot models are trained on widely-varying episodes of fake few-shot tasks with labelled samples drawn from a large-scale meta-training dataset (Vinyals et al., 2016; Finn et al., 2017; Ravi & Larochelle, 2017; Snell et al., 2017; Sung et al., 2018). Meanwhile, recent works attempt to handle novel few-shot tasks while retaining the knowledge of the base task. These methods are referred to as dynamic few-shot learning (Gidaris & Komodakis, 2018; Ren et al., 2019a; Gidaris & Komodakis, 2019). However, dynamic few-shot learning is different from incremental few-shot learning, because it relies on the entire base training dataset and an extra meta-training dataset during meta-training. In addition, dynamic few-shot learning does not accumulate knowledge for multiple novel tasks sequentially. Incremental few-shot learning: Prior works on incremental few-shot learning focus on classification problems by computing the weights for novel classes in parametric classifiers, without iterative gradient descent. For instance, the weights of novel classes can be imprinted by normalized prototypes of novel classes, while keeping the feature extractor fixed (Qi et al., 2018). Since novel weights are computed only with the samples of novel classes, the fixed feature extractor may not be compatible with the novel classification weights. More recently, a neural gas network is employed to construct an undirected graph to represent knowledge of old classes (Tao et al., 2020b;a). The vertices in the graph are constructed in an unsupervised manner using competitive Hebbian learning (Fritzke, 1995), while the feature embedding is fixed. In contrast, IDLVQ learns both the feature extractor and the reference vectors concurrently in a supervised manner. 3 BACKGROUND. 3.1 INCREMENTAL FEW-SHOT LEARNING. In this paper, incremental few-shot learning is studied for both classification and regression tasks. For classification tasks, we consider the standard class-incremental setup in the literature. After the model is trained on a base task (t = 1) with sufficient data, the model learns novel tasks sequentially. Each novel task contains a number of novel classes with only a few training samples per class. Learning a novel task (t > 1) is referred to as an incremental learning session. In task t, we have access only to the training data D^t of the current task and the previously saved exemplars (one exemplar per class in this study). Each task has a set of classes C^t = {c^t_1, ..., c^t_{n_t}}, where n_t is the number of classes in task t. In addition, it is assumed that there is no overlap between classes in different tasks, i.e., C^t ∩ C^s = ∅ for t ≠ s. After an incremental learning session, the performance of the model is evaluated on a test set that contains all previously seen classes, C = ⋃_i C^i.
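To make the protocol above concrete, the following sketch outlines a class-incremental few-shot training loop with a one-exemplar-per-class memory. It is a schematic illustration only; the fit, fit_incremental, select_exemplars, and evaluate methods are hypothetical placeholders, not the paper's API.

def class_incremental_protocol(model, base_data, novel_tasks, test_sets):
    # base_data: large labelled set for the base task (t = 1)
    # novel_tasks: list of small labelled sets for tasks t = 2, 3, ...
    # test_sets[t]: test data covering the union of all classes seen up to task t
    exemplars = {}                                         # one exemplar per class
    model.fit(base_data)                                   # train on the base task
    exemplars.update(model.select_exemplars(base_data, per_class=1))
    for t, task_data in enumerate(novel_tasks, start=2):
        # Only the current task's few-shot data and the stored exemplars are visible.
        model.fit_incremental(task_data, exemplars)
        exemplars.update(model.select_exemplars(task_data, per_class=1))
        acc = model.evaluate(test_sets[t])                 # all previously seen classes
        print(f"session {t}: accuracy on seen classes = {acc:.3f}")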
Note that our focus is not the multi-task scenario, where a task ID is exposed to the model during the test phase and the model is only required to perform one given task at a time (van de Ven & Tolias, 2019). Our model is evaluated in a task-agnostic setting, where the task ID is not exposed to the model at test time. For regression tasks, we follow a similar setting with the notable difference that the target is real-valued, y ∈ R. In addition, the target values in different tasks do not have to be mutually exclusive, unlike the class-incremental setup. 3.2 LEARNING VECTOR QUANTIZATION. Traditional nonparametric methods, such as nearest neighbors, represent knowledge and make predictions by storing the entire training set. Despite their simplicity and effectiveness, they are not scalable to a large-scale base dataset. Typically, incremental learning methods are only allowed to store a small number of exemplars to preserve the knowledge of previously learned tasks. However, randomly selected exemplars may not represent the knowledge in old tasks well. LVQ is a classical data compression method that represents the knowledge through a few learned reference vectors (Sato & Yamada, 1996; Seo & Obermayer, 2003; Biehl et al., 2007). A new sample is classified with the same label as the nearest reference vector in the input space. LVQ has been combined with deep feature extractors as an alternative to standard neural networks for better interpretability (De Vries et al., 2016; Villmann et al., 2017; Saralajew et al., 2018). The combinations of LVQ and deep feature extractors have been applied to natural language processing (NLP), facial recognition, and biometrics (Variani et al., 2015; Wang et al., 2016; Ren et al., 2019b; Leng et al., 2015). We notice that LVQ is a nonparametric method which is well suited for incremental few-shot learning because the model capacity grows by incorporating more reference vectors to learn new knowledge. For example, incremental learning vector quantization (ILVQ) has been developed to learn classification models adaptively from raw features (Xu et al., 2012). In this study, we represent the knowledge by learning reference vectors in the feature space through LVQ and adapt them in incremental few-shot learning. Compared with ILVQ by Xu et al. (2012), our method does not rely on predefined rules to update reference vectors and can be learned along with deep neural networks in an end-to-end fashion. Besides, our method uses a single reference vector for each class, while ILVQ automatically assigns different numbers of prototypes for different classes.
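The nearest-reference-vector rule described above can be written in a few lines. This is a generic sketch of deep LVQ inference, with an arbitrary feature extractor standing in for the paper's embedding network; the names are illustrative.

import numpy as np

def lvq_predict(x, encoder, reference_vectors, reference_labels):
    # encoder: any feature extractor mapping an input to a d-dimensional embedding
    # reference_vectors: (K, d) array of learned reference vectors
    # reference_labels: length-K array of their class labels
    z = encoder(x)                                          # deep embedding of x
    dists = np.linalg.norm(reference_vectors - z, axis=1)   # Euclidean distances
    return reference_labels[int(np.argmin(dists))]          # label of nearest vector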
This paper proposes a nonparametric method in deep embedded space to address incremental few-shot learning problems. By compressing the learned tasks into a small number of reference vectors, the method can add more reference vectors to the model for each novel task, which helps alleviate catastrophic forgetting and maintain performance on previously learned tasks. Finally, the paper evaluates the proposed method on both classification and regression problems.
This paper suggests using a generative model to address the problem of few-shot incremental learning. The idea is to classify input data by maintaining a population of prototypes and measuring the distance of the examples to be classified from these prototypes. As each prototype represents a class, an example to be classified is assigned to the class of the closest prototype. The neural network that transforms input data into the prototype space is learned using two loss functions: the first maximizes the margin between the distance to the correct class's prototype and the distances to the other prototypes, and the second makes the clusters as compact as possible.
Adaptive Risk Minimization: A Meta-Learning Approach for Tackling Group Shift
1 INTRODUCTION . The standard assumption in empirical risk minimization ( ERM ) is that the data distribution at test time will match the distribution at training time . When this assumption does not hold , the performance of standard ERM methods typically deteriorates rapidly , and this setting is commonly referred to as distribution or dataset shift ( Quiñonero Candela et al. , 2009 ; Lazer et al. , 2014 ) . For instance , we can imagine a handwriting classification system that , after training on a large database of past images , is deployed to specific end users . Some new users have peculiarities in their handwriting style , leading to shift in the input distribution . This test scenario must be carefully considered when building machine learning systems for real world applications . Algorithms for handling distribution shift have been studied under a number of frameworks ( Quiñonero Candela et al. , 2009 ) . Many of these frameworks aim for zero shot generalization to shift , which requires more restrictive but realistic assumptions . For example , one popular assumption is that the training data are provided in groups and that distributions at test time will represent either new group distributions or new groups altogether . This assumption is used by , e.g. , group distributionally robust optimization ( DRO ) ( Hu et al. , 2018 ; Sagawa et al. , 2020 ) , robust federated learning ( Mohri et al. , 2019 ; Li et al. , 2020 ) , and domain generalization ( Blanchard et al. , 2011 ; Gulrajani & Lopez-Paz , 2020 ) . Constructing training groups or tasks in practice is generally accomplished by using meta-data , which exists for most commonly used datasets . This assumption allows for more tractable optimization and still permits a wide range of realistic distribution shifts . However , achieving strong zero shot generalization in this setting is still a hard problem . For example , DRO methods , which focus on achieving maximal worst case performance , can often be overly pessimistic and learn models that do not perform well on the actual test distributions ( Hu et al. , 2018 ) . In this work , we take a different approach to combating group distribution shift by learning models that are able to deal with shift by adapting to the test time distribution . To do so , we assume that we can access a batch of unlabeled data points at test time – as opposed to individual isolated inputs – which can be used to implicitly infer the test distribution . This assumption is reasonable in many standard supervised learning setups . For example , we do not access single handwritten characters from an end user , but rather collections of characters such as sentences or paragraphs . When combined with the group assumption above , we arrive at a problem setting that is similar to the standard meta-learning setting ( Vinyals et al. , 2016 ) . This allows us to extend well established tools and techniques from meta-learning to address distribution shift problems . Meta-learning typically assumes that training data are grouped into tasks and new tasks are encountered at meta-test time , however these new tasks still include labeled examples for adaptation . As illustrated in Figure 1 , we instead aim to train a model that uses unlabeled data to adapt to the test distribution , thereby not requiring the model to generalize zero shot to all test distributions as in prior approaches . 
The main contribution of this paper is to introduce the framework of adaptive risk minimization (ARM), in which models have the opportunity to adapt to the data distribution at test time based on unlabeled data points. This contribution provides a principled approach for designing meta-learning methods to tackle distribution shift. We introduce an algorithm and instantiate a set of methods for solving ARM that, given a set of candidate distribution shifts, meta-learns a model that is adaptable to these shifts. One such method is based on meta-training a model such that simply updating batch normalization statistics (Ioffe & Szegedy, 2015) provides effective adaptation at test time, and we demonstrate that this simple approach can produce surprisingly strong results. Our experiments demonstrate that the proposed methods, by leveraging the meta-training phase, are able to outperform prior methods for handling distribution shift in image classification settings exhibiting group shift, including benchmarks for federated learning (Caldas et al., 2019) and testing image classifier robustness (Hendrycks & Dietterich, 2019). 2 RELATED WORK. A number of prior works have studied distributional shift in various forms (Quiñonero Candela et al., 2009). In this section, we review prior work in robust optimization, meta-learning, and adaptation. Robust optimization. DRO methods optimize machine learning systems to be robust to adversarial data distributions, thus optimizing for worst-case performance against distribution shift (Globerson & Roweis, 2006; Ben-Tal et al., 2013; Liu & Ziebart, 2014; Esfahani & Kuhn, 2015; Miyato et al., 2015; Duchi et al., 2016; Blanchet et al., 2016). Recent work has shown that these algorithms can be utilized with deep neural networks, with additional care taken for regularization and model capacity (Sagawa et al., 2020). Unlike DRO methods, ARM methods do not require the model to generalize zero shot to all test time distribution shifts, but instead train it to adapt to these shifts. Also of particular interest are methods for robustness or adaptation to different users (Horiguchi et al., 2018; Chen et al., 2018; Jiang et al., 2019; Fallah et al., 2020; Lin et al., 2020), a setting commonly referred to as robust or fair federated learning (McMahan et al., 2017; Mohri et al., 2019; Li et al., 2020). Unlike these works, we consider the federated learning problem setting in which we do not assume access to any labels from any test users, as we partition users into disjoint train and test sets. We argue that this is a realistic setting for many practical machine learning systems – oftentimes, the only available information from the end user is an unlabeled batch of data. Meta-learning. Meta-learning (Schmidhuber, 1987; Bengio et al., 1992; Thrun & Pratt, 1998; Hochreiter et al., 2001) has been most extensively studied in the context of few-shot supervised learning methods (Santoro et al., 2016; Vinyals et al., 2016; Ravi & Larochelle, 2017; Finn et al., 2017; Snell et al., 2017), i.e., labeled adaptation. The aim of this work is to extend meta-learning paradigms to problems requiring unlabeled adaptation, with the goal of tackling distribution shift. We demonstrate in the next section how paradigms such as contextual meta-learning (Garnelo et al., 2018; Requeima et al., 2019) are readily extended using the ARM framework.
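To illustrate the adaptation mechanism mentioned above, here is a minimal sketch of prediction with batch-normalization statistics recomputed on an unlabeled test batch. It shows the mechanism only, under the assumption of a PyTorch model with batch-norm layers; it is not the paper's full meta-trained procedure.

import torch
import torch.nn as nn

@torch.no_grad()
def predict_with_bn_adaptation(model, unlabeled_batch):
    # Put only the batch-norm layers in training mode so they normalize with the
    # statistics of the current unlabeled batch instead of the stored running averages.
    model.eval()
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d)):
            module.train()
    logits = model(unlabeled_batch)   # adaptation and prediction in one forward pass
    model.eval()                      # restore evaluation mode afterwards
    return logits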
Some other meta-learning methods adapt using both labeled and unlabeled data , either in the semi supervised learning setting ( Ren et al. , 2018 ; Zhang et al. , 2018 ; Li et al. , 2019 ) or the transductive learning setting ( Liu et al. , 2019 ; Antoniou & Storkey , 2019 ; Hu et al. , 2020 ) . These works do not focus on the same setting of distribution shift and all assume access to labeled data for adaptation . Prior works in meta-learning for unlabeled adaptation include Yu et al . ( 2018 ) , which adapts a policy to imitate human demonstrations in the context of robotic learning , and Metz et al . ( 2019 ) , which meta-learns an update rule for unsupervised representation learning , though they still require labels to learn a predictive model . Unlike these prior works , the ARM framework facilitates the development of meta-learning methods for quickly adapting a predictive model using unlabeled examples . Adaptation to shift . Unlabeled adaptation has primarily been studied separately from meta-learning . Domain adaptation is a prominent framework that assumes access to test examples at training time , similar to transductive learning ( Vapnik , 1998 ) . Some of these methods , such as importance weighting approaches ( Shimodaira , 2000 ) , only handle a single predefined shift and do not constitute test time adaptation ( Csurka , 2017 ; Wilson & Cook , 2020 ) . Certain domain adaptation methods , however , are applicable in the setting with training groups , such as methods for learning invariant features ( Ganin & Lempitsky , 2015 ; Li et al. , 2018 ) , and we compare to these methods in Section 4 . Several methods for adaptation at test time have been developed specifically for dealing with label shift ( Royer & Lampert , 2015 ; Lipton et al. , 2018 ; Sulc & Matas , 2019 ) . Other methods adapt using statistics of the test inputs ( Li et al. , 2017 ) or optimize self-supervised surrogate losses ( Sun et al. , 2020 ) , and these methods have been shown to perform well across a number of image classification domains . We also compare against these prior methods in Section 4 . 3 ADAPTIVE RISK MINIMIZATION . In this section , we first formally describe the ARM problem setting , which builds on the settings used in prior work for tackling distribution shift . The novel aspect of the ARM setting is that it is amenable to meta-learning solutions to shift , and we demonstrate this by proposing an objective for the ARM setting that resembles typical meta-learning objectives . The problem setting and objective together constitute the ARM problem formulation . We subsequently propose a general algorithm as well as specific meta-learning approaches for solving the ARM problem . 3.1 THE ARM PROBLEM SETTING . A key goal in machine learning is to develop methods that can go beyond the standard ERM setting and generalize in the face of distribution shift . Accomplishing this goal necessitates the use of additional assumptions beyond ERM , and we wish to carefully craft these assumptions such that they fulfill two properties : they are realistic and applicable to real world problems , and they allow for powerful and tractable methods . In this work , we choose two assumptions that are well established in the literature on distribution shift , in order to fulfill the first property , and we develop a novel meta-learning framework using these assumptions , thus fulfilling the second . 
The first assumption is that the training data are provided in groups, which, as discussed above, mirrors analogous assumptions made in group DRO (Hu et al., 2018), federated learning (McMahan et al., 2017), and meta-learning (Vinyals et al., 2016), among other settings. The second assumption is that we observe batches of test points all together, rather than one point at a time. Assuming access to multiple test points has been standard in domain adaptation (Csurka, 2017; Wilson & Cook, 2020), which makes this assumption at training time, as well as in recent works studying test time adaptation (Li et al., 2017; Sun et al., 2020; Wang et al., 2020). To our knowledge, these assumptions have not been considered simultaneously in prior work. However, as we detail in this section, it is their conjunction that allows us to develop meta-learning solutions to shift. In the ARM problem setting, we assume access to a training dataset that consists of N labeled data points (x^(i), y^(i), z^(i)) sampled i.i.d. from the training distribution p. As noted, this differs from standard supervised learning in that we additionally observe the group z^(i) associated with each point, which is a discrete value z ∈ {1, . . ., S} that can represent tasks, users, or other types of meta-data. The goal is to learn a model g(· ; θ) : X → Y that is parameterized by θ ∈ Θ and predicts the output y ∈ Y given the input x ∈ X. At test time, we are given batches of K unlabeled data points, where each batch is drawn from a distribution that may differ from both p and the other batch distributions, and we do not observe either y or z. For example, we can imagine a test scenario that separately considers each user's images, as discussed in Section 1.
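The setting above suggests a natural meta-training loop: within each sampled group, adapt the model using only the inputs, then evaluate the adapted model with the labels and backpropagate through the adaptation. The sketch below is a generic rendering of that idea; adapt_fn is a placeholder for any unlabeled adaptation mechanism (e.g., a context network or the batch-norm update discussed earlier), and the names are illustrative rather than the paper's API.

import torch

def arm_meta_train_step(model, adapt_fn, optimizer, loss_fn, group_batches):
    # group_batches: dict mapping a group id z to a labelled batch (x, y) from that group
    optimizer.zero_grad()
    total_loss = 0.0
    for z, (x, y) in group_batches.items():
        adapted_model = adapt_fn(model, x)        # adaptation sees x but never y
        logits = adapted_model(x)                 # predict with the adapted model
        total_loss = total_loss + loss_fn(logits, y)
    total_loss = total_loss / len(group_batches)
    total_loss.backward()                         # gradients flow through the adaptation
    optimizer.step()
    return float(total_loss)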
This paper studies domain adaptation under the assumption that only unlabeled target data is available in training and the domain shift follows a special group shift. The main idea of the proposed method is an adaptation model that takes in only unlabeled data and outputs updated parameters. The proposed method also involves test-time training, meaning the adaptation model takes in unlabeled training data during training but unlabeled target data in the adaptation phase. The method is called adaptive risk minimization, and the paper provides two meta-learning approaches, contextual and gradient-based. In the experiments, the proposed method outperforms a limited set of baselines. The paper also discusses a few cases where the assumptions are violated, such as when the group indicators are unknown.
Adaptive Risk Minimization: A Meta-Learning Approach for Tackling Group Shift
1 INTRODUCTION . The standard assumption in empirical risk minimization ( ERM ) is that the data distribution at test time will match the distribution at training time . When this assumption does not hold , the performance of standard ERM methods typically deteriorates rapidly , and this setting is commonly referred to as distribution or dataset shift ( Quiñonero Candela et al. , 2009 ; Lazer et al. , 2014 ) . For instance , we can imagine a handwriting classification system that , after training on a large database of past images , is deployed to specific end users . Some new users have peculiarities in their handwriting style , leading to shift in the input distribution . This test scenario must be carefully considered when building machine learning systems for real world applications . Algorithms for handling distribution shift have been studied under a number of frameworks ( Quiñonero Candela et al. , 2009 ) . Many of these frameworks aim for zero shot generalization to shift , which requires more restrictive but realistic assumptions . For example , one popular assumption is that the training data are provided in groups and that distributions at test time will represent either new group distributions or new groups altogether . This assumption is used by , e.g. , group distributionally robust optimization ( DRO ) ( Hu et al. , 2018 ; Sagawa et al. , 2020 ) , robust federated learning ( Mohri et al. , 2019 ; Li et al. , 2020 ) , and domain generalization ( Blanchard et al. , 2011 ; Gulrajani & Lopez-Paz , 2020 ) . Constructing training groups or tasks in practice is generally accomplished by using meta-data , which exists for most commonly used datasets . This assumption allows for more tractable optimization and still permits a wide range of realistic distribution shifts . However , achieving strong zero shot generalization in this setting is still a hard problem . For example , DRO methods , which focus on achieving maximal worst case performance , can often be overly pessimistic and learn models that do not perform well on the actual test distributions ( Hu et al. , 2018 ) . In this work , we take a different approach to combating group distribution shift by learning models that are able to deal with shift by adapting to the test time distribution . To do so , we assume that we can access a batch of unlabeled data points at test time – as opposed to individual isolated inputs – which can be used to implicitly infer the test distribution . This assumption is reasonable in many standard supervised learning setups . For example , we do not access single handwritten characters from an end user , but rather collections of characters such as sentences or paragraphs . When combined with the group assumption above , we arrive at a problem setting that is similar to the standard meta-learning setting ( Vinyals et al. , 2016 ) . This allows us to extend well established tools and techniques from meta-learning to address distribution shift problems . Meta-learning typically assumes that training data are grouped into tasks and new tasks are encountered at meta-test time , however these new tasks still include labeled examples for adaptation . As illustrated in Figure 1 , we instead aim to train a model that uses unlabeled data to adapt to the test distribution , thereby not requiring the model to generalize zero shot to all test distributions as in prior approaches . 
The main contribution of this paper is to introduce the framework of adaptive risk minimization ( ARM ) , in which models have the opportunity to adapt to the data distribution at test time based on unlabeled data points . This contribution provides a principled approach for designing meta-learning methods to tackle distribution shift . We introduce an algorithm and instantiate a set of methods for solving ARM that , given a set of candidate distribution shifts , meta-learns a model that is adaptable to these shifts . One such method is based on meta-training a model such that simply updating batch normalization statistics ( Ioffe & Szegedy , 2015 ) provides effective adaptation at test time , and we demonstrate that this simple approach can produce surprisingly strong results . Our experiments demonstrate that the proposed methods , by leveraging the meta-training phase , are able to outperform prior methods for handling distribution shift in image classification settings exhibiting group shift , including benchmarks for federated learning ( Caldas et al. , 2019 ) and testing image classifier robustness ( Hendrycks & Dietterich , 2019 ) . 2 RELATED WORK . A number of prior works have studied distributional shift in various forms ( Quiñonero Candela et al. , 2009 ) . In this section , we review prior work in robust optimization , meta-learning , and adaptation . Robust optimization . DRO methods optimize machine learning systems to be robust to adversarial data distributions , thus optimizing for worst case performance against distribution shift ( Globerson & Roweis , 2006 ; Ben-Tal et al. , 2013 ; Liu & Ziebart , 2014 ; Esfahani & Kuhn , 2015 ; Miyato et al. , 2015 ; Duchi et al. , 2016 ; Blanchet et al. , 2016 ) . Recent work has shown that these algorithms can be utilized with deep neural networks , with additional care taken for regularization and model capacity ( Sagawa et al. , 2020 ) . Unlike DRO methods , ARM methods do not require the model to generalize zero shot to all test time distribution shifts , but instead trains it to adapt to these shifts . Also of particular interest are methods for robustness or adaptation to different users ( Horiguchi et al. , 2018 ; Chen et al. , 2018 ; Jiang et al. , 2019 ; Fallah et al. , 2020 ; Lin et al. , 2020 ) , a setting commonly referred to as robust or fair federated learning ( McMahan et al. , 2017 ; Mohri et al. , 2019 ; Li et al. , 2020 ) . Unlike these works , we consider the federated learning problem setting in which we do not assume access to any labels from any test users , as we partition users into disjoint train and test sets . We argue that this is a realistic setting for many practical machine learning systems – oftentimes , the only available information from the end user is an unlabeled batch of data . Meta-learning . Meta-learning ( Schmidhuber , 1987 ; Bengio et al. , 1992 ; Thrun & Pratt , 1998 ; Hochreiter et al. , 2001 ) has been most extensively studied in the context of few shot supervised learning methods ( Santoro et al. , 2016 ; Vinyals et al. , 2016 ; Ravi & Larochelle , 2017 ; Finn et al. , 2017 ; Snell et al. , 2017 ) , i.e. , labeled adaptation . The aim of this work is to extend meta-learning paradigms to problems requiring unlabeled adaptation , with the goal of tackling distribution shift . We demonstrate in the next section how paradigms such as contextual meta-learning ( Garnelo et al. , 2018 ; Requeima et al. , 2019 ) are readily extended using the ARM framework . 
Some other meta-learning methods adapt using both labeled and unlabeled data , either in the semi supervised learning setting ( Ren et al. , 2018 ; Zhang et al. , 2018 ; Li et al. , 2019 ) or the transductive learning setting ( Liu et al. , 2019 ; Antoniou & Storkey , 2019 ; Hu et al. , 2020 ) . These works do not focus on the same setting of distribution shift and all assume access to labeled data for adaptation . Prior works in meta-learning for unlabeled adaptation include Yu et al . ( 2018 ) , which adapts a policy to imitate human demonstrations in the context of robotic learning , and Metz et al . ( 2019 ) , which meta-learns an update rule for unsupervised representation learning , though they still require labels to learn a predictive model . Unlike these prior works , the ARM framework facilitates the development of meta-learning methods for quickly adapting a predictive model using unlabeled examples . Adaptation to shift . Unlabeled adaptation has primarily been studied separately from meta-learning . Domain adaptation is a prominent framework that assumes access to test examples at training time , similar to transductive learning ( Vapnik , 1998 ) . Some of these methods , such as importance weighting approaches ( Shimodaira , 2000 ) , only handle a single predefined shift and do not constitute test time adaptation ( Csurka , 2017 ; Wilson & Cook , 2020 ) . Certain domain adaptation methods , however , are applicable in the setting with training groups , such as methods for learning invariant features ( Ganin & Lempitsky , 2015 ; Li et al. , 2018 ) , and we compare to these methods in Section 4 . Several methods for adaptation at test time have been developed specifically for dealing with label shift ( Royer & Lampert , 2015 ; Lipton et al. , 2018 ; Sulc & Matas , 2019 ) . Other methods adapt using statistics of the test inputs ( Li et al. , 2017 ) or optimize self-supervised surrogate losses ( Sun et al. , 2020 ) , and these methods have been shown to perform well across a number of image classification domains . We also compare against these prior methods in Section 4 . 3 ADAPTIVE RISK MINIMIZATION . In this section , we first formally describe the ARM problem setting , which builds on the settings used in prior work for tackling distribution shift . The novel aspect of the ARM setting is that it is amenable to meta-learning solutions to shift , and we demonstrate this by proposing an objective for the ARM setting that resembles typical meta-learning objectives . The problem setting and objective together constitute the ARM problem formulation . We subsequently propose a general algorithm as well as specific meta-learning approaches for solving the ARM problem . 3.1 THE ARM PROBLEM SETTING . A key goal in machine learning is to develop methods that can go beyond the standard ERM setting and generalize in the face of distribution shift . Accomplishing this goal necessitates the use of additional assumptions beyond ERM , and we wish to carefully craft these assumptions such that they fulfill two properties : they are realistic and applicable to real world problems , and they allow for powerful and tractable methods . In this work , we choose two assumptions that are well established in the literature on distribution shift , in order to fulfill the first property , and we develop a novel meta-learning framework using these assumptions , thus fulfilling the second . 
The first assumption is that the training data are provided in groups , which , as discussed above , mirrors analogous assumptions made in group DRO ( Hu et al. , 2018 ) , federated learning ( McMahan et al. , 2017 ) , and meta-learning ( Vinyals et al. , 2016 ) , among other settings . The second assumption is that we observe batches of test points all together , rather than one point at a time . Assuming access to multiple test points has been standard in domain adaptation ( Csurka , 2017 ; Wilson & Cook , 2020 ) , which makes this assumption for training , as well as recent works studying test time adaptation ( Li et al. , 2017 ; Sun et al. , 2020 ; Wang et al. , 2020 ) . To our knowledge , these assumptions have not been considered simultaneously in prior work . However , as we detail in this section , it is their conjunction that allows us to develop meta-learning solutions to shift . In the ARM problem setting , we assume access to a training dataset that consists of N labeled data points $( x^{(i)} , y^{(i)} , z^{(i)} )$ sampled i.i.d. from the training distribution p . As noted , this differs from standard supervised learning in that we additionally observe the group $z^{(i)}$ associated with each point , which is a discrete value $z \in \{ 1 , \ldots , S \}$ that can represent tasks , users , or other types of meta-data . The goal is to learn a model $g ( \cdot \, ; \theta ) : \mathcal{X} \rightarrow \mathcal{Y}$ that is parameterized by $\theta \in \Theta$ and predicts the output $y \in \mathcal{Y}$ given the input $x \in \mathcal{X}$ . At test time , we are given batches of K unlabeled data points , where each batch is drawn from a distribution that may differ from both p and the other batch distributions , and we do not observe either y or z . For example , we can imagine a test scenario that separately considers each user ’ s images , as discussed in Section 1 .
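To make the setting concrete, the following is a minimal PyTorch-style sketch of how a model could be meta-trained to adapt from an unlabeled batch and then used at test time. The module names (`context_net`, `model`), the interface `model(x, context)`, and the simple mean-pooled context are illustrative assumptions, not the paper's exact ARM instantiations (e.g., the contextual or batch-norm variants), which are specified later in the paper.

```python
import torch
import torch.nn.functional as F

def arm_meta_training_step(model, context_net, optimizer, x_batch, y_batch):
    """One meta-training step on a batch drawn from a single training group.

    x_batch: (K, ...) inputs from one group, y_batch: (K,) labels.
    Adaptation uses only x_batch (no labels), mirroring what is available at test time.
    """
    # "Adapt" using unlabeled data: summarize the batch into a context vector.
    context = context_net(x_batch).mean(dim=0, keepdim=True)   # (1, d)
    context = context.expand(x_batch.size(0), -1)               # broadcast to batch

    # Predict with the adapted model and compute the supervised loss.
    logits = model(x_batch, context)
    loss = F.cross_entropy(logits, y_batch)

    # Backpropagate through both prediction and adaptation, so the model is
    # meta-trained to benefit from the unlabeled-batch adaptation.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def arm_test_time_predict(model, context_net, x_test_batch):
    """At test time we only see a batch of K unlabeled points from one distribution."""
    with torch.no_grad():
        context = context_net(x_test_batch).mean(dim=0, keepdim=True)
        context = context.expand(x_test_batch.size(0), -1)
        return model(x_test_batch, context).argmax(dim=-1)
```

The key design point the sketch illustrates is that the same unlabeled-batch adaptation procedure is run both inside the training loop and at deployment, so the model is optimized to exploit it rather than to generalize zero-shot.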
The authors tackle the *distribution shift* problem with a meta-learning approach and propose an algorithm named ARM. Following the standard meta-learning regime, ARM uses an updated version of the parameters $\theta'$ to calculate the loss for back-propagation. Several specific implementations are put forward, i.e., the contextual and gradient-based methods. Experiments are performed on small- and large-scale datasets to demonstrate the effectiveness of the proposed algorithm. A detailed ablation study and qualitative analysis are also conducted.
SP:3f2e132cbd2eaf710316773a3f38c84c24f23b63
Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity
1 Introduction . Deep neural networks have been applied to a wide range of artificial intelligence tasks such as computer vision , natural language processing , and signal processing with remarkable performance ( Ren et al. , 2015 ; Devlin et al. , 2018 ; Oord et al. , 2016 ) . However , it has been shown that neural networks have excessive representation capability and can even fit random data ( Zhang et al. , 2016 ) . Due to these characteristics , the neural networks can easily overfit to training data and show a large generalization gap when tested on previously unseen data . To improve the generalization performance of the neural networks , a body of research has been proposed to develop regularizers based on priors or to augment the training data with task-dependent transforms ( Bishop , 2006 ; Cubuk et al. , 2019 ) . Recently , a new task-independent data augmentation technique , called mixup , has been proposed ( Zhang et al. , 2018 ) . The original mixup , called Input Mixup , linearly interpolates a given pair of input data and can be easily applied to various data and tasks , improving the generalization performance and robustness of neural networks . Other mixup methods , such as manifold mixup ( Verma et al. , 2019 ) or CutMix ( Yun et al. , 2019 ) , have also been proposed addressing different ways to mix a given pair of input data . Puzzle Mix ( Kim et al. , 2020 ) utilizes saliency information and local statistics to ensure that mixup data have rich supervisory signals . However , these approaches only consider mixing a given random pair of input data and do not fully utilize the rich informative supervisory signal in training data , including the collection of object saliency , relative arrangement , etc . In this work , we simultaneously consider mix-matching different salient regions among all input data so that each generated mixup example accumulates as many salient regions from multiple input data as possible while ensuring diversity among the generated mixup examples . To this end , we propose a novel optimization problem that maximizes the saliency measure of each individual mixup example while encouraging diversity among them collectively . This formulation results in a novel discrete submodular-supermodular objective . We also propose a practical modular approximation method for the supermodular term and present an efficient iterative submodular minimization algorithm suitable for minibatch-based mixup for neural network training . As illustrated in Figure 1 , while the proposed method , Co-Mixup , mix-matches the collection of salient regions utilizing inter-arrangements among input data , the existing methods do not consider the saliency information ( Input Mixup & CutMix ) or disassemble salient parts ( Puzzle Mix ) . We verify the performance of the proposed method by training classifiers on CIFAR-100 , Tiny-ImageNet , ImageNet , and the Google commands dataset ( Krizhevsky et al. , 2009 ; Chrabaszcz et al. , 2017 ; Deng et al. , 2009 ; Warden , 2017 ) . Our experiments show that the models trained with Co-Mixup achieve state-of-the-art performance compared to other mixup baselines . In addition to the generalization experiment , we conduct weakly-supervised object localization and robustness tasks and confirm Co-Mixup outperforms other mixup baselines . 2 Related works . Mixup Data augmentation has been widely used to prevent deep neural networks from over-fitting to the training data ( Bishop , 1995 ) .
The majority of conventional augmentation methods generate new data by applying transformations depending on the data type or the target task ( Cubuk et al. , 2019 ) . Zhang et al . ( 2018 ) proposed mixup , which can be independently applied to various data types and tasks , and improves generalization and robustness of deep neural networks . Input mixup ( Zhang et al. , 2018 ) linearly interpolates between two input data and utilizes the mixed data with the corresponding soft label for training . Following this work , manifold mixup ( Verma et al. , 2019 ) applies the mixup in the hidden feature space , and CutMix ( Yun et al. , 2019 ) suggests a spatial copy and paste based mixup strategy on images . Guo et al . ( 2019 ) trains an additional neural network to optimize a mixing ratio . Puzzle Mix ( Kim et al. , 2020 ) proposes a mixup method based on saliency and local statistics of the given data . In this paper , we propose a discrete optimization-based mixup method simultaneously finding the best combination of collections of salient regions among all input data while encouraging diversity among the generated mixup examples . Saliency The seminal work from Simonyan et al . ( 2013 ) generates a saliency map using a pre-trained neural network classifier without any additional training of the network . Following the work , measuring the saliency of data using neural networks has been studied to obtain a more precise saliency map ( Zhao et al. , 2015 ; Wang et al. , 2015 ) or to reduce the saliency computation cost ( Zhou et al. , 2016 ; Selvaraju et al. , 2017 ) . The saliency information is widely applied to the tasks in various domains , such as object segmentation or speech recognition ( Jung and Kim , 2011 ; Kalinli and Narayanan , 2007 ) . Submodular-Supermodular optimization A submodular ( supermodular ) function is a set function with diminishing ( increasing ) returns property ( Narasimhan and Bilmes , 2005 ) . It is known that any set function can be expressed as the sum of a submodular and supermodular function ( Lovász , 1983 ) , called BP function . Various problems in machine learning can be naturally formulated as BP functions ( Fujishige , 2005 ) , but it is known to be NP-hard ( Lovász , 1983 ) . Therefore , approximate algorithms based on modular approximations of submodular or supermodular terms have been developed ( Iyer and Bilmes , 2012 ) . Our formulation falls into a category of BP function consisting of a smoothness function within a mixed output ( submodular ) and a diversity function among the mixup outputs ( supermodular ) . 3 Preliminary . Existing mixup methods return $\{ h ( x_1 , x_{i(1)} ) , \ldots , h ( x_m , x_{i(m)} ) \}$ for given input data $\{ x_1 , \ldots , x_m \}$ , where $h : \mathcal{X} \times \mathcal{X} \rightarrow \mathcal{X}$ is a mixup function and $( i(1) , \ldots , i(m) )$ is a random permutation of the data indices . In the case of input mixup , $h ( x , x' )$ is $\lambda x + ( 1 - \lambda ) x'$ , where $\lambda \in [ 0 , 1 ]$ is a random mixing ratio . Manifold mixup applies input mixup in the hidden feature space , and CutMix uses $h ( x , x' ) = \mathbf{1}_B \odot x + ( 1 - \mathbf{1}_B ) \odot x'$ , where $\mathbf{1}_B$ is a binary rectangular-shape mask for an image $x$ and $\odot$ represents the element-wise product . Puzzle Mix defines $h ( x , x' )$ as $z \odot \Pi^{\top} x + ( 1 - z ) \odot \Pi'^{\top} x'$ , where $\Pi$ is a transport plan and $z$ is a discrete mask . In detail , for $x \in \mathbb{R}^{n}$ , $\Pi \in \{ 0 , 1 \}^{n \times n}$ and $z \in \mathcal{L}^{n}$ for $\mathcal{L} = \{ l/L \mid l = 0 , 1 , \ldots , L \}$ . In this work , we extend the existing mixup functions as $h : \mathcal{X}^{m} \rightarrow \mathcal{X}^{m'}$ , which performs mixup on a collection of input data and returns another collection .
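As a concrete illustration of the pairwise mixup functions $h ( x , x' )$ recalled above, the following is a small NumPy sketch of Input Mixup and CutMix for a single image pair. The function names, the Beta(1, 1) mixing ratio, and the rectangular-mask construction are illustrative assumptions rather than the exact configurations of the cited works.

```python
import numpy as np

def input_mixup(x1, x2, lam):
    """Input Mixup: convex combination of two inputs with ratio lam in [0, 1]."""
    return lam * x1 + (1.0 - lam) * x2

def cutmix(x1, x2, rng=np.random):
    """CutMix: paste a random rectangular region of x2 into x1.

    x1, x2: arrays of shape (C, H, W).  The mask plays the role of 1_B in the text,
    and multiplication by the mask is the element-wise product.
    """
    c, h, w = x1.shape
    lam = rng.beta(1.0, 1.0)                               # target area ratio kept from x1
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.randint(h), rng.randint(w)                # random box center
    top, bottom = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    left, right = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)
    mask = np.ones((1, h, w))
    mask[:, top:bottom, left:right] = 0.0                  # region taken from x2
    mixed = mask * x1 + (1.0 - mask) * x2
    soft_label_ratio = mask.mean()                         # fraction of x1 in the mixed image
    return mixed, soft_label_ratio
```

Both functions mix exactly one pair of inputs; the extension $h : \mathcal{X}^{m} \rightarrow \mathcal{X}^{m'}$ introduced above replaces this pairwise scheme with a joint assignment over the whole batch.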
Let $x_B \in \mathbb{R}^{m \times n}$ denote the batch of input data in matrix form . Then , our proposed mixup function is $h ( x_B ) = ( g ( z_1 \odot x_B ) , \ldots , g ( z_{m'} \odot x_B ) )$ , where $z_j \in \mathcal{L}^{m \times n}$ for $j = 1 , \ldots , m'$ with $\mathcal{L} = \{ l/L \mid l = 0 , 1 , \ldots , L \}$ and $g : \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{n}$ returns a column-wise sum of a given matrix . Note that , the kth column of $z_j$ , denoted as $z_{j,k} \in \mathcal{L}^{m}$ , can be interpreted as the mixing ratio among m inputs at the kth location . Also , we enforce $\| z_{j,k} \|_1 = 1$ to maintain the overall statistics of the given input batch . Given the one-hot target labels $y_B \in \{ 0 , 1 \}^{m \times C}$ of the input data with C classes , we generate soft target labels for mixup data as $y_B^{\top} \tilde{o}_j$ for $j = 1 , \ldots , m'$ , where $\tilde{o}_j = \frac{1}{n} \sum_{k=1}^{n} z_{j,k} \in [ 0 , 1 ]^{m}$ represents the input source ratio of the jth mixup data . We train models to estimate the soft target labels by minimizing the cross-entropy loss . 4 Method . 4.1 Objective . Saliency Our main objective is to maximize the saliency measure of mixup data while maintaining the local smoothness of data , i.e. , spatially nearby patches in a natural image look similar , temporally adjacent signals have similar spectrum in speech , etc . ( Kim et al. , 2020 ) . As we can see from CutMix in Figure 1 , disregarding saliency can give a misleading supervisory signal by generating mixup data that does not match with the target soft label . While the existing mixup methods only consider the mixup between two inputs , we generalize the number of inputs m to any positive integer . Note , each kth location of outputs has m candidate sources from the inputs . We model the unary labeling cost as the negative value of the saliency , and denote the cost vector at the kth location as $c_k \in \mathbb{R}^{m}$ . For the saliency measure , we calculate the gradient values of training loss with respect to the input and measure the $\ell_2$ norm of the gradient values across input channels ( Simonyan et al. , 2013 ; Kim et al. , 2020 ) . Note that this method does not require any additional architecture dependent modules for saliency calculation . In addition to the unary cost , we encourage adjacent locations to have similar labels for the smoothness of each mixup data . In summary , the objective can be formulated as follows :

$$\sum_{j=1}^{m'} \sum_{k=1}^{n} c_k^{\top} z_{j,k} + \beta \sum_{j=1}^{m'} \sum_{( k , k' ) \in \mathcal{N}} \left( 1 - z_{j,k}^{\top} z_{j,k'} \right) - \eta \sum_{j=1}^{m'} \sum_{k=1}^{n} \log p ( z_{j,k} ) ,$$

where the prior p is given by $z_{j,k} \sim \frac{1}{L} \mathrm{Multi} ( L , \lambda )$ with $\lambda = ( \lambda_1 , \ldots , \lambda_m ) \sim \mathrm{Dirichlet} ( \alpha , \ldots , \alpha )$ , which is a generalization of the mixing ratio distribution of Zhang et al . ( 2018 ) , and $\mathcal{N}$ denotes a set of adjacent locations ( i.e. , neighboring image patches in vision , subsequent spectrums in speech , etc. ) . Diversity Note that the naive generalization above leads to the identical outputs because the objective is separable and identical for each output . In order to obtain diverse mixup outputs , we model a similarity penalty between outputs . First , we represent the input source information of the jth output by aggregating assigned labels as $\sum_{k=1}^{n} z_{j,k}$ . For simplicity , let us denote $\sum_{k=1}^{n} z_{j,k}$ as $o_j$ . Then , we measure the similarity between $o_j$ ’ s by using the inner-product on $\mathbb{R}^{m}$ . In addition to the input source similarity between outputs , we model the compatibility between input sources , represented as a symmetric matrix $A_c \in \mathbb{R}_{+}^{m \times m}$ . Specifically , $A_c [ i_1 , i_2 ]$ quantifies the degree to which input $i_1$ and $i_2$ are suitable to be mixed together .
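The saliency measure and unary costs described above can be sketched as follows in a PyTorch-style snippet. The patch-pooling step, the patch size parameter, and the function names are assumptions for illustration; the text only specifies that saliency is the $\ell_2$ norm of the input gradient across channels and that the unary cost is its negative.

```python
import torch
import torch.nn.functional as F

def saliency_maps(model, x, y):
    """Saliency as the l2 norm of the loss gradient w.r.t. the input, taken across
    channels (Simonyan et al., 2013), as used for the unary cost in the text.

    x: (m, C, H, W) inputs, y: (m,) labels.  Returns (m, H, W) non-negative maps.
    No extra saliency network is needed; a single backward pass suffices.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return grad.norm(p=2, dim=1)                 # l2 norm over the channel dimension

def unary_costs(sal, patch):
    """Pool the saliency maps to n = (H/patch) * (W/patch) locations and negate,
    giving the cost vectors c_k in R^m (lower cost = more salient).
    Assumes H and W are divisible by the patch size."""
    pooled = F.avg_pool2d(sal.unsqueeze(1), kernel_size=patch).squeeze(1)  # (m, H', W')
    m = pooled.shape[0]
    return -pooled.reshape(m, -1)                # (m, n); column k is the cost vector c_k
```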
In summary , we use the inner-product on $A = ( 1 - \omega ) I + \omega A_c$ for $\omega \in [ 0 , 1 ]$ , resulting in a supermodular penalty term . Note that , by minimizing $\langle o_j , o_{j'} \rangle_A = o_j^{\top} A o_{j'}$ , $\forall j \neq j'$ , we penalize output mixup examples with similar input sources and encourage each individual mixup example to have high compatibility within . In this work , we measure the distance between locations of salient objects in each input and use the distance matrix $A_c [ i , j ] = \| \arg\max_k s_i [ k ] - \arg\max_k s_j [ k ] \|_1$ , where $s_i$ is the saliency map of the ith input and k is a location index ( e.g. , k is a 2-D index for image data ) . From now on , we denote this inner-product term as the compatibility term . Over-penalization The conventional mixup methods perform as many mixups as the number of examples in a given mini-batch . In our setting , this is the case when $m = m'$ . However , the compatibility penalty between outputs is influenced by the pigeonhole principle . For example , suppose the first output consists of two inputs . Then , the inputs must be used again for the remaining $m' - 1$ outputs , or only $m - 2$ inputs can be used . In the latter case , the number of available inputs ( $m - 2$ ) is less than the number of outputs ( $m' - 1$ ) , and thus , the same input must be used more than twice . Empirically , we found that the compatibility term above over-penalizes the optimization so that a substantial portion of outputs are returned as singletons without any mixup . To mitigate the over-penalization issue , we apply clipping to the compatibility penalty term . Specifically , we model the objective so that no extra penalty occurs when the compatibility among outputs is below a certain level . Now we present our main objective as follows :

$$z^{*} = \operatorname*{argmin}_{z_{j,k} \in \mathcal{L}^{m} , \ \| z_{j,k} \|_1 = 1} f ( z ) , \quad \text{where}$$

$$f ( z ) := \sum_{j=1}^{m'} \sum_{k=1}^{n} c_k^{\top} z_{j,k} + \beta \sum_{j=1}^{m'} \sum_{( k , k' ) \in \mathcal{N}} \left( 1 - z_{j,k}^{\top} z_{j,k'} \right) + \gamma \max\Biggl( \tau , \underbrace{\sum_{j=1}^{m'} \sum_{j' \neq j}^{m'} \Bigl( \sum_{k=1}^{n} z_{j,k} \Bigr)^{\top} A \Bigl( \sum_{k=1}^{n} z_{j',k} \Bigr)}_{= f_c ( z )} \Biggr) - \eta \sum_{j=1}^{m'} \sum_{k=1}^{n} \log p ( z_{j,k} ) . \quad ( 1 )$$

In Figure 2 , we describe the properties of the BP optimization problem of Equation ( 1 ) and statistics of the resulting mixup data . Next , we verify the supermodularity of the compatibility term . We first extend the definition of the submodularity of a multi-label function as follows ( Windheuser et al. , 2012 ) . Definition 1 . For a given label set $\mathcal{L}$ , a function $s : \mathcal{L}^{m} \times \mathcal{L}^{m} \rightarrow \mathbb{R}$ is pairwise submodular , if $\forall x , x' \in \mathcal{L}^{m}$ , $s ( x , x ) + s ( x' , x' ) \leq s ( x , x' ) + s ( x' , x )$ . A function s is pairwise supermodular , if $-s$ is pairwise submodular . Proposition 1 . The compatibility term $f_c$ in Equation ( 1 ) is pairwise supermodular for every pair of $( z_{j_1,k} , z_{j_2,k} )$ if A is positive semi-definite . Proof . See Appendix B.1 . Finally note that $A = ( 1 - \omega ) I + \omega A_c$ , where $A_c$ is a symmetric matrix . By using the spectral decomposition , $A_c$ can be represented as $U D U^{\top}$ , where D is a diagonal matrix and $U^{\top} U = U U^{\top} = I$ . Then , $A = U ( ( 1 - \omega ) I + \omega D ) U^{\top}$ , and thus for small $\omega > 0$ , we can guarantee A to be positive semi-definite .
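A small NumPy sketch of the compatibility matrix construction and the positive semi-definiteness condition discussed above is given below. The choice of $\omega$ and the numerical tolerance are illustrative assumptions; the text only requires that $\omega$ be small enough for $A$ to be positive semi-definite.

```python
import numpy as np

def compatibility_matrix(saliency, omega=0.1):
    """Build A = (1 - omega) * I + omega * A_c from per-input saliency maps.

    saliency: (m, H, W) array.  A_c[i, j] is the l1 distance between the argmax
    (most salient) locations of inputs i and j, as described in the text.
    """
    m = saliency.shape[0]
    flat_idx = saliency.reshape(m, -1).argmax(axis=1)
    coords = np.stack(np.unravel_index(flat_idx, saliency.shape[1:]), axis=1)  # (m, 2)
    a_c = np.abs(coords[:, None, :] - coords[None, :, :]).sum(axis=-1).astype(float)
    return (1.0 - omega) * np.eye(m) + omega * a_c

def is_psd(a, tol=1e-8):
    """Numerical check of positive semi-definiteness via the smallest eigenvalue.
    The text argues A is PSD for sufficiently small omega; this verifies it for a
    concrete omega value."""
    return np.linalg.eigvalsh(a).min() >= -tol
```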
This paper proposes a new batch mixup method, co-mixup, to improve the networks’ generalization performance and robustness. It formulates the construction of a batch of mixup data by maximizing the data saliency measure of each individual mixup data and the supermodular diversity among the constructed mixup data. An iterative submodular minimization algorithm is used to solve the proposed problem through approximation. Promising empirical performance is reported on several tasks.
SP:a9be89f746c794d25c46d2da1feb6d06f93eb056
This paper proposes a new mixup method that encourages diversity among the samples mixed from a minibatch of data in addition to saliency of each mixed sample. The authors formulate two objectives: 1. a BP set function (submodular + supermodular), and 2. a submodular relaxation obtained by modularizing the supermodular component. Then they solve this problem approximately with coordinate descent by modularizing wrt the update coordinate at each step. This approach outperforms mixup baselines on image classification and several other tasks (calibration, object localization, and robustness).
SP:a9be89f746c794d25c46d2da1feb6d06f93eb056
Rewriting by Generating: Learn Heuristics for Large-scale Vehicle Routing Problems
1 INTRODUCTION . Large-Scale Vehicle Routing Problems ( VRPs ) are important combinatorial optimization problems defined upon an enormous distribution of customer nodes , usually more than a thousand . An efficient and high-quality solution to large-scale VRPs is critical to many real-world applications . Meanwhile , most existing works focus on finding near-optimal solutions with no more than a hundred customers because of the computational complexity ( Laporte , 1992 ; Golden et al. , 2008 ; Braekers et al. , 2016 ) . Owing to the NP-hard nature of VRPs , the exponential expansion of the solution space makes solving a large-scale instance much more difficult than solving a small-scale one . Therefore , providing effective and efficient solutions for large-scale VRPs is a challenging problem ( Fukasawa et al. , 2006 ) . Current algorithms proposed for routing problems can be divided into traditional non-learning based heuristics and reinforcement learning ( RL ) based models . Many routing solvers involve heuristics as their core algorithms , for instance , ant colony optimization ( Gambardella et al. , 1999 ) and LKH3 ( Helsgaun , 2017 ) , which can find a near optimal solution by greedy exploration . However , they become inefficient when the problem scale grows . Apart from traditional heuristics , RL based VRPs solvers have been widely studied recently to find more efficient and effective solutions ( Dai et al. , 2017 ; Nazari et al. , 2018 ; Bello et al. , 2017 ; Kool et al. , 2019 ; Chen & Tian , 2019 ; Lu et al. , 2020 ) . Thanks to the learning manner that takes every feedback from learning attempts as signals , RL based methods rely on few hand-crafted rules and thus can be widely used in different customer distributions without human intervention and expert knowledge . Besides , these RL methods benefit from a pre-training process allowing them to infer solutions for new instances much faster than traditional heuristics . However , current RL agents are still insufficient to learn a feasible policy and generate solutions directly on large-scale VRPs due to the vast solution space , which is usually $N!$ for N customers . More specifically , the solution space of a large-scale VRPs with 1000 customers is $e^{2409}$ times larger than that of a small-scale one with only 100 customers . Consequently , the complexity makes it difficult for the agent to fully explore the space and for the model to learn useful knowledge in large-scale VRPs . To avoid the explosion of solution space in large-scale VRPs , we consider leveraging the classic Divide-and-Conquer idea to decompose the enormous scale of the original problem . In particular , we divide the large-scale customer distribution into small-scale ones and then generate individual regional solutions to reduce the problem complexity ( codes and data will be released at https://github.com/RBG4VRPs/Rewriting-By-Generating ) . However , how to obtain a refined region division where the local VRPs can be handled effectively and how to coordinate iterations between global and local optimization efficiently remain two challenges for our VRPs solver . To tackle those two challenges , we propose an RL-based framework , named Rewriting-by-Generating ( RBG ) , to solve large-scale VRPs . The framework adopts a hierarchical RL structure , which consists of a `` Generator '' and a `` Rewriter '' . Firstly , we divide customers into regions and use an elementary RL-based VRPs solver to solve them locally , known as the `` Generation '' process .
After that , from a global perspective , a special `` Rewriting '' process is designed based on all regional generations , which rewrites the previous solution with new divisions and the corresponding new regional VRPs results . Within each rewriting step , we select and merge two regions into a hyper-region , and then further divide it into two new sub-regions according to the hyper-regional VRPs solution . By doing this , the problem is decomposed into pieces that can be solved efficiently using regional RL-based solvers , while the solution quality is still preserved and continuously improved by the rewriter . Extensive experiments demonstrate that our RBG framework achieves significant performance in a much more efficient manner . It has a significant advantage in solution quality over other RL-based methods , outperforms the state-of-the-art LKH3 ( Helsgaun , 2017 ) by 2.43 % at the problem size of $N = 2000$ , and can infer solutions about 100 times faster . Moreover , its superiority over other methods grows as the problem scale increases . Notations : We introduce some fundamental notations of large-scale VRPs , while the complete formulation is presented in the Appendix . Let $G ( V , E )$ denote the entire graph of all customers and the depot . Specifically , $V = \{ v_0 , v_1 , \ldots , v_i , \ldots , v_N \}$ , where $v_0$ denotes the depot , and $v_i$ ( $1 \leq i \leq N$ ) denotes the i-th customer with its location $( x_i , y_i )$ and its demand $d_i$ . The edge $e_{i,j}$ , written $E ( v_i , v_j )$ in another manner , represents the traveling distance between $v_i$ and $v_j$ . Within the RBG framework , the generated regional VRPs solution $\pi_k = \{ v_{k,0} , v_{k,1} , v_{k,2} , \ldots , v_{k,N_k} \}$ of the divided region $G_k$ has a corresponding traveling cost $C ( \pi_k ) = \sum_{i=0}^{N_k} E ( v_{k,i} , v_{k,i+1} )$ . The entire solution of all customers is denoted by $\pi$ . 2 RELATED WORK . We discuss previous works which are related to our research in the following two directions : Traditional Heuristics . Since exact methods ( Laporte , 1992 ; Laporte & Nobert , 1987 ; Holland , 1992 ; Baldacci et al. , 2010 ) can hardly solve VRPs within a reasonable time due to the high computational complexity , researchers developed heuristics , i.e. , non-exact methods , to find approximate solutions instead . Tabu search is one of the old metaheuristics ( Glover , 1990b ; a ; Gendreau et al. , 1994 ; Battiti & Tecchiolli , 1994 ) , which keeps searching for new solutions in the neighborhood of the current solution . Instead of focusing on improving merely one solution , genetic algorithms operate on a population of solutions ( Goldberg , 1989 ; Holland , 1992 ) . They construct new structures continuously based on parent structures . Instead of treating all objectives to be optimized together , ant colony optimization , another widely accepted solver , utilizes several ant colonies to optimize different functions : the number of vehicles , the total distance , and others ( Gambardella et al. , 1999 ; Dorigo et al. , 2006 ; Dorigo & Di Caro , 1999 ) . Meanwhile , ruin-and-recreate search methods keep ruining parts of the current solution and reconstructing them to build better solutions ( Schrimpf et al. , 2000 ) . This helps to expand the exploration space and avoid local optima . Among these categories , LKH3 is a state-of-the-art heuristic solver that empirically finds optimal solutions ( Helsgaun , 2017 ) .
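To make the notation above concrete, the following is a small Python sketch of the regional traveling cost $C ( \pi_k )$. Euclidean edge lengths and a tour that closes back to the depot are assumptions for illustration; the text leaves the distance function abstract as $E ( v_i , v_j )$.

```python
import math

def route_cost(route, coords):
    """Traveling cost C(pi_k) of one regional solution: the sum of edge lengths along
    the visiting order, returning to the starting node (the depot).

    route: list of node indices beginning with the depot.
    coords: mapping from node index to its (x, y) location.
    """
    total = 0.0
    for a, b in zip(route, route[1:] + route[:1]):   # close the tour back to the depot
        (xa, ya), (xb, yb) = coords[a], coords[b]
        total += math.hypot(xa - xb, ya - yb)        # Euclidean E(v_a, v_b), assumed
    return total
```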
Although these heuristics , compared to exact methods , can improve searching efficiency , they are still much too time-consuming when applied to large-scale VRPs with the acceptable performance required , and may fail to respond to any real-time solution requests . RL based VRPs Solutions . Since the learning manner of reinforcement learning allows the agent model to directly infer solutions based on a pre-trained model with much shorter computation time , RL becomes a compelling direction on solving combinatorial optimizations . It has been successfully applied in VRPs particularly ( Bello et al. , 2017 ; Nazari et al. , 2018 ; Kool et al. , 2019 ) . Vinyals et al . ( 2015 ) was the first to adopt deep learning in combinatorial optimizations by a novel Pointer Network model . Inspired by this , Bello et al . ( 2017 ) proposed to use RL to learn model parameters as an optimal strategy instead of relying on ground-truth in a supervised learning way , which demonstrates the effectiveness on TSP and the knapsack problem . Nazari et al . ( 2018 ) further followed the idea to solve VRPs with attention mechanism as augmentation , and Kool et al . ( 2019 ) solved more generalized combinatorial optimization problems . Other than using the idea of PointerNetwork , Dai et al . ( 2017 ) develops their method over graphs via Q-learning ( Sutton & Barto , 2018 ) , so that the solutions could have better generalization ability . Chen & Tian ( 2019 ) proposed a local rewriting rule that keeps rewriting the local components of the current situation via a Q-Actor-Critic training process ( Sutton & Barto , 2018 ) . Lu et al . ( 2020 ) further developed a Learn-to-Iterate structure that not only improves the solution exploration but also generates perturbations to avoid local optimum . This is the first machine learning framework that outperforms LKH3 on CVRPs ( capacitated VRPs ) , both in computation time and solution quality . However , these existing RL based methods only achieve promising results without any hand-craft rules and expertise at small scales with usually no more than a hundred customers . The proposed models can not be trained for thousand-customer-level VRPs because the state space and action space extend exponentially as the number of customers increases , and it will be hard for the model to learn useful route generation policy . In contrast , we propose an RL based framework formed upon the classical idea of Divide-and-Conquer to solve the large-scale challenge . 3 REWRITING-BY-GENERATING . Figure 1 shows the overview structure of our proposed framework , named Rewriting-by-Generating ( RBG ) . Along with the fundamental idea of Divide-and-Conquer to decompose the enormous problem scale as discussed previously , we aim at dividing the total customers into separate regions and generate near-optimal solutions individually . To achieve this , we design a hierarchical RL structure including two agents which take different functions . First , to refine and obtain more reasonable division results , we design the `` Rewriting '' process which keeps up- dating new divisions by rewriting the previous ones and their corresponding regional solutions . The division quality is critical to the final solution since customers from different regions can not be scheduled upon the same route . Within each rewriting step , the agent selects and merges two regions based on their generated solutions . 
A new solution will be generated upon the merged hyper-region in the following step , and the rewriter will further divide the merged hyper-region back into two new regions . Since the exploration over different customer compositions is complicated and it is not trivial to measure the direct influence on the final performance in terms of traveling distance , an RL-based rewriter is a wise choice to learn the selection and merging action . We will show that the model converges and achieves high performance when the rewriter agent learns a stable division result in Section 4 . Second , to reach the global solution from the regional pieces , we employ an elementary VRPs generator that generates solutions to each region , known as the `` Generating '' process . Considering the time efficiency and the ability to learn to solve certain customer distributions when the division updates continuously , we also apply an RL agent to learn to generate solutions on these smaller-scale regions . Overall , we develop a hierarchical RL framework by coordinating the rewriter and the generator in two different scales iteratively . The rewriter updates the division and brings new customer distributions to the generator , while the solutions from the generator form a key component of the rewriter . From the technical perspective , it is worth noting that the merging-repartitioning operation that our rewriter conducts is also adopted in previous meta-heuristics ( Baker & Ayechew , 2003 ; Bell & McMullen , 2004 ) , while we replace the handcrafted heuristic with a learning agent . The global RL based rewriter is responsible for managing inter-regional exploration while the generator optimizes local results . The combination of Operations Research ( OR ) heuristics and RL guarantees an effective exploration process as well as high computation efficiency from the perspective of fast solution generation on inference instances . For brevity and clarity , we summarize the pipeline in five steps , as shown in Figure 2 . First , we cluster customers into several initialized hyper-regions . Second , we generate an initial regional VRPs solution in individual hyper-regions via our elementary VRPs generator . Third , we utilize the rewriter to partition the merged graph into two sub-regions . Then our rewriter picks up two sub-regions via the attention mechanism and merges them into one hyper-region , and then generates the hyper-regional solution of the merged hyper-region . After that , we go back to the third step to re-partition the hyper-region into sub-regions in a loop . Through this process , the partition becomes more reasonable and the solution gets better and better . Finally , after enough steps of rewriting , we are able to reach a good solution .
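A minimal Python sketch of the five-step pipeline just described is given below. The callables `solver`, `select_pair`, and `repartition` stand in for the paper's learned generator, attention-based rewriter selection, and route-based re-partitioning, respectively; the K-means initialization and the fixed number of rewriting steps are also illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbg_solve(coords, solver, select_pair, repartition, n_regions, n_steps):
    """High-level sketch of the Rewriting-by-Generating loop.

    solver(region_indices)        -> (route, cost) : the regional VRP "generator"
    select_pair(regions, costs)   -> (i, j)        : the "rewriter" region selection
    repartition(indices, route)   -> (idx_a, idx_b): split a hyper-region into two
    coords: (N, 2) customer coordinates (the depot is assumed to be handled by solver).
    """
    # Step 1: initialize regions by clustering customer coordinates.
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(coords)
    regions = [np.where(labels == r)[0] for r in range(n_regions)]

    # Step 2: generate an initial solution per region.
    solutions = [solver(idx) for idx in regions]

    for _ in range(n_steps):
        # Steps 3-4: pick two regions, merge them into a hyper-region, and solve it.
        i, j = select_pair(regions, [cost for _, cost in solutions])
        merged = np.concatenate([regions[i], regions[j]])
        merged_route, _ = solver(merged)

        # Step 5: split the hyper-region back into two sub-regions along the merged
        # route, re-solve each, and write the result back (one rewriting step).
        idx_a, idx_b = repartition(merged, merged_route)
        regions[i], regions[j] = idx_a, idx_b
        solutions[i], solutions[j] = solver(idx_a), solver(idx_b)

    return regions, solutions
```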
The paper presents a hierarchical reinforcement learning approach to solve large-scale vehicle routing problems (VRPs). A “rewriting agent” is responsible for dividing the customers into regions while a “generating agent” is responsible for computing the vehicle routes in each region, independently. The rewriting agent learns to score pairs of regions to be merged, using as a reward the reduction of VRP cost gained by the merge. The VRP costs are computed by the generating agent, which is based on the attention model of (Kool et al 2019), that is known to perform well for smaller scale VRPs.
SP:cd167a1412b5c09594275811b7efebc358e2d121
Rewriting by Generating: Learn Heuristics for Large-scale Vehicle Routing Problems
1 INTRODUCTION . The Large-Scale Vehicle Routing Problems ( VRPs ) is an important combinatorial optimization problem defined upon an enormous distribution of customer nodes , usually more than a thousand . An efficient and high-quality solution to large-scale VRPs is critical to many real-world applications . Meanwhile , most existing works focus on finding near-optimal solutions with only no more than a hundred customers because of the computational complexity ( Laporte , 1992 ; Golden et al. , 2008 ; Braekers et al. , 2016 ) . Originated from the NP-hard nature as a VRPs , the exponential expansion of solution space makes it much more difficult than solving a small-scale one . Therefore , providing effective and efficient solutions for large-scale VRPs is a challenging problem ( Fukasawa et al. , 2006 ) . Current algorithms proposed for routing problems can be divided into traditional non-learning based heuristics and reinforcement learning ( RL ) based models . Many routing solvers involve heuristics as their core algorithms , for instance , ant colony optimization ( Gambardella et al. , 1999 ) and LKH3 ( Helsgaun , 2017 ) , which can find a near optimal solution by greedy exploration . However , they become inefficient when the problem scale extends . Apart from traditional heuristics , RL based VRPs solvers have been widely studied recently to find more efficient and effective solutions ( Dai et al. , 2017 ; Nazari et al. , 2018 ; Bello et al. , 2017 ; Kool et al. , 2019 ; Chen & Tian , 2019 ; Lu et al. , 2020 ) . Thanks to the learning manner that takes every feedback from learning attempts as signals , RL based methods rely on few hand-crafted rules and thus can be widely used in different customer distributions without human intervention and expert knowledge . Besides , these RL methods benefit from a pre-training process allowing them to infer solutions for new instances much faster than traditional heuristics . However , current RL agents are still insufficient to learn a feasible policy and generate solutions directly on large-scale VRPs due to the vast solution space , which is usually N ! for N customers . More specifically , the solution space of a large-scale VRPs with 1000 customers is e2409 much larger than that of a small-scale one with only 100 customers . Consequently , the complexity makes the agent difficult to fully explore and makes the model hard to learn useful knowledge in large-scale VRPs . To avoid the explosion of solution space in large-scale VRPs , we consider leveraging the classic Divide-and-Conquer idea to decompose the enormous scale of the original problem . In particularly , 1Codes and data will be released at https : //github.com/RBG4VRPs/Rewriting-By-Generating dividing the large-scale customer distributions into small-scale ones and then generating individual regional solutions to reduce the problem complexity . However , how to obtain a refined region division where the local VRPs can be handled effectively and how to coordinate iterations between global and local optimization efficiently remain two challenges of our VRPs solvers . To tackle those two challenges above , we propose an RL-based framework , named Rewriting-byGenerating ( RBG ) , to solve large-scale VRPs . The framework adopts a hierarchical RL structure , which consists of a `` Generator '' and a `` Rewriter '' . Firstly , We divide customers into regions and use an elementary RL-based VRPs solver to solve them locally , known as the `` Generation '' process . 
After that , from a global perspective , a special `` Rewriting '' process is designed based on all regional generations , which rewrites the previous solution with new divisions and the corresponding new regional VRPs results . Within each rewriting step , we select and merge two regions into a hyperregion , and then further divide it into two new sub-regions according to the hyper-regional VRPs solution . By doing this , the problem scale is decomposed into pieces and the problem could be solved efficiently using regional RL-based solvers , and can still preserve the solution quality which is improved by the rewriter continuously . Extensive experiments demonstrate that our RBG framework achieves significant performance in a much more efficient manner . It has a significant advantage on solution quality to other RL-based methods , and outperforms the state-of-the-art LKH3 ( Helsgaun , 2017 ) , by 2.43 % with the problem size of N = 2000 and could infer solutions about 100 times faster . Moreover , it also has a growing superiority to other methods when the problem scale increases . Notations : We introduce some fundamental notations of large-scale VRPs , while the complete formulation is presented in the Appendix . Let G ( V , E ) denote the entire graph of all customers and the depot . Specifically , V = { v0 , v1 , ... , vi , ... , vN } , where v0 denotes the depot , and vi ( 1 ≤ i ≤ N ) denotes the i-th customer with its location ( xi , yi ) and its demand di . The edge ei , j , or E ( vi , vj ) in another manner represents the traveling distance between vi and vj . Within the RBG framework , the generated regional VRPs solution πk = { vk,0 , vk,1 , vk,2 , ... , vk , Nk } of the divided region Gk has a corresponding traveling cost C ( πk ) = ∑Nk i=0E ( vk , i , vk , i+1 ) . The entire solution of all customers is denoted by π . 2 RELATED WORK . We discuss previous works which are related to our research in the following two directions : Traditional Heuristics . Since the exact methods ( Laporte , 1992 ; Laporte & Nobert , 1987 ; Holland , 1992 ; Baldacci et al. , 2010 ) are almost impossible to solve VRPs within a reasonable time due to the high computation complexity , researchers developed heuristics , i.e. , non-exact methods , to find approximation solutions instead . Tabu search is one of the old metaheuristics ( Glover , 1990b ; a ; Gendreau et al. , 1994 ; Battiti & Tecchiolli , 1994 ) , which keeps searching for new solutions in the neighborhood of the current solution . Instead of focusing on improving merely one solution , genetic algorithms operate in a series of solutions ( Goldberg , 1989 ; Holland , 1992 ) . It constructs new structures continuously based on parent structures . Instead of treating objectives to be optimized altogether , ant colony optimizations as another widely accepted solver , utilize several ant colonies to optimize different functions : the number of vehicles , the total distance and others ( Gambardella et al. , 1999 ; Dorigo et al. , 2006 ; Dorigo & Di Caro , 1999 ) . Meanwhile , recreate search methods keep constructing the current solution and ruining the current ones to build better solutions . ( Schrimpf et al. , 2000 ) . This helps to expand the exploration space to prevent the local optimum . Among these categories , LKH3 is a state-of-the-art heuristic solver that empirically finds optimal solutions ( Helsgaun , 2017 ) . 
Although these heuristics , compared to exact methods , can improve searching efficiency , they are still much too time-consuming when applied to large-scale VRPs with the acceptable performance required , and may fail to respond to any real-time solution requests . RL based VRPs Solutions . Since the learning manner of reinforcement learning allows the agent model to directly infer solutions based on a pre-trained model with much shorter computation time , RL becomes a compelling direction on solving combinatorial optimizations . It has been successfully applied in VRPs particularly ( Bello et al. , 2017 ; Nazari et al. , 2018 ; Kool et al. , 2019 ) . Vinyals et al . ( 2015 ) was the first to adopt deep learning in combinatorial optimizations by a novel Pointer Network model . Inspired by this , Bello et al . ( 2017 ) proposed to use RL to learn model parameters as an optimal strategy instead of relying on ground-truth in a supervised learning way , which demonstrates the effectiveness on TSP and the knapsack problem . Nazari et al . ( 2018 ) further followed the idea to solve VRPs with attention mechanism as augmentation , and Kool et al . ( 2019 ) solved more generalized combinatorial optimization problems . Other than using the idea of PointerNetwork , Dai et al . ( 2017 ) develops their method over graphs via Q-learning ( Sutton & Barto , 2018 ) , so that the solutions could have better generalization ability . Chen & Tian ( 2019 ) proposed a local rewriting rule that keeps rewriting the local components of the current situation via a Q-Actor-Critic training process ( Sutton & Barto , 2018 ) . Lu et al . ( 2020 ) further developed a Learn-to-Iterate structure that not only improves the solution exploration but also generates perturbations to avoid local optimum . This is the first machine learning framework that outperforms LKH3 on CVRPs ( capacitated VRPs ) , both in computation time and solution quality . However , these existing RL based methods only achieve promising results without any hand-craft rules and expertise at small scales with usually no more than a hundred customers . The proposed models can not be trained for thousand-customer-level VRPs because the state space and action space extend exponentially as the number of customers increases , and it will be hard for the model to learn useful route generation policy . In contrast , we propose an RL based framework formed upon the classical idea of Divide-and-Conquer to solve the large-scale challenge . 3 REWRITING-BY-GENERATING . Figure 1 shows the overview structure of our proposed framework , named Rewriting-by-Generating ( RBG ) . Along with the fundamental idea of Divide-and-Conquer to decompose the enormous problem scale as discussed previously , we aim at dividing the total customers into separate regions and generate near-optimal solutions individually . To achieve this , we design a hierarchical RL structure including two agents which take different functions . First , to refine and obtain more reasonable division results , we design the `` Rewriting '' process which keeps up- dating new divisions by rewriting the previous ones and their corresponding regional solutions . The division quality is critical to the final solution since customers from different regions can not be scheduled upon the same route . Within each rewriting step , the agent selects and merges two regions based on their generated solutions . 
A new solution will be generated on the merged hyper-region in the following step, and the rewriter will further divide the merged hyper-region back into two new regions. Since the exploration of different customer compositions is complicated and it is not trivial to measure the direct influence on the final performance in terms of traveling distance, an RL-based rewriter is a natural choice for learning the selection and merging actions. We will show in Section 4 that the model converges and achieves high performance once the rewriter agent learns a stable division. Second, to reach the global solution from the regional pieces, we employ an elementary VRP generator that produces a solution for each region, known as the "Generating" process. Considering the time efficiency and the ability to learn to solve particular customer distributions as the division is updated continuously, we also apply an RL agent to learn to generate solutions on these smaller-scale regions. Overall, we develop a hierarchical RL framework by coordinating the rewriter and the generator at two different scales iteratively. The rewriter produces new divisions and brings new customer distributions to the generator, while the solutions from the generator form a key input to the rewriter. From a technical perspective, it is worth noting that the merging-repartitioning operation our rewriter conducts is also adopted in previous meta-heuristics (Baker & Ayechew, 2003; Bell & McMullen, 2004), while we replace the handcrafted heuristic with a learning agent. The global RL-based rewriter is responsible for managing inter-regional exploration while the generator optimizes local results. The combination of Operations Research (OR) heuristics and RL ensures an effective exploration process as well as high computational efficiency, from the perspective of fast solution generation on inference instances. For brevity and clarity, we summarize the pipeline as five steps, as shown in Figure 2. First, we cluster customers into several initialized hyper-regions. Second, we generate an initial regional VRP solution in each hyper-region via our elementary VRP generator. Third, we utilize the rewriter to partition the merged graph into two sub-regions. Then our rewriter picks two sub-regions via the attention mechanism, merges them into one hyper-region, and generates the hyper-regional solution of the merged hyper-region. After that, we go back to the third step to re-partition the hyper-region into sub-regions, in a loop. Through this process, the partition becomes more reasonable and the solution keeps improving. Finally, after enough rewriting steps, we reach a good solution.
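The following is a minimal Python sketch of the five-step Rewriting-by-Generating loop described above. The callables cluster_init, generate_solution, select_two_regions, and repartition stand in for the K-means-style initialization, the RL-based regional generator, and the attention-based rewriter; their names and signatures are our own illustration, not the authors' API, and regions are assumed to be plain lists of customers.

def rbg_solve(customers, cluster_init, generate_solution, select_two_regions,
              repartition, num_rewrites=100):
    # Step 1: cluster customers into initial regions (e.g., with K-means).
    regions = cluster_init(customers)
    # Step 2: the elementary RL generator solves each region independently.
    solutions = [generate_solution(r) for r in regions]
    for _ in range(num_rewrites):
        # The rewriter picks two regions (attention over the regional solutions) ...
        i, j = select_two_regions(regions, solutions)
        # ... merges them into one hyper-region and re-solves it ...
        hyper_region = regions[i] + regions[j]
        hyper_solution = generate_solution(hyper_region)
        # ... then re-partitions the hyper-region into two new sub-regions
        # according to the hyper-regional solution, and the loop repeats.
        regions[i], regions[j] = repartition(hyper_region, hyper_solution)
        solutions[i] = generate_solution(regions[i])
        solutions[j] = generate_solution(regions[j])
    return regions, solutions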
An RL-based method, called Rewriting-by-Generating (RBG), is proposed to solve large-scale VRPs. It builds on the idea of a hierarchical RL agent, which consists of two parts: a "Generator" and a "Rewriter". In the generation process, the graph is divided into several regions and, in each region, an RL algorithm runs to find the best route. Then, the rewriter takes the solutions of all generators and tries to connect them, with the goal of combining them into a shorter global route. To this end, the rewriter merges pairs of sub-problems and then divides each merged problem into two new sub-problems and solves each again. Doing this helps decrease the route length. This dividing and merging is learned by an RL agent (the outer agent in the hierarchical RL) so that the rewriter learns when and how to do it. The rewriter uses an attention mechanism to choose two parts of the merged routes, and then obtains a new solution for each part using the inner agent. To get the initial sub-problems, K-means clustering is used to form sub-problems of about 100 nodes. In the evaluations, CVRPs of size 500, 1000, and 2000 are considered. The results are compared to LKH3 and Google OR-Tools, along with other RL algorithms. LKH3 slightly outperforms RBG in terms of tour length on problems with 500 and 1000 nodes, though it takes much longer to produce a solution.
SP:cd167a1412b5c09594275811b7efebc358e2d121
A teacher-student framework to distill future trajectories
1 INTRODUCTION. The ability to learn models of the world has long been argued to be an important ability of intelligent agents. An open and actively researched question is how to learn world models at the right level of abstraction. This paper argues, as others have before, that model-based and model-free methods lie on a spectrum in which advantages and disadvantages of either approach can be traded off against each other, and that there is an optimal compromise for every task. Predicting future observations allows extensive use of all observations from previous experiences during training, and to swiftly transfer to a new reward if the learned model is accurate. However, due to partial observability, stochasticity, irrelevant dynamics and compounding errors in planning, model-based methods tend to be outperformed asymptotically (Pong et al., 2018; Chua et al., 2018). On the other end of the spectrum, purely model-free methods use the scalar reward as the only source of learning signal. By avoiding the potentially impossible task of explicitly modeling the environment, model-free methods can often achieve substantially better performance in complex environments (Vinyals et al., 2019; OpenAI et al., 2019). However, this comes at the cost of extreme sample inefficiency, as only predicting rewards throws away useful information contained in the sequences of future observations. What is the right way to incorporate information from trajectories that are associated with the inputs? In this paper we take a step back: instead of trying to answer this question ourselves by hand-designing what information should be taken into consideration and how, we let a model learn how to make use of the data. Depending on what works well within the setting, the model should learn if and how to learn from the trajectories available at training time. We will adopt a teacher-student setting: a teacher network learns to extract relevant information from the trajectories, and distills it into target activations to guide a student network.1 A sketch of our approach can be found in Figure 1, next to prototypical computational graphs used to integrate trajectory information in most model-free and model-based methods. Future trajectories can be seen as a form of privileged information (Vapnik and Vashist, 2009), i.e., data available at training time which provides additional information but is not available at test time. 1Note that the term distillation is often used in the context of "distilling a large model into a smaller one" (Hinton et al., 2015), but in this context we talk about distilling a trajectory into vectors used as target activations. Contributions. The main contribution of this paper is the proposal of a generic method to extract relevant signal from privileged information, specifically trajectories of future observations. We present an instantiation of this approach called Learning to Distill Trajectories (LDT) and an empirical analysis of it. [Figure 1 diagram omitted; panels: (a) Model-free, (b) Vanilla model-based, (c) Auxiliary task, (d) Teacher.] Figure 1: Comparison of architectures. The data generator is a Markov reward process (no actions) with an episode length of n. x denotes the initial observation. y = ∑_i y_i is the n-step return (no bootstrapping).
x∗ = ( x∗1 , x ∗ 2 , ... , x ∗ n ) is the trajectory of observations ( privileged data ) . Model activations and predictions are displayed boxed . Losses are displayed as red lines . Solid edges denote learned functions . Dotted edges denote fixed functions . 2 RELATED WORK . Efficiently making use of signal from trajectories is an actively researched topic . The technique of bootstrapping in TD-learning ( Sutton , 1988 ) uses future observations to reduce the variance of value function approximations . However , in its basic form , bootstrapping provides learning signal only through a scalar bottleneck , potentially missing out on rich additional sources of learning signal . Another approach to extract additional training signal from observations is the framework of Generalized Value Functions ( Sutton et al. , 2011 ) , which has been argued to be able to bridge the gap between model-free and model-based methods as well . A similar interpretation can be given to the technique of successor representations ( Dayan , 1993 ) . A number of methods have been proposed that try to leverage the strengths of both model-free and model-based methods , among them Racanière et al . ( 2017 ) , who learn generative models of the environment and fuse predicted rollouts with a model-free network path . In a different line of research , Silver et al . ( 2017 ) and Oh et al . ( 2017 ) show that value prediction can be improved by incorporating dynamical structure and planning computation into the function approximators . Guez et al . ( 2019 ) investigate to what extent agents can learn implicit dynamics models which allow them to solve planning tasks effectively , using only model-free methods . Similarly to LDT , those models can learn their own utility-based state abstractions and can even be temporally abstract to some extent . One difference of these approaches to LDT is that they use reward as their only learning signal without making direct use of future observations when training the predictor . The meta-gradient approach presented in this paper can be used more generally for problems in the framework of learning using privileged information ( LUPI , ( Vapnik and Vashist , 2009 ; Lopez-Paz et al. , 2016 ) ) , where privileged information is additional context about the data that is available at training time but not at test time . Hindsight information such as the trajectories in a value-prediction task falls into this category . There are a variety of representation learning approaches which can learn to extract learning signal from trajectories . Jaderberg et al . ( 2016 ) demonstrate that the performance of RL agents can be improved significantly by training the agent on additional prediction and control tasks in addition to the original task . Du et al . ( 2018 ) use gradient similarity as a means to determine whether an auxiliary loss is helpful or detrimental for the downstream task . Oord et al . ( 2018 ) introduce a method based on contrastive learning . They , as well as multiple follow-up studies , show that the representations learned in this way are helpful for downstream tasks in a variety of settings . Buesing et al . ( 2018 ) present ways to learn efficient dynamical models which do not need to predict future observations at inference time . Recently , Schrittwieser et al . ( 2019 ) introduced an RL agent that learns an abstract model of the environment and uses it to achieve strong performance on several challenging tasks . 
Similarly to our motivation , their model is not required to produce future observations . Meta-learning approaches have recently been shown to be successful as a technique to achieve fast task adaptation ( Finn et al. , 2017 ) , strong unsupervised learning ( Metz et al. , 2019 ) , and to improve RL ( Xu et al. , 2018 ) . Similar to this paper in motivation is the recent work by Guez et al . ( 2020 ) which also investigates how privileged hindsight information can be leveraged for value estimation . The difference to LDT is how the trajectory information is incorporated . Their approach has the advantage of not needing second-order gradients . At the same time , LDT naturally avoids the problem of the label being easily predictable from the hindsight data — the teacher is trained to present it to the student in such a way that it empirically improves the student ’ s performance on held-out data . Veeriah et al . ( 2019 ) use meta-gradients to derive useful auxiliary tasks in the form of generalized value functions . In contrast , we use a teacher network that learns to provide target activations for a student neural network based on privileged information . 3 META-LEARNING A DYNAMICS TEACHER . Here we describe our approach of jointly learning a teacher and a student.2 While our approach applies to the generic setting of learning using privileged information ( Vapnik and Vashist , 2009 ) , here we will focus on the special case of a prediction task with an underlying dynamical system . 3.1 LEARNING TASK . We are considering learning problems in which we have to make a prediction about some property of the future state of a dynamical system , given observations up to the current state . Our method particularly applies to systems in which both the function that relates the current observation to the label as well as the function that predicts the next observation from the current one are hard to learn , making it difficult for both model-free and model-based methods respectively . To make the explanation more concrete , we will use the practical problem of medical decision-making as a running example to which we can relate the definitions we used , similar to a motivating example from Vapnik and Vashist ( 2009 ) : given the history of measurements ( biopsies , blood-pressure , etc . ) on a given patient and the treatment assignment , we want to predict whether the patient will recover or not . The input x ∈ X of our learning task is some observation of the system state st ∈ S before and including time step3 t ∈ Z . In our running example , st can be considered the detailed physical state of the patient , which is not directly observable . The observations x include potentially multi-modal data such as x-ray images , vital sign measurements , oncologist reports , etc . The system is governed by an unknown dynamical law f : S→ S — in our example , the dynamics are physical equations that determine the evolution of all cells in the body . The prediction target y ∈ Y is some function of a future state sT = fT−t ( st ) , separated from t by T − t time steps : y = g ( sT ) . In our running example , a prediction target could be the binary indicator of whether the patient will recover within some time frame . Note that T could vary from one example to the next . In addition to the initial observation , we have access to the trajectory x∗ = ( xτ ) τ=t+1 .. T at training ( but not test ) time . 
In our running example , the trajectory includes all measurements from the patient after the treatment decision has been made . This information is available in a dataset of past patients ( in hindsight ) , but not in any novel situation . 2Note that unlike in some related work , the teacher in our task is not a copy of the student network , but can have a completely different architecture . 3For simplicity , our dynamical system is time-discrete , but this assumption is not important for what follows . 3.2 SUPERVISION OF INTERNAL ACTIVATIONS . A straightforward approach to solve the learning task which takes into account the trajectory information , would be to train a state-space-model ( SSM ) f̂ , consisting of a dynamical model and a decoder . The SSM is trained to maximize the likelihood of the observed trajectories in the training set , conditioned on the observed initial observation . Ideally , the induced f̂ closely resembles f , such that at test time , we can use it to generate an estimate of the rollout and infer the label from it . A potential drawback of this approach is that learning a full SSM could be more difficult than necessary . There may be many details of the dynamics that are both difficult to model and unimportant for the classification tasks . One example for this is the precise timing of events . As argued by Neitz et al . ( 2018 ) ; Jayaraman et al . ( 2019 ) , there are situations in which it is easy to predict a sequence of events where each event follows a previous one , but hard to predict the exact timing of those events . Moreover , an SSM typically requires rendering observations at training time , which may be difficult to learn and computationally expensive to execute . In the running example from Section 3.1 , it seems challenging and wasteful to predict all future observations in detail , as it would require modeling a complicated distribution over data such as X-ray images or doctor reports written in natural language . Ideally , we would like a model to learn how to extract the relevant information from these data efficiently . We propose to relax the requirement of fitting the dynamics precisely . The teacher can decide to omit properties of the observations that are not needed and omit time steps that can be skipped . It could also change the order of computation and let the student compute independently evolving sub-mechanisms sequentially , even if they evolved in parallel in the actual data generating process . In addition to potentially simplifying the learning problem , this could have the additional benefit of gaining computational efficiency . For example , modeling detailed pixel observations may be computationally wasteful , as argued by Buesing et al . ( 2018 ) and Oord et al . ( 2018 ) .
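The following is a hedged Python sketch, using PyTorch-style modules, of how a student could be supervised on teacher-provided target activations as described above. The student/teacher interfaces (a return_activations flag, a list of matched layers) and the weighting factor alpha are assumptions for illustration only; in the actual method the teacher itself is trained with meta-gradients so that matching its targets improves the student's performance on held-out data, which is not shown here.

import torch
import torch.nn.functional as F

def ldt_student_loss(student, teacher, x, trajectory, y, alpha=1.0):
    # The student sees only the current observation x and predicts the label y;
    # `return_activations=True` is an assumed interface that also returns the
    # hidden activations h_1..h_L to be supervised.
    activations, y_hat = student(x, return_activations=True)
    # The teacher sees the privileged future trajectory x* and emits one target
    # vector per supervised student layer. (The teacher's own parameters are
    # updated by meta-gradients through the student update; omitted in this sketch.)
    targets = teacher(trajectory)
    task_loss = F.mse_loss(y_hat, y)  # cross-entropy would be used for classification
    match_loss = sum(F.mse_loss(h, t) for h, t in zip(activations, targets))
    return task_loss + alpha * match_loss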
This paper proposes a learning framework for predicting the labels of dynamic systems. Unlike existing model-based approaches and model-free approaches, the proposed model takes a middle ground and uses a knowledge distillation-based framework. It uses a teacher model to learn to interpret a trajectory of the dynamic system, and distills target activations for a student model to learn to predict the system label based only on the current observation.
SP:f4ff50a3da561f589df8ba890626f51efb5dcd1d
A teacher-student framework to distill future trajectories
This paper proposes a teacher-student training scheme to incorporate the useful information of trajectory to improve the predictive performance of model-free methods. The teacher network tries to "guide" the student network at the training stage by presenting an interpretation of the trajectory. The guidance is implemented by adding to the loss function a regularization term that penalizes the "distance" between the teacher's output and the hidden states of the student. The proposed method was tested and compared to other model-free methods.
SP:f4ff50a3da561f589df8ba890626f51efb5dcd1d
Multi-Level Local SGD: Distributed SGD for Heterogeneous Hierarchical Networks
1 INTRODUCTION . Stochastic Gradient Descent ( SGD ) is a key algorithm in modern Machine Learning and optimization ( Amari , 1993 ) . To support distributed data as well as reduce training time , Zinkevich et al . ( 2010 ) introduced a distributed form of SGD . Traditionally , distributed SGD is run within a huband-spoke network model : a central parameter server ( hub ) coordinates with worker nodes . At each iteration , the hub sends a model to the workers . The workers each train on their local data , taking a gradient step , then return their locally trained model to the hub to be averaged . Distributed SGD can be an efficient training mechanism when message latency is low between the hub and workers , allowing gradient updates to be transmitted quickly at each iteration . However , as noted in Moritz et al . ( 2016 ) , message transmission latency is often high in distributed settings , which causes a large increase in overall training time . A practical way to reduce this communication overhead is to allow the workers to take multiple local gradient steps before communicating their local models to the hub . This form of distributed SGD is referred to as Local SGD ( Lin et al. , 2018 ; Stich , 2019 ) . There is a large body of work that analyzes the convergence of Local SGD and the benefits of multiple local training rounds ( McMahan et al. , 2017 ; Wang & Joshi , 2018 ; Li et al. , 2019 ) . Local SGD is not applicable to all scenarios . Workers may be heterogeneous in terms of their computing capabilities , and thus the time required for local training is not uniform . For this reason , it can be either costly or impossible for workers to train in a fully synchronous manner , as stragglers may hold up global computation . However , the vast majority of previous work uses a synchronous model , where all clients train for the same number of rounds before sending updates to the hub ( Dean et al. , 2012 ; Ho et al. , 2013 ; Cipar et al. , 2013 ) . Further , most works assume a hub-and-spoke model , but this does not capture many real world settings . For example , devices in an ad-hoc network may not all be able to communicate to a central hub in a single hop due to network or communication range limitations . In such settings , a multi-level communication network model may be beneficial . In flying ad-hoc networks ( FANETs ) , a network architecture has been proposed to improve scalability by partitioning the UAVs into mission areas ( Bekmezci et al. , 2013 ) . Here , clusters of UAVs have their own clusterheads , or hubs , and these hubs communicate through an upper level network , e.g. , via satellite . Multi-level networks have also been utilized in Fog and Edge computing , a paradigm de- ∗T . Castiglia , A. Das , and S. Patterson are with the Department of Computer Science , Rensselaer Polytechnic Institute , 110 8th St , Troy , NY 12180 , castit @ rpi.edu , dasa2 @ rpi.edu , sep @ cs.rpi.edu . signed to improve data aggregation and analysis in wireless sensor networks , autonomous vehicles , power systems , and more ( Bonomi et al. , 2012 ; Laboratory , 2017 ; Satyanarayanan , 2017 ) . Motivated by these observations , we propose Multi-Level Local SGD ( MLL-SGD ) , a distributed learning algorithm for heterogeneous multi-level networks . Specifically , we consider a two-level network structure . The lower level consists of a disjoint set of hub-and-spoke sub-networks , each with a single hub server and a set of workers . 
The upper level network consists of a connected , but not necessarily complete , hub network by which the hubs communicate . For example , in a Fog Computing application , the sub-network workers may be edge devices connected to their local data center , and the data centers act as hubs communicating over a decentralized network . Each subnetwork runs one or more Local SGD rounds , in which its workers train for a local training period , followed by model averaging at the sub-network ’ s hub . Periodically , the hubs average their models with neighbors in the hub network . We model heterogeneous workers using a stochastic approach ; each worker executes a local training iteration in each time step with a probability proportional to its computational resources . Thus , different workers may take different numbers of gradient steps within each local training period . Note since MLL-SGD averages every local training period , regardless of how many gradient steps each worker takes , slow workers do not slow algorithm execution . We prove the convergence of MLL-SGD for smooth and potentially non-convex loss functions . We assume data is distributed in an IID manner to all workers . Further , we analyze the relationship between the convergence error and algorithm parameters and find that , for a fixed step size , the error is quadratic in the number of local training iterations and the number of sub-network training iterations , and linear in the average worker operating rate . Our algorithm and analysis are general enough to encompass several variations of SGD as special cases , including classical SGD ( Amari , 1993 ) , SGD with weighted workers ( McMahan et al. , 2017 ) , and Decentralized Local SGD with an arbitrary hub communication network ( Wang & Joshi , 2018 ) . Our work provides novel analysis of a distributed learning algorithm in a multi-level network model with heterogeneous workers . The specific contributions of this paper are as follows . 1 ) We formalize the multi-level network model with heterogeneous workers , and we define the MLL-SGD algorithm for training models in such a network . 2 ) We provide theoretical analysis of the convergence guarantees of MLLSGD with heterogeneous workers . 3 ) We present an experimental evaluation that highlights our theoretical convergence guarantees . The experiments show that in multi-level networks , MLL-SGD achieves a marked improvement in convergence rate over algorithms that do not exploit the network hierarchy . Further , when workers have heterogeneous operating rates , MLL-SGD converges more quickly than algorithms that require all workers to execute the same number of training steps in each local training period . The rest of the paper is structured as follows . In Section 2 , we discuss related work . Section 3 introduces the system model and problem formulation . We describe MLL-SGD in Section 4 , and we present our main theoretical results in Section 5 . Proofs of these results are deferred to the appendix . We provide experimental results in Section 6 . Finally , we conclude in Section 7 . 2 RELATED WORK . Distributed SGD is a well studied subject in Machine Learning . Zinkevich et al . ( 2010 ) introduced parallel SGD in a hub-and-spoke model . Variations on Local SGD in the hub-and-spoke model have been studied in several works ( Moritz et al. , 2016 ; Zhang et al. , 2016 ; McMahan et al. , 2017 ) . Many works have provided convergence bounds of SGD within this model ( Wang et al. , 2019b ; Li et al. , 2019 ) . 
There is also a large body of work on decentralized approaches for optimization using gradient based methods , dual averaging , and deep learning ( Tsitsiklis et al. , 1986 ; Jin et al. , 2016 ; Wang et al. , 2019a ) . These previous works , however , do not address a multi-level network structure . In practice , workers may be heterogeneous in nature , which means that they may execute training iterations at different rates . Lian et al . ( 2017 ) addressed this heterogeneity by defining a gossipbased asynchronous SGD algorithm . In Stich ( 2019 ) , workers are modeled to take gradient steps at an arbitrary subset of all iterations . However , neither of these works address a multi-level network model . Grouping-SGD ( Jiang et al. , 2019 ) considers a scenario where workers can be clustered into groups , for example , based on their operating rates . Workers within a group train in a synchronous manner , while the training across different groups may be asynchronous . The system model differs significantly from that in MLL-SGD in that as the model parameters are partitioned vertically across multiple hubs , and workers communicate with every hub . Several recent works analyze Hierarchical Local SGD ( HL-SGD ) , an algorithm for training a model in a hierarchical network . Different from MLL-SGD , HL-SGD assumes the hub network topology is a hub-and-spoke and also that workers are homogeneous . Zhou & Cong ( 2019 ) and Liu et al . ( 2020 ) analyze the convergence error of HL-SGD , while Abad et al . ( 2020 ) analyzes convergence time . Unlike HL-SGD , MLL-SGD accounts for an arbitrary hub communication graph , and MLL-SGD algorithm execution does not slow down in the presence of heterogeneous worker operating rates . Several other works seek to encapsulate many variations of SGD under a single framework . Koloskova et al . ( 2020 ) created a generalized model that considers a gossip-based decentralized SGD algorithm where the communication network is time-varying . However , this work does not account for a multi-level network model nor worker heterogeneity . Wang et al . introduced the Cooperative SGD framework ( Wang & Joshi , 2018 ) , a model that includes communication reduction through local SGD steps and decentralized mixing between homogeneous workers . Cooperative SGD also allows for auxiliary variables . These auxiliary variables can be used to model SGD in a multi-level network , but only when sub-network averaging is immediately followed by hubs averaging with their neighbors in the hub network . Our model is more general ; it considers heterogeneous workers and it allows for an arbitrary number of averaging rounds within each sub-network between averaging rounds across sub-networks , which is more practical in multi-level networks where inter-hub communication is slow or costly . 3 SYSTEM MODEL AND PROBLEM FORMULATION . In this section , we introduce our system model , the objective function that we seek to minimize , and the assumptions we make about the function . We consider a set of D sub-networks D = { 1 , . . . , D } . Each sub-network d ∈ D has a single hub and a set of workersM ( d ) , with |M ( d ) | = N ( d ) . Workers inM ( d ) only communicate with their own hub and not with any other workers or hubs . We define the set of all workers in the system as M = ⋃D d=1M ( d ) . Let |M | = N . Each worker i holds a set S ( i ) of local training data . Let S = ⋃N i=1 S ( i ) . The set of all D hubs is denoted C. 
The hubs communicate with one another via an undirected , connected communication graph G = ( C , E ) . Let Nd = { j | ed , j ∈ E } denote the set of neighbors of the hub in sub-network d in the hub graph G. Let the model parameters be denoted by x ∈ Rn . Our goal is to find an x that minimizes the following objective function over the training set : F ( x ) = 1 | S | ∑ s∈S f ( x ; s ) ( 1 ) where f ( · ) is the loss function . The workers collaboratively minimize this loss function , in part by executing local iterations of SGD over their training sets . For each executed local iteration , a worker samples a mini-batch of data uniformly at random from its local data . Let ξ be a randomly sampled mini-batch of data and let g ( x ; ξ ) = 1|ξ| ∑ s∈ξ∇f ( x ; s ) be the mini-batch gradient . For simplicity , we use g ( x ) instead of g ( x ; ξ ) from here on . Assumption 1 . The objective function and the mini-batch gradients satisfy the following : 1a The objective function F : Rn → R is continuously differentiable , and the gradient is Lipschitz with constant L > 0 , i.e. , ‖∇F ( x ) −∇F ( y ) ‖2 ≤ L‖x− y‖2 for all x , y ∈ Rn . 1b The function F is lower bounded , i.e. , F ( x ) ≥ Finf > −∞ for all x ∈ Rn . 1c The mini-batch gradients are unbiased , i.e. , Eξ|x [ g ( x ) ] = ∇F ( x ) for all x ∈ Rn . 1d There exist scalars β ≥ 0 and σ ≥ 0 such that Eξ|x‖g ( x ) −∇F ( x ) ‖22 ≤ β||∇F ( x ) ||22+σ2 for all x ∈ Rn . Assumption 1a requires that the gradients do not change too rapidly , and Assumption 1b requires that our objective function is lower bounded by some Finf . Assumptions 1c and 1d assume that Algorithm 1 Multi-Level Local SGD 1 : Initialize : y ( d ) 1 for hubs d = 1 , . . . , D 2 : for k = 1 , . . . , K do 3 : parallel for d ∈ D do 4 : parallel for i ∈M ( d ) do 5 : x ( i ) k ← y ( d ) k . Workers receive updated model from hub 6 : for j = k , . . . , k + τ − 1 do 7 : x ( i ) k+1 ← x ( i ) k − ηg ( i ) k . Local iteration ( probabilistic ) 8 : end for 9 : end parallel for 10 : z ( d ) ← ∑ i∈M ( d ) v ( i ) x ( i ) k+1 . Hub d computes average of its workers ’ models 11 : if k mod q · τ = 0 then 12 : y ( d ) k+1 ← ∑ j∈N ( d ) Hj , dz ( j ) . Hub d averages its model with neighboring hubs 13 : else 14 : y ( d ) k+1 ← z ( d ) 15 : end if 16 : end parallel for 17 : end for the local data at each worker can be used as an unbiased estimate for the full dataset with the same bounded variance . These assumptions are common in convergence analysis of SGD algorithms ( e.g. , Bottou et al . ( 2018 ) ) .
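Below is a minimal numpy sketch of Algorithm 1 (MLL-SGD). The loop structure follows the pseudocode above, but the indexing is simplified: each outer iteration is one local training period of τ probabilistic worker steps, and hubs average with their neighbors every q periods. The gradient oracle, the worker weights v(i), the worker operating rates, and the hub mixing matrix H are inputs assumed to be supplied by the caller; all names are illustrative.

import numpy as np

def mll_sgd(grad, x0, workers_per_hub, H, v, p, lr=0.01, tau=5, q=2, periods=100, seed=0):
    # grad(x, i): stochastic mini-batch gradient of worker i at model x.
    # workers_per_hub[d]: list of worker ids in sub-network d.
    # H: D x D hub mixing matrix; H[j, d] is the weight hub d puts on hub j's model
    #    (zero for non-neighbors in the hub graph).
    # v[i]: averaging weight of worker i, assumed to sum to 1 within each sub-network.
    # p[i]: probability that worker i completes a local step in a given time slot.
    rng = np.random.default_rng(seed)
    D = len(workers_per_hub)
    y = [np.array(x0, dtype=float) for _ in range(D)]        # hub models y^(d)
    for period in range(1, periods + 1):
        z = []
        for d in range(D):
            avg = np.zeros_like(y[d])
            for i in workers_per_hub[d]:
                x = y[d].copy()                              # worker receives the hub model
                for _ in range(tau):                         # one local training period
                    if rng.random() < p[i]:                  # heterogeneous operating rate
                        x -= lr * grad(x, i)
                avg += v[i] * x                              # hub d averages its workers
            z.append(avg)
        if period % q == 0:                                  # inter-hub averaging round
            y = [sum(H[j, d] * z[j] for j in range(D)) for d in range(D)]
        else:
            y = z
    return y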
This paper proposes a new variant of the local SGD algorithm to make it more realistic. In particular, (1) it allows workers to perform different numbers of local steps, depending on their computational resources; (2) workers are organized in a multi-level structure. Workers connected to one central hub can synchronize frequently, while hubs communicate infrequently and in a decentralized manner.
SP:d2a8d90ecc5c406db6ffcd61e45dba647295a898
Multi-Level Local SGD: Distributed SGD for Heterogeneous Hierarchical Networks
This paper extends (Wang & Joshi, 2018) and proposes MLL-SGD for training models in hierarchical networks, where the network consists of multiple sub-networks and each sub-network contains multiple workers. At the level of sub-networks, models can be averaged across hubs. At the level of workers, the local copies of the model can be averaged within a sub-network; however, workers cannot communicate directly with workers from a different sub-network. In this setting, MLL-SGD is proved to enjoy certain convergence properties.
SP:d2a8d90ecc5c406db6ffcd61e45dba647295a898
Heteroskedastic and Imbalanced Deep Learning with Adaptive Regularization
1 INTRODUCTION . In real-world machine learning applications, even well-curated training datasets have various types of heterogeneity. Two main types of heterogeneity are: (1) data imbalance: the input or label distribution often has a long-tailed density, and (2) heteroskedasticity: the labels given inputs have varying levels of uncertainty across subsets of data, stemming from various sources such as the intrinsic ambiguity of the data or annotation errors. Many deep learning algorithms have been proposed for imbalanced datasets (e.g., see (Wang et al., 2017; Cao et al., 2019; Cui et al., 2019; Liu et al., 2019) and the references therein). However, heteroskedasticity, a classical notion studied extensively in the statistical community (Pintore et al., 2006; Wang et al., 2013; Tibshirani et al., 2014), has so far been under-explored in deep learning. This paper focuses on addressing heteroskedasticity and its interaction with data imbalance in deep learning. Heteroskedasticity is often studied in regression analysis and refers to the property that the distribution of the error varies across inputs. In this work, we mostly focus on classification, though the developed technique also applies to regression. Here, heteroskedasticity reflects how the uncertainty in the conditional distribution y | x, or the entropy of y | x, varies as a function of x. Real-world datasets are often heteroskedastic. For example, Li et al. (2017) shows that the WebVision dataset has a varying number of ambiguous or truly noisy examples across classes [2]. Conversely, we consider a dataset to be homoskedastic if every example is mislabeled with a fixed probability, as assumed by many prior theoretical and empirical works on label corruption (Ghosh et al., 2017; Han et al., 2018; Jiang et al., 2018; Mirzasoleiman et al., 2020). We note that varying uncertainty in y | x can come from at least two sources: the intrinsic semantic ambiguity of the input, and the (data-dependent) mislabeling introduced by the annotation process. Our approach can handle both types of noisy examples in a unified way, but for the sake of comparisons with past methods, we call them “ambiguous examples” and “mislabeled examples” respectively, and refer to both of them as “noisy examples”. [1] Code available at https://github.com/kaidic/HAR. [2] See Figure 4 of (Li et al., 2017); the number of votes for each example indicates the level of uncertainty of that example. Overparameterized deep learning models tend to overfit more to the noisy examples (Arpit et al., 2017; Frénay & Verleysen, 2013; Zhang et al., 2016). To address this issue, a common approach is to detect noisy examples by selecting those with large training losses, and then remove them from the (re-)training process. However, an input's training loss can also be large because the input is rare or ambiguous (Hacohen & Weinshall, 2019; Wang et al., 2019), as shown in Figure 1. Noise-cleaning methods can therefore fail to distinguish mislabeled from rare/ambiguous examples (see Section 3.1 for empirical evidence). Though dropping the former is desirable, dropping the latter loses important information. Another popular approach is reweighting methods that reduce the contribution of noisy examples in optimization.
However, failing to distinguish between mislabeled and rare/ambiguous examples makes the choice of weights tricky – mislabeled examples require small weights, whereas rare/ambiguous examples benefit from larger weights (Cao et al., 2019; Shu et al., 2019). We propose a regularization method that deals with noisy and rare examples in a unified way. We observe that mislabeled, ambiguous, and rare examples all benefit from stronger regularization (Hu et al., 2020; Cao et al., 2019). We apply a Lipschitz regularizer (Wei & Ma, 2019a;b) with varying regularization strength depending on the particular data point. Through theoretical analysis in the one-dimensional setting, we derive the optimal regularization strength for each training example. The optimal strength is larger for rarer and noisier examples. Our proposed algorithm, heteroskedastic adaptive regularization (HAR), first estimates the noise level and density of each example, and then optimizes a Lipschitz-regularized objective with input-dependent regularization strength provided by the theoretical formula. In summary, our main contributions are: (i) we propose to learn heteroskedastic imbalanced datasets under a unified framework, and theoretically study the optimal regularization strength on one-dimensional data. (ii) we propose an algorithm, heteroskedastic adaptive regularization (HAR), which applies stronger regularization to data points with high uncertainty and low density. (iii) we experimentally show that HAR achieves significant improvements over other noise-robust deep learning methods on simulated vision and language datasets with controllable degrees of data noise and data imbalance, as well as a real-world heteroskedastic and imbalanced dataset, WebVision. 2 ADAPTIVE REGULARIZATION FOR HETEROSKEDASTIC DATASETS . 2.1 BACKGROUNDS . We first introduce general nonparametric tools that we use in our analysis, and review the dependency of the optimal regularization strength on the sample size and noise level. Over-parameterized neural networks as nonparametric methods. We use nonparametric methods as a surrogate for neural networks because the two have been shown to be closely related. Recent work (Savarese et al., 2019) shows that the minimum-norm two-layer ReLU network that fits the training data is in fact a linear spline interpolation. Parhi & Nowak (2019) extend this result to a broader family of neural networks with a broader family of activations.

[Figure 3: A one-dimensional example with a three-layer neural network in a heteroskedastic and imbalanced regression setting; legend: Ground Truth, Weak Unif-reg, Strong Unif-reg, Adapt-reg (HAR). The curve in blue is the underlying ground truth and the dots are observations with heteroskedastic noise. This example shows that uniformly weak regularization overfits on noisy and rare data (on the right half), whereas uniformly strong regularization causes underfitting on the frequent and oscillating data (on the left half). The adaptive regularization does not underfit the oscillating data but still denoises the noisy data. We note that standard nonparametric methods such as cubic splines do not work here because they also use uniform regularization.]

Given a training dataset {(x_i, y_i)}_{i=1}^n, a penalized nonparametric method works as follows. Let F : R → R be a twice-differentiable model family.
We aim to fit the data with a smoothness penalty:

min_f (1/n) Σ_{i=1}^n ℓ(f(x_i), y_i) + λ ∫ (f′(x))² dx   (1)

Lipschitz regularization for neural networks. Lipschitz regularization has been shown to be effective for deep neural networks as well. Wei & Ma (2019a) prove a generalization bound for neural networks that depends on the Lipschitzness of each layer with respect to all intermediate layers on the training data, and show that, empirically, regularizing the Lipschitzness improves generalization. Sokolić et al. (2017) show similar results in data-limited settings. In Section 2.3, we extend the Lipschitz regularization technique to the heteroskedastic setting. Regularization strength as a function of noise level and sample size. Finally, we briefly review existing theoretical insights on the optimal choice of regularization strength. Generally, the optimal regularization strength for a given model family increases with the label noise level and decreases in the sample size. As a simple example, consider linear ridge regression min_θ (1/n) Σ_{i=1}^n (x_i^⊤ θ − y_i)² + λ‖θ‖², where x_i, θ ∈ R^d and y_i ∈ R. We assume y_i = x_i^⊤ θ* + ξ for some ground-truth parameter θ*, and ξ ∼ N(0, σ²). Then the optimal regularization strength is λ_opt = dσ² / (n‖θ*‖₂²). Results of a similar nature can also be found in nonparametric statistics (Wang et al., 2013; Tibshirani et al., 2014). 2.2 HETEROSKEDASTIC NONPARAMETRIC CLASSIFICATION ON ONE-DIMENSIONAL DATA . We consider a one-dimensional binary classification problem where X = [0, 1] ⊂ R and Y = {−1, 1}. We assume Y given X follows a logistic model with ground-truth function f*, i.e.,

Pr[Y = y | X = x] = 1 / (1 + exp(−y f*(x))).   (2)

The training objective is the cross-entropy loss plus Lipschitz regularization, i.e., f̂ = argmin_f L̂(f), where

L̂(f) := (1/n) Σ_{i=1}^n ℓ(f(x_i), y_i) + λ ∫₀¹ ρ(x) (f′(x))² dx,   (3)

where ℓ(a, y) = −log(1 + exp(−ya)), and ρ(x) is a smoothing parameter that is a function of the noise level and density at x. Let I(x) be the Fisher information conditioned on the input, i.e., I(x) := E[∇²_a ℓ(a, Y)|_{a=f*(X)} | X = x]. When (X, Y) follows the logistic model in equation 2,

I(x) = 1 / ((1 + exp(f*(x)))(1 + exp(−f*(x)))) = Var(Y | X = x).

Therefore, I(x) captures the aleatoric uncertainty at x. For example, when Y is deterministic conditioned on X = x, we have I(x) = 0, indicating perfect certainty. Define the test metric as the mean squared error on the test set {(x_i, y_i)}_{i=1}^n, i.e.,[1]

MSE(f̂) := E_{{(x_i, y_i)}_{i=1}^n} ∫₀¹ (f̂(t) − f*(t))² dt.   (4)

Our main goal is to derive the optimal choice of ρ(x) that minimizes the MSE. We start with an analytical characterization of the test error. Let W²₂ = {f : f′ is absolutely continuous and f″ ∈ L²[0, 1]}. We denote the density of X as q(x). The following theorem analytically computes the MSE under the regularization strength ρ(·), building upon (Wang et al., 2013) for regression problems. The proof of the theorem is deferred to Appendix A. Theorem 1. Assume that f*, q, I ∈ W²₂. Let r(t) = −1/(q(t) I(t)) and L₀ = ∫_{−∞}^{∞} (1/4) exp(−2|t|) dt. If we choose λ = C₀ n^{−2/5} for some constant C₀ > 0, the asymptotic mean squared error is

lim_{n→∞} MSE(f̂) = C_n ∫₀¹ { λ² r²(t) [ d/dt (ρ(t) (f*)′(t)) ]² + L₀ r(t)^{1/2} ρ(t)^{−1/2} } dt

in probability, where C_n is a scalar that only depends on n.
Using the analytical formula for the test error above, we want to derive an approximately optimal choice of ρ(x). A precise computation is infeasible, so we restrict ourselves to ρ(x) that is constant within groups of examples. We introduce an additional structure – we assume the data can be divided into k groups [a₀, a₁), [a₁, a₂), · · · , [a_{k−1}, a_k). Each group [a_j, a_{j+1}) consists of an interval of data with approximately the same aleatoric uncertainty. We approximate ρ(t) as constant on each group [a_j, a_{j+1}) with value ρ_j. Plugging this piecewise-constant ρ into the asymptotic MSE in Theorem 1, we obtain

lim_{n→∞} MSE(f̂) = Σ_j [ ρ_j² ∫_{a_j}^{a_{j+1}} r²(t) [d²/dt² f*(t)]² dt + ρ_j^{−1/2} L₀ ∫_{a_j}^{a_{j+1}} r^{1/2}(t) dt ].

Minimizing the above formula over ρ₁, . . . , ρ_k separately, we derive the optimal weights,

ρ_j = [ L₀ ∫_{a_j}^{a_{j+1}} r(t)^{1/2} dt / ( 4 ∫_{a_j}^{a_{j+1}} r²(t) [d²/dt² f*(t)]² dt ) ]^{2/5}.

In practice, we do not know f* and q(x), so we make the following simplifications. We assume that q(t) and I(t) are constant on each interval [a_j, a_{j+1}]; in other words, we assume that q(t) = q_j and I(t) = I_j for all t ∈ [a_j, a_{j+1}]. We further assume that d²/dt² f*(t) is close to a constant on the entire space, because estimating the curvature in high dimensions is difficult. This simplification yields

ρ_j ∝ [ q_j^{−1/2} I_j^{−1/2} / ( q_j^{−2} I_j^{−2} ) ]^{2/5} = q_j^{3/5} I_j^{3/5}.

We find the simplification works well in practice. Adaptive regularization with importance sampling. It is practically infeasible to implement the integration in equation 3 for high-dimensional data. We use importance sampling to approximate the integral:

minimize_f L(f) := (1/n) Σ_{i=1}^n ℓ(f(x_i), y_i) + λ Σ_{i=1}^n τ_i f′(x_i)²   (5)

Suppose x_i ∈ [a_j, a_{j+1}); then τ_i should satisfy τ_i q_j = ρ_j so that the expectation of the regularization term in equation 5 equals that in equation 3. Hence, τ_i = I_j^{3/5} q_j^{−2/5} = I(x_i)^{3/5} q(x_i)^{−2/5}. Adaptive regularization for multi-class classification and regression. In fact, Theorem 1 is proved for a general loss ℓ(a, y). Therefore, we can directly generalize it to multi-class classification and regression problems. For a regression problem, ℓ(a, y) is the square loss ℓ(y, a) = 0.5(y − a)², and the Fisher information is I(x) = 1. Therefore, for a regression problem, we can choose the regularization weight τ_i = q(x_i)^{−2/5}. [1] Note that we integrate the error without weighting because we are interested in the balanced test performance.
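To make the weighting rule above concrete, here is a rough PyTorch sketch that computes per-example weights τ_i = Î(x_i)^{3/5} q̂(x_i)^{−2/5} from plug-in estimates and forms an objective in the spirit of equation 5, using an autograd input-gradient penalty in place of f′(x_i)². The kernel-density estimate of q, the noise proxy for I, the synthetic 1-D data, and the small network are all assumptions of this example, not the paper's implementation.

```python
import torch

def har_weights(I_hat, q_hat):
    """Adaptive regularization weights tau_i = I(x_i)^{3/5} * q(x_i)^{-2/5}."""
    return I_hat.clamp_min(1e-8) ** 0.6 * q_hat.clamp_min(1e-8) ** (-0.4)

def har_objective(model, x, y, tau, lam):
    """Cross-entropy plus an input-dependent gradient (Lipschitz-style) penalty.

    For a scalar-output model on 1-D inputs, d f(x_i)/d x_i obtained by autograd
    stands in for f'(x_i) in equation 5.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x).squeeze(-1)                              # f(x_i)
    ce = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, (y > 0).float())                               # ell(f(x_i), y_i), y in {-1, 1}
    grads = torch.autograd.grad(logits.sum(), x, create_graph=True)[0]
    penalty = (tau * grads.squeeze(-1) ** 2).sum()
    return ce + lam * penalty

# Illustrative usage on synthetic 1-D data.
torch.manual_seed(0)
x = torch.rand(256, 1)
y = torch.where(torch.rand(256) < torch.sigmoid(torch.sin(8 * x.squeeze())),
                torch.ones(256), -torch.ones(256))

# Plug-in estimates (placeholders): a crude kernel density for q and a noise proxy for I.
dist = torch.cdist(x, x)
q_hat = torch.exp(-(dist / 0.05) ** 2).mean(dim=1)            # crude KDE of q(x_i)
p_hat = torch.sigmoid(torch.sin(8 * x.squeeze()))             # pretend estimate of P(Y=1|x)
I_hat = p_hat * (1 - p_hat)                                   # Var(Y|X=x) up to scaling

tau = har_weights(I_hat, q_hat)
model = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = har_objective(model, x, y, tau, lam=1e-3)
    loss.backward()
    opt.step()
```

In a realistic deep-learning setting, q̂ and Î would come from whatever density and noise-level estimators the practitioner trusts, and the penalty would be applied to intermediate-layer Jacobians rather than the raw input gradient; the sketch only illustrates how the τ_i weights enter the objective.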
This paper proposes an adaptive regularization method to handle heteroskedastic and imbalanced datasets, which are closer to real-world large-scale settings. The framework applies a Lipschitz regularizer with varying regularization strength depending on the particular data point. The authors first theoretically study the optimal regularization strength on a one-dimensional binary classification task. By applying some simplifications, the result can be extended to high-dimensional multi-class tasks, and finally the HAR algorithm is proposed. Experiments show that HAR achieves significant improvements over other noise-robust deep learning methods on simulated vision and language datasets with controllable degrees of data noise and data imbalance, as well as a real-world heteroskedastic and imbalanced dataset. However, since the derivations involve many approximations, the reliability needs to be confirmed by more experiments.
SP:7be27202a84037a62bdc651fc24a8450325e0fd6
The authors propose a novel regularization approach aimed at addressing issues of class imbalance and heteroskedasticity. This adaptive approach uses a Lipschitz regularizer with varying strength in different parts of the input space, regularizing harder in cases of rare and noisy examples. The authors derive the optimal regularization strength in the one-dimensional setting, to lay the groundwork for the proposed approach and its application in higher-dimensional settings. The approach is evaluated on multiple image datasets and a textual dataset - and compared to a number of baselines, including those involving noise-cleaning, reweighting-based methods, meta learning, robust loss functions, as well as tuned uniform regularization. The improvements seem quite strong, and clearly demonstrate the utility of the proposed approach. The paper is well structured, clearly written - and was a pleasure to read.
SP:7be27202a84037a62bdc651fc24a8450325e0fd6
Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes
1 INTRODUCTION . Deep models , formed by stacking together many simple layers , give rise to extremely powerful machine learning algorithms , from deep neural networks ( DNNs ) to deep Gaussian processes ( DGPs ) ( Damianou & Lawrence , 2013 ) . One approach to reason about uncertainty in these models is to use variational inference ( VI ) ( Jordan et al. , 1999 ) . VI in Bayesian neural networks ( BNNs ) requires the user to specify a family of approximate posteriors over the weights , with the classical approach being independent Gaussian distributions over each individual weight ( Hinton & Van Camp , 1993 ; Graves , 2011 ; Blundell et al. , 2015 ) . Later work has considered more complex approximate posteriors , for instance using a Matrix-Normal distribution as the approximate posterior for a full weight-matrix ( Louizos & Welling , 2016 ; Ritter et al. , 2018 ) . By contrast , DGPs use an approximate posterior defined over functions — the standard approach is to specify the inputs and outputs at a finite number of “ inducing ” points ( Damianou & Lawrence , 2013 ; Salimbeni & Deisenroth , 2017 ) . Critically , these classical BNN and DGP approaches define approximate posteriors over functions that are independent across layers . An approximate posterior that factorises across layers is problematic , because what matters for a deep model is the overall input-output transformation for the full model , not the input-output transformation for individual layers . This raises the question of what family of approximate posteriors should be used to capture correlations across layers . One approach for BNNs would be to introduce a flexible “ hypernetwork ” , used to generate the weights ( Krueger et al. , 2017 ; Pawlowski et al. , 2017 ) . However , this approach is likely to be suboptimal as it does not sufficiently exploit the rich structure in the underlying neural network . For guidance , we consider the optimal approximate posterior over the top-layer units in a deep network for regression . Remarkably , the optimal approximate posterior for the last-layer weights given the earlier weights can be obtained in closed form without choosing a restrictive family of distributions . In particular , the optimal approximate posterior is given by propagating the training inputs through lower layers to compute the top-layer representation , then using Bayesian linear regression to map from the top-layer representation to the outputs . Inspired by this result , we use Bayesian linear regression to define a generic family of approximate posteriors for BNNs . In particular , we introduce learned “ pseudo-data ” at every layer , and compute the posterior over the weights by performing linear regression from the inputs ( propagated from lower layers ) onto the pseudo-data . We reduce the burden of working with many training inputs by summarising the posterior using a small number of “ inducing ” points . We find that these approximate posteriors give excellent performance in the non-tempered , no-data-augmentation regime , with performance on datasets such as CIFAR-10 reaching 86.7 % , comparable to SGMCMC ( Wenzel et al. , 2020 ) . Our approach can be extended to DGPs , and we explore connections to the inducing point GP literature , showing that inference in the two classes of models can be unified . 2 METHODS . 
We consider neural networks with lower-layer weights {W^ℓ}_{ℓ=1}^L, W^ℓ ∈ R^{N_{ℓ−1}×N_ℓ}, and top-layer weights W^{L+1} ∈ R^{N_L×N_{L+1}}, where the activity F^ℓ at layer ℓ is given by

F¹ = X W¹,  F^ℓ = φ(F^{ℓ−1}) W^ℓ for ℓ ∈ {2, . . . , L},   (1)

where φ(·) is an elementwise nonlinearity. The outputs Y ∈ R^{P×N_{L+1}} depend on the top-level activity F^L and the output weights W^{L+1} according to a likelihood P(Y | W^{L+1}, F^L). In the following derivations, we will focus on ℓ > 1; corresponding expressions for the input layer can be obtained by replacing φ(F⁰) with the inputs X ∈ R^{P×N_0}. The prior over weights is independent across layers and output units (see Sec. 2.3 for the form of S_ℓ),

P(W^ℓ) = ∏_{λ=1}^{N_ℓ} P(w^ℓ_λ),  P(w^ℓ_λ) = N(w^ℓ_λ | 0, (1/N_{ℓ−1}) S_ℓ),   (2)

where w^ℓ_λ is a column of W^ℓ, i.e., all the input weights to unit λ in layer ℓ. To fit the parameters of the approximate posterior Q({W^ℓ}_{ℓ=1}^{L+1}), we maximise the evidence lower bound (ELBO),

L = E_{Q({W^ℓ}_{ℓ=1}^{L+1})}[ log P(Y, {W^ℓ}_{ℓ=1}^{L+1} | X) − log Q({W^ℓ}_{ℓ=1}^{L+1}) ].   (3)

To build intuition about how to parameterise Q({W^ℓ}_{ℓ=1}^{L+1}), we consider the optimal Q(W^{L+1} | {W^ℓ}_{ℓ=1}^L) for any given Q({W^ℓ}_{ℓ=1}^L). We begin by simplifying the ELBO by incorporating terms that do not depend on W^{L+1} into a constant c,

L = E_{Q({W^ℓ}_{ℓ=1}^{L+1})}[ log P(Y, W^{L+1} | X, {W^ℓ}_{ℓ=1}^L) − log Q(W^{L+1} | {W^ℓ}_{ℓ=1}^L) + c ].   (4)

Rearranging these terms, we find that all W^{L+1} dependence can be written in terms of the KL divergence between the approximate posterior of interest and the true posterior,

L = E_{Q({W^ℓ}_{ℓ=1}^L)}[ log P(Y | X, {W^ℓ}_{ℓ=1}^L) − D_KL( Q(W^{L+1} | {W^ℓ}_{ℓ=1}^L) || P(W^{L+1} | Y, X, {W^ℓ}_{ℓ=1}^L) ) + c ].   (5)

Thus, the optimal approximate posterior is

Q(W^{L+1} | {W^ℓ}_{ℓ=1}^L) = P(W^{L+1} | Y, X, {W^ℓ}_{ℓ=1}^L) ∝ P(Y | W^{L+1}, F^L) P(W^{L+1}),   (6)

where the final proportionality comes from applying Bayes' theorem and exploiting the model's conditional independencies. For regression, the likelihood is Gaussian,

P(Y | W^{L+1}, F^L) = ∏_{λ=1}^{N_{L+1}} N( y_λ ; φ(F^L) w^{L+1}_λ , Λ^{−1}_{L+1} ),   (7)

where y_λ is the value of a single output channel for all training inputs, and Λ_{L+1} is a precision matrix. Thus, the posterior is given in closed form by Bayesian linear regression (Rasmussen & Williams, 2006). 2.1 DEFINING THE FULL APPROXIMATE POSTERIOR WITH GLOBAL INDUCING POINTS AND PSEUDO-DATA . We adapt the optimal scheme above to give a scalable approximate posterior over the weights at all layers. To avoid propagating all training inputs through the network, which is intractable for large datasets, we instead propagate M global inducing locations U⁰,

U¹ = U⁰ W¹,  U^ℓ = φ(U^{ℓ−1}) W^ℓ for ℓ = 2, . . . , L + 1.   (8)

Next, the optimal posterior requires outputs Y. However, no outputs are available at inducing locations for the output layer, let alone for intermediate layers. We thus introduce learned variational parameters to mimic the form of the optimal posterior. In particular, we use the product of the prior over weights and a "pseudo-likelihood" N(v^ℓ_λ ; u^ℓ_λ, Λ^{−1}_ℓ), representing noisy "pseudo-observations" of the outputs of the linear layer at the inducing locations, u^ℓ_λ = φ(U^{ℓ−1}) w^ℓ_λ.
Substituting u^ℓ_λ into the pseudo-likelihood, the approximate posterior becomes

Q(W^ℓ | {W^{ℓ′}}_{ℓ′=1}^{ℓ−1}) ∝ ∏_{λ=1}^{N_ℓ} N(v^ℓ_λ ; φ(U^{ℓ−1}) w^ℓ_λ, Λ^{−1}_ℓ) P(w^ℓ_λ),
Q(W^ℓ | {W^{ℓ′}}_{ℓ′=1}^{ℓ−1}) = ∏_{λ=1}^{N_ℓ} N(w^ℓ_λ | Σ^w_ℓ φ(U^{ℓ−1})^T Λ_ℓ v^ℓ_λ, Σ^w_ℓ),
Σ^w_ℓ = ( N_{ℓ−1} S^{−1}_ℓ + φ(U^{ℓ−1})^T Λ_ℓ φ(U^{ℓ−1}) )^{−1},   (9)

where v^ℓ_λ and Λ_ℓ are variational parameters. Therefore, our full approximate posterior factorises as

Q({W^ℓ}_{ℓ=1}^{L+1}) = ∏_{ℓ=1}^{L+1} Q(W^ℓ | {W^{ℓ′}}_{ℓ′=1}^{ℓ−1}).   (10)

Thus, the full ELBO can be written

L = E_{Q({W^ℓ}_{ℓ=1}^{L+1})}[ log P(Y | X, {W^ℓ}_{ℓ=1}^{L+1}) + log P({W^ℓ}_{ℓ=1}^{L+1}) − log Q({W^ℓ}_{ℓ=1}^{L+1}) ]   (11)
  = E_{Q({W^ℓ}_{ℓ=1}^{L+1})}[ log P(Y | X, {W^ℓ}_{ℓ=1}^{L+1}) + Σ_{ℓ=1}^{L+1} log ( P(W^ℓ) / Q(W^ℓ | {W^{ℓ′}}_{ℓ′=1}^{ℓ−1}) ) ].   (12)

The forms of the ELBO and approximate posterior suggest a sequential procedure to evaluate and subsequently optimise it: we alternate between sampling the weights using Eq. (9) and propagating the data and inducing locations using Eq. (8) (see Alg. 1). In summary, the parameters of the approximate posterior are the global inducing inputs U⁰ and the pseudo-data and precisions at all layers, {V^ℓ, Λ_ℓ}_{ℓ=1}^{L+1}. As each factor Q(W^ℓ | {W^{ℓ′}}_{ℓ′=1}^{ℓ−1}) is a Gaussian, these parameters can be optimised using standard reparameterised variational inference (Kingma & Welling, 2013; Rezende et al., 2014) in combination with the Adam optimiser (Kingma & Ba, 2014) (Appendix A). Importantly, by placing inducing inputs on the training data (i.e., U⁰ = X) and setting the top-layer pseudo-outputs v^{L+1}_λ = y_λ, this approximate posterior matches the optimal top-layer posterior (Eq. 6). Finally, we note that while this posterior is conditionally Gaussian, the full posterior over all {W^ℓ}_{ℓ=1}^{L+1} is non-Gaussian, and is thus potentially more flexible than a full-covariance Gaussian over all weights.

Algorithm 1: Global inducing points for neural networks
Parameters: inducing inputs U⁰; inducing outputs and precisions {V^ℓ, Λ_ℓ} at all layers.
Neural network inputs (e.g., MNIST digits): F⁰
Neural network outputs (e.g., classification logits): F^{L+1}
L ← 0
for ℓ ∈ {1, . . . , L + 1} do
  Compute the mean and covariance over the weights at this layer:
    Σ^w_ℓ = ( N_{ℓ−1} S^{−1}_ℓ + φ(U^{ℓ−1})^T Λ_ℓ φ(U^{ℓ−1}) )^{−1}
    M_ℓ = Σ^w_ℓ φ(U^{ℓ−1})^T Λ_ℓ V^ℓ
  Sample the weights and compute the ELBO:
    W^ℓ ∼ N(M_ℓ, Σ^w_ℓ) = Q(W^ℓ | {W^{ℓ′}}_{ℓ′=1}^{ℓ−1})
    L ← L + log P(W^ℓ) − log N(W^ℓ | M_ℓ, Σ^w_ℓ)
  Propagate the inputs and inducing points using the sampled weights:
    U^ℓ = φ(U^{ℓ−1}) W^ℓ
    F^ℓ = φ(F^{ℓ−1}) W^ℓ
end for
L ← L + log P(Y | F^{L+1})
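For intuition, the following NumPy/SciPy sketch walks through one sampling pass of Algorithm 1 for a Gaussian (regression) likelihood. The isotropic prior scale S_ℓ = s·I, the tanh nonlinearity, the fixed noise variance, and the randomly initialised variational parameters are all illustrative assumptions; a practical implementation would instead use reparameterised gradients (e.g., in PyTorch) to optimise U⁰, V^ℓ, and Λ_ℓ as described in the text.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(0)

def sample_layer(U_in, V, Lam, s=1.0):
    """Sample one layer's weights from the conditional Gaussian of Eq. (9).

    U_in: (M, N_prev) inducing activations after the nonlinearity; V: (M, N_out)
    pseudo-outputs; Lam: (M,) pseudo-precisions. The prior scale S_l = s*I is a
    placeholder choice. Returns a weight sample and log P(W) - log Q(W | earlier layers).
    """
    M, N_prev = U_in.shape
    prior_cov = (s / N_prev) * np.eye(N_prev)
    Sigma = np.linalg.inv(np.linalg.inv(prior_cov) + (U_in.T * Lam) @ U_in)
    Mean = Sigma @ (U_in.T * Lam) @ V
    W = Mean + np.linalg.cholesky(Sigma) @ rng.normal(size=Mean.shape)
    log_p = sum(mvn.logpdf(W[:, j], mean=np.zeros(N_prev), cov=prior_cov)
                for j in range(W.shape[1]))
    log_q = sum(mvn.logpdf(W[:, j], mean=Mean[:, j], cov=Sigma)
                for j in range(W.shape[1]))
    return W, log_p - log_q

def elbo_sample(X, Y, U0, Vs, Lams, phi=np.tanh, noise_var=0.1):
    """One stochastic ELBO evaluation following Algorithm 1, with a Gaussian likelihood."""
    elbo, F, U = 0.0, X, U0
    for l, (V, Lam) in enumerate(zip(Vs, Lams)):
        F_in = F if l == 0 else phi(F)          # input layer uses the raw inputs
        U_in = U if l == 0 else phi(U)
        W, kl_term = sample_layer(U_in, V, Lam)
        elbo += kl_term
        F, U = F_in @ W, U_in @ W               # propagate data and inducing points
    elbo += -0.5 * np.sum((Y - F) ** 2) / noise_var \
            - 0.5 * Y.size * np.log(2 * np.pi * noise_var)
    return elbo

# Tiny illustrative run: a 1-16-1 network, 10 global inducing points, toy regression data.
P, M = 50, 10
X = rng.uniform(-2, 2, size=(P, 1)); Y = np.sin(3 * X) + 0.1 * rng.normal(size=(P, 1))
U0 = rng.uniform(-2, 2, size=(M, 1))
Vs = [rng.normal(size=(M, 16)), rng.normal(size=(M, 1))]
Lams = [np.ones(M), np.ones(M)]
print(elbo_sample(X, Y, U0, Vs, Lams))
```

Because each layer's weights are sampled conditioned on the inducing representation produced by the layers below, repeated calls give correlated samples across layers, which is exactly the cross-layer dependence the approximate posterior is designed to capture.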
The paper proposes a new way of doing Bayesian deep learning in which the optimal conditional posterior for the last-layer weights can be reached if the inducing input $Z_0$ is chosen to be the input data $X$ and the pseudo-observation for the last layer $V^L$ is the observation $Y$. Instead of factorizing the inducing points across layers, the global inducing input $Z_0$ is propagated through the network to ensure posterior dependencies across layers. The authors also extend this idea to deep Gaussian processes so that the latent functions across layers are correlated. Experiments show better performance than previous methods on both synthetic and real datasets, without the need to anneal the weight of the KL term in the ELBO.
SP:cda2c05c55cce270fdb88ee63ad828dc7f91bc7a
This paper proposes a posterior approximation for BNNs that models correlations between the layers' weights. The paper begins by pointing out that, for any posterior distribution approximation, the optimal conditional posterior distribution over the top-layer weights given the weights of the previous layers has a closed form in the case of a Gaussian likelihood, which, due to Bayes' rule, turns out to be the product of the likelihood and the top-layer prior. Based on this insight, the paper proposes to model each conditional posterior distribution over intermediate-layer weights given the previous layers' weights following the same structure, that is, as a product of a 'pseudo-likelihood' over unobserved noisy activations and the prior for that layer. In order to make inference tractable, the paper proposes the use of global inducing points as well as noisy pseudo-observations of the activations of intermediate layers, which are treated as variational parameters. The paper also describes how such a procedure applies to convolutional neural networks and how it can be applied to DGPs as well.
SP:cda2c05c55cce270fdb88ee63ad828dc7f91bc7a
Learning Manifold Patch-Based Representations of Man-Made Shapes
Choosing the right representation for geometry is crucial for making 3D models compatible with existing applications . Focusing on piecewise-smooth man-made shapes , we propose a new representation that is usable in conventional CAD modeling pipelines and can also be learned by deep neural networks . We demonstrate its benefits by applying it to the task of sketch-based modeling . Given a raster image , our system infers a set of parametric surfaces that realize the input in 3D . To capture piecewise smooth geometry , we learn a special shape representation : a deformable parametric template composed of Coons patches . Naı̈vely training such a system , however , is hampered by non-manifold artifacts in the parametric shapes and by a lack of data . To address this , we introduce loss functions that bias the network to output non-self-intersecting shapes and implement them as part of a fully self-supervised system , automatically generating both shape templates and synthetic training data . We develop a testbed for sketch-based modeling , demonstrate shape interpolation , and provide comparison to related work . 1 INTRODUCTION . While state-of-the art deep learning systems that output 3D geometry as point clouds , triangle meshes , voxel grids , and implicit surfaces yield detailed results , these representations are dense , highdimensional , and incompatible with CAD modeling pipelines . In this work , we develop a 3D representation that is parsimonious , geometrically interpretable , and easily editable with standard tools , while being compatible with deep learning . This enables a shape modeling system leveraging the ability of neural networks to process incomplete , ambiguous input and produces useful , consistent 3D output . Our primary technical contributions involve the development of machinery for learning parametric 3D surfaces in a fashion that is efficiently compatible with modern deep learning pipelines and effective for a challenging 3D modeling task . We automatically infer a template per shape category and incorporate loss functions that operate explicitly on the geometry rather than in the parametric domain or on a sampling of surrounding space . Extending learning methodologies from images and point sets to more exotic modalities like networks of surface patches is a central theme of modern graphics , vision , and learning research , and we anticipate broad application of these developments in CAD workflows . To test our system , we choose sketch-based modeling as a target application . Converting rough , incomplete 2D input into a clean , complete 3D shape is extremely ill-posed , requiring hallucination of missing parts and interpretation of noisy signal . To cope with these ambiguities , existing systems either rely on hand-designed priors , severely limiting applications , or learn the shapes from data , implicitly inferring relevant priors ( Delanoy et al. , 2018 ; Wang et al. , 2018a ; Lun et al. , 2017 ) . However , the output of the latter methods often lacks resolution and sharp features necessary for high-quality 3D modeling . In industrial design , man-made shapes are typically modeled as collections of smooth parametric patches ( e.g. , NURBS surfaces ) whose boundaries form the sharp features . To learn such shapes effectively , we use a deformable parametric template ( Jain et al. , 1998 ) —a manifold surface composed of patches , each parameterized by control points ( Fig . 3a ) . 
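To give a concrete feel for the patch primitive just mentioned, the snippet below evaluates a bilinearly blended Coons patch from its four boundary curves, here taken to be cubic Bézier curves given by control points. This is the generic textbook construction offered only as an illustration; the control-point layout, the Bézier boundaries, and the sampling resolution are assumptions of the example, not details of the paper's learned template.

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a cubic Bezier curve with control points ctrl (4, 3) at parameters t (n,)."""
    t = t[:, None]
    b = [(1 - t) ** 3, 3 * (1 - t) ** 2 * t, 3 * (1 - t) * t ** 2, t ** 3]
    return sum(bi * ci for bi, ci in zip(b, ctrl))

def coons_patch(c0, c1, d0, d1, n=20):
    """Bilinearly blended Coons patch from four boundary curves.

    c0(u), c1(u) are the v=0 and v=1 boundaries; d0(v), d1(v) are the u=0 and u=1
    boundaries. The patch is the sum of the two lofts minus the bilinear corner patch.
    """
    u = np.linspace(0.0, 1.0, n)
    v = np.linspace(0.0, 1.0, n)
    C0, C1, D0, D1 = bezier(c0, u), bezier(c1, u), bezier(d0, v), bezier(d1, v)
    U, V = np.meshgrid(u, v, indexing='ij')                  # (n, n) parameter grid
    P00, P01, P10, P11 = c0[0], c1[0], c0[-1], c1[-1]        # shared corner points

    loft_v = (1 - V)[..., None] * C0[:, None, :] + V[..., None] * C1[:, None, :]
    loft_u = (1 - U)[..., None] * D0[None, :, :] + U[..., None] * D1[None, :, :]
    bilin = ((1 - U) * (1 - V))[..., None] * P00 + ((1 - U) * V)[..., None] * P01 \
          + (U * (1 - V))[..., None] * P10 + (U * V)[..., None] * P11
    return loft_v + loft_u - bilin                           # (n, n, 3) surface samples

# Example: a gently curved quadrilateral patch (all control points are made up).
c0 = np.array([[0, 0, 0], [1, 0, 0.3], [2, 0, 0.3], [3, 0, 0]], float)      # v = 0 edge
c1 = np.array([[0, 2, 0], [1, 2, 0.5], [2, 2, 0.5], [3, 2, 0]], float)      # v = 1 edge
d0 = np.array([[0, 0, 0], [0, 0.7, 0.2], [0, 1.3, 0.2], [0, 2, 0]], float)  # u = 0 edge
d1 = np.array([[3, 0, 0], [3, 0.7, 0.2], [3, 1.3, 0.2], [3, 2, 0]], float)  # u = 1 edge
surface = coons_patch(c0, c1, d0, d1)
print(surface.shape)   # (20, 20, 3); patches that share boundary curves meet exactly
```

Because neighbouring patches that share a boundary curve interpolate that curve exactly, a network only has to predict boundary control points, and the surface can be meshed at any resolution, which is what makes this primitive attractive for the learning setup described here.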
This representation enables the model to control the smoothness of each patch and introduce sharp edges between patches where necessary . Compared to traditional representations , deformable parametric templates have numerous benefits for our task . They are intuitive to edit with conventional software , are resolution-independent , and can be meshed to arbitrary accuracy . Since only boundary control points are needed , our representation has relatively few parameters . Finally , this structure admits closed-form expressions for normals and other geometric features , which can be used for loss functions that improve reconstruction quality ( §3.2 ) . Training a model for such representations faces three major challenges : detection of non-manifold surfaces , structural variation within shape categories , and lack of data . We address them as follows : • We introduce several loss functions that encourage our patch-based output to form a manifold mesh without topological artifacts or self-intersections . • Some categories of man-made shapes exhibit structural variation . To address this , for each category we algorithmically generate a varying deformable template , which allows us to support separate structural variation using a variable number of parts ( §3.1 ) . • Supervised methods mapping from sketches to 3D require a database of sketch-model pairs , and , to-date , there are no such large-scale repositories . We use a synthetic sketch augmentation pipeline inspired by artistic literature to simulate variations observed in natural drawings ( §4.1 ) . Although our model is trained on synthetic sketches , it generalizes to natural sketches . Our method is self-supervised : We predict patch parameters , but our data is not labeled with patches . 2 RELATED WORK . 2.1 DEEP LEARNING FOR SHAPE RECONSTRUCTION . Learning to reconstruct 3D geometry has recently enjoyed significant research interest . Typical forms of input are images ( Gao et al. , 2019 ; Wu et al. , 2017 ; Delanoy et al. , 2018 ; Häne et al. , 2019 ) and point clouds ( Williams et al. , 2019 ; Groueix et al. , 2018 ; Park et al. , 2019 ) . When designing a network for this task , two considerations affect the architecture : the loss function and the geometric representation . Loss Functions . One popular direction employs a differentiable renderer and measures 2D image loss between a rendering of the inferred 3D model and the input image ( Kato et al. , 2018 ; Wu et al. , 2016 ; Yan et al. , 2016 ; Rezende et al. , 2016 ; Wu et al. , 2017 ; Tulsiani et al. , 2017b ; 2018 ) . A notable example is the work by Wu et al . ( 2017 ) , which learns a mapping from a photograph to a normal map , a depth map , a silhouette , and the mapping from these outputs to a voxelization . They use a differentiable renderer and measure inconsistencies in 2D . Hand-drawn sketches , however , can not be interpreted as perfect projections of 3D objects : They are imprecise and often inconsistent ( Bessmeltsev et al. , 2016 ) . Another approach uses 3D loss functions , measuring discrepancies between the predicted and target 3D shapes directly , often via Chamfer or a regularized Wasserstein distance ( Williams et al. , 2019 ; Liu et al. , 2010 ; Mandikal et al. , 2018 ; Groueix et al. , 2018 ; Park et al. , 2019 ; Gao et al. , 2019 ; Häne et al. , 2019 ) . We build on this work , adapting Chamfer distance to patch-based geometric representations and extending the loss function with new regularizers ( §3.2 ) . Shape representation . 
As noted by Park et al . ( 2019 ) , geometric representations in deep learning broadly can be divided into three classes : voxel- , point- , and mesh-based . Voxel-based methods ( Delanoy et al. , 2018 ; Wu et al. , 2017 ; Zhang et al. , 2018 ; Wu et al. , 2018 ) yield dense reconstruction that are limited in resolution , offer no topological guarantees , and can not represent sharp features . Point-based approaches represent geometry as a point cloud ( Yin et al. , 2018 ; Mandikal et al. , 2018 ; Fan et al. , 2017 ; Lun et al. , 2017 ; Yang et al. , 2018 ) , sidestepping memory issues , but do not capture manifold connectivity . Some recent methods represent shapes using meshes ( Bagautdinov et al. , 2018 ; Baque et al. , 2018 ; Litany et al. , 2018 ; Kanazawa et al. , 2018 ; Wang et al. , 2019 ; Nash et al. , 2020 ) . Our parametric template representation allows us to more easily enforce piecewise smoothness and test for selfintersections ( §3.2 ) . These properties are difficult to measure on meshes in a differentiable manner . Compared to a generic template shape , such as a sphere , our category-specific templates improve reconstruction quality and enable complex reconstruction constraints , e.g. , symmetry . We compare to the deformable mesh representations in §4.5 . Most importantly , our shape representation is native to modern CAD software and can be directly edited in this software , as demonstrated in Figure 1 . The key to this flexibility is the type of the parametric patches we use , which can be trivially converted to a NURBS representation ( Piegl & Tiller , 1996 ) , the standard surface type in CAD . Other common shape representations , such as meshes or point clouds , can not be easily converted into NURBS format : algorithmically fitting NURBS surfaces is nontrivial and is an active area of research ( Yumer & Kara , 2012 ; Krishnamurthy & Levoy , 1996 ) . Finally , some recent works parameterize 3D geometry using learned deep neural networks—e.g. , they learn an implicit representation ( Mescheder et al. , 2019 ; Chen & Zhang , 2019 ; Genova et al. , 2019 ) or a mapping from parameter space to a collection of patches ( Groueix et al. , 2018 ; Deng et al. , 2020 ) . These demonstrate impressive results but are not tuned to CAD applications ; it is unclear how their output can be converted to editable CAD shape representations . 2.2 SKETCH-BASED 3D SHAPE MODELING . 3D reconstruction from sketches has a long history in graphics . A survey is beyond the scope of this paper ; see ( Delanoy et al. , 2018 ) or surveys by Ding & Liu ( 2016 ) and Olsen et al . ( 2009 ) . Unlike incremental sketch-based 3D modeling , where users progressively add new strokes ( Cherlin et al. , 2005 ; Gingold et al. , 2009 ; Chen et al. , 2013 ; Igarashi et al. , 1999 ) , our method interprets complete sketches , eliminating training for artists and enabling 3D reconstruction of legacy sketches . Some systems interpret complete sketches without extra information . This input is extremely ambiguous thanks to occlusions and inaccuracies . Hence , reconstruction algorithms rely on strong 3D shape priors . These priors are typically manually created , e.g. , for humanoids , animals , and natural shapes ( Bessmeltsev et al. , 2015 ; Entem et al. , 2015 ; Igarashi et al. , 1999 ) . Our work focuses on man-made shapes , which have characteristic sharp edges and are only piecewise smooth . Rather than relying on expert-designed priors , we automatically learn category-specific shape priors . 
A few deep learning approaches address sketch-based modeling . Nishida et al . ( 2016 ) and Huang et al . ( 2017 ) train networks to predict procedural model parameters that yield detailed shapes from a sketch . These methods produce complex high-resolution models but only for shapes that can be procedurally generated . Lun et al . ( 2017 ) use a CNN-based architecture to predict multi-view depth and normal maps , later converted to point clouds ; Li et al . ( 2018 ) improve on their results by first predicting a flow field from an annotated sketch . In contrast , we output a deformable parametric template , which can be converted to a manifold mesh without post-processing . Wang et al . ( 2018a ) learn from unlabeled databases of sketches and 3D models with no correspondence using an adversarial training approach . Another inspiration for our research is the work of Delanoy et al . ( 2018 ) , which reconstructs a 3D object as a voxel grid ; we compare to this work in Fig . 7 .
This paper presents a method that leverages parametric surface patches as the fundamental representation in the task of shape modeling and reconstruction. This method requires a pre-generated template for each shape category. Several losses are specially designed to regularize the generation of the surface patches. Empirical results have demonstrated the performance of the proposed method in sketch-based shape reconstruction and 3D shape interpolation.
SP:71cba50055f4eaa6e1cc1f3cc40789788f26d60d
Learning Manifold Patch-Based Representations of Man-Made Shapes
Choosing the right representation for geometry is crucial for making 3D models compatible with existing applications . Focusing on piecewise-smooth man-made shapes , we propose a new representation that is usable in conventional CAD modeling pipelines and can also be learned by deep neural networks . We demonstrate its benefits by applying it to the task of sketch-based modeling . Given a raster image , our system infers a set of parametric surfaces that realize the input in 3D . To capture piecewise smooth geometry , we learn a special shape representation : a deformable parametric template composed of Coons patches . Naïvely training such a system , however , is hampered by non-manifold artifacts in the parametric shapes and by a lack of data . To address this , we introduce loss functions that bias the network to output non-self-intersecting shapes and implement them as part of a fully self-supervised system , automatically generating both shape templates and synthetic training data . We develop a testbed for sketch-based modeling , demonstrate shape interpolation , and provide comparison to related work . 1 INTRODUCTION . While state-of-the-art deep learning systems that output 3D geometry as point clouds , triangle meshes , voxel grids , and implicit surfaces yield detailed results , these representations are dense , high-dimensional , and incompatible with CAD modeling pipelines . In this work , we develop a 3D representation that is parsimonious , geometrically interpretable , and easily editable with standard tools , while being compatible with deep learning . This enables a shape modeling system that leverages the ability of neural networks to process incomplete , ambiguous input and produces useful , consistent 3D output . Our primary technical contributions involve the development of machinery for learning parametric 3D surfaces in a fashion that is efficiently compatible with modern deep learning pipelines and effective for a challenging 3D modeling task . We automatically infer a template per shape category and incorporate loss functions that operate explicitly on the geometry rather than in the parametric domain or on a sampling of surrounding space . Extending learning methodologies from images and point sets to more exotic modalities like networks of surface patches is a central theme of modern graphics , vision , and learning research , and we anticipate broad application of these developments in CAD workflows . To test our system , we choose sketch-based modeling as a target application . Converting rough , incomplete 2D input into a clean , complete 3D shape is extremely ill-posed , requiring hallucination of missing parts and interpretation of noisy signal . To cope with these ambiguities , existing systems either rely on hand-designed priors , severely limiting applications , or learn the shapes from data , implicitly inferring relevant priors ( Delanoy et al. , 2018 ; Wang et al. , 2018a ; Lun et al. , 2017 ) . However , the output of the latter methods often lacks resolution and sharp features necessary for high-quality 3D modeling . In industrial design , man-made shapes are typically modeled as collections of smooth parametric patches ( e.g. , NURBS surfaces ) whose boundaries form the sharp features . To learn such shapes effectively , we use a deformable parametric template ( Jain et al. , 1998 ) —a manifold surface composed of patches , each parameterized by control points ( Fig . 3a ) . 
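To make the patch parameterization concrete, below is a minimal numerical sketch of a bilinearly blended Coons patch whose four boundary curves are cubic Bézier curves defined by boundary control points; the exact patch formulation, corner conventions, and degrees used in the paper may differ.

```python
import numpy as np

def bezier_curve(ctrl, t):
    """Evaluate a cubic Bezier curve with control points ctrl (4, 3) at parameters t (n,)."""
    t = np.asarray(t, dtype=float)[:, None]
    return ((1 - t) ** 3 * ctrl[0] + 3 * (1 - t) ** 2 * t * ctrl[1]
            + 3 * (1 - t) * t ** 2 * ctrl[2] + t ** 3 * ctrl[3])

def coons_patch(c0, c1, d0, d1, s, t):
    """Bilinearly blended Coons patch from four cubic Bezier boundary curves.

    c0(s), c1(s) are the bottom/top boundaries and d0(t), d1(t) the left/right
    boundaries; they must share corners: c0(0)=d0(0), c0(1)=d1(0), c1(0)=d0(1),
    c1(1)=d1(1). s and t are 1-D parameter grids in [0, 1].
    """
    S, T = np.meshgrid(s, t, indexing="ij")                   # (ns, nt)
    Cs0, Cs1 = bezier_curve(c0, s), bezier_curve(c1, s)       # (ns, 3)
    Dt0, Dt1 = bezier_curve(d0, t), bezier_curve(d1, t)       # (nt, 3)
    loft_t = (1 - T)[..., None] * Cs0[:, None] + T[..., None] * Cs1[:, None]
    loft_s = (1 - S)[..., None] * Dt0[None] + S[..., None] * Dt1[None]
    P00, P10, P01, P11 = c0[0], c0[3], c1[0], c1[3]
    corners = (((1 - S) * (1 - T))[..., None] * P00 + (S * (1 - T))[..., None] * P10
               + ((1 - S) * T)[..., None] * P01 + (S * T)[..., None] * P11)
    return loft_t + loft_s - corners                          # (ns, nt, 3) surface samples

# Example: a flat unit square, i.e. all four boundary curves are straight lines.
s = t = np.linspace(0.0, 1.0, 17)
c0 = np.array([[0, 0, 0], [1/3, 0, 0], [2/3, 0, 0], [1, 0, 0]], dtype=float)
c1 = np.array([[0, 1, 0], [1/3, 1, 0], [2/3, 1, 0], [1, 1, 0]], dtype=float)
d0 = np.array([[0, 0, 0], [0, 1/3, 0], [0, 2/3, 0], [0, 1, 0]], dtype=float)
d1 = np.array([[1, 0, 0], [1, 1/3, 0], [1, 2/3, 0], [1, 1, 0]], dtype=float)
surface = coons_patch(c0, c1, d0, d1, s, t)                   # (17, 17, 3) grid of points
```

Because every surface point is a polynomial in the boundary control points, its gradients with respect to those control points are readily available, which is what makes this kind of representation convenient for gradient-based training.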
This representation enables the model to control the smoothness of each patch and introduce sharp edges between patches where necessary . Compared to traditional representations , deformable parametric templates have numerous benefits for our task . They are intuitive to edit with conventional software , are resolution-independent , and can be meshed to arbitrary accuracy . Since only boundary control points are needed , our representation has relatively few parameters . Finally , this structure admits closed-form expressions for normals and other geometric features , which can be used for loss functions that improve reconstruction quality ( §3.2 ) . Training a model for such representations faces three major challenges : detection of non-manifold surfaces , structural variation within shape categories , and lack of data . We address them as follows : • We introduce several loss functions that encourage our patch-based output to form a manifold mesh without topological artifacts or self-intersections . • Some categories of man-made shapes exhibit structural variation . To address this , for each category we algorithmically generate a varying deformable template , which allows us to support separate structural variation using a variable number of parts ( §3.1 ) . • Supervised methods mapping from sketches to 3D require a database of sketch-model pairs , and , to-date , there are no such large-scale repositories . We use a synthetic sketch augmentation pipeline inspired by artistic literature to simulate variations observed in natural drawings ( §4.1 ) . Although our model is trained on synthetic sketches , it generalizes to natural sketches . Our method is self-supervised : We predict patch parameters , but our data is not labeled with patches . 2 RELATED WORK . 2.1 DEEP LEARNING FOR SHAPE RECONSTRUCTION . Learning to reconstruct 3D geometry has recently enjoyed significant research interest . Typical forms of input are images ( Gao et al. , 2019 ; Wu et al. , 2017 ; Delanoy et al. , 2018 ; Häne et al. , 2019 ) and point clouds ( Williams et al. , 2019 ; Groueix et al. , 2018 ; Park et al. , 2019 ) . When designing a network for this task , two considerations affect the architecture : the loss function and the geometric representation . Loss Functions . One popular direction employs a differentiable renderer and measures 2D image loss between a rendering of the inferred 3D model and the input image ( Kato et al. , 2018 ; Wu et al. , 2016 ; Yan et al. , 2016 ; Rezende et al. , 2016 ; Wu et al. , 2017 ; Tulsiani et al. , 2017b ; 2018 ) . A notable example is the work by Wu et al . ( 2017 ) , which learns a mapping from a photograph to a normal map , a depth map , a silhouette , and the mapping from these outputs to a voxelization . They use a differentiable renderer and measure inconsistencies in 2D . Hand-drawn sketches , however , can not be interpreted as perfect projections of 3D objects : They are imprecise and often inconsistent ( Bessmeltsev et al. , 2016 ) . Another approach uses 3D loss functions , measuring discrepancies between the predicted and target 3D shapes directly , often via Chamfer or a regularized Wasserstein distance ( Williams et al. , 2019 ; Liu et al. , 2010 ; Mandikal et al. , 2018 ; Groueix et al. , 2018 ; Park et al. , 2019 ; Gao et al. , 2019 ; Häne et al. , 2019 ) . We build on this work , adapting Chamfer distance to patch-based geometric representations and extending the loss function with new regularizers ( §3.2 ) . Shape representation . 
As noted by Park et al . ( 2019 ) , geometric representations in deep learning can broadly be divided into three classes : voxel- , point- , and mesh-based . Voxel-based methods ( Delanoy et al. , 2018 ; Wu et al. , 2017 ; Zhang et al. , 2018 ; Wu et al. , 2018 ) yield dense reconstructions that are limited in resolution , offer no topological guarantees , and cannot represent sharp features . Point-based approaches represent geometry as a point cloud ( Yin et al. , 2018 ; Mandikal et al. , 2018 ; Fan et al. , 2017 ; Lun et al. , 2017 ; Yang et al. , 2018 ) , sidestepping memory issues , but do not capture manifold connectivity . Some recent methods represent shapes using meshes ( Bagautdinov et al. , 2018 ; Baque et al. , 2018 ; Litany et al. , 2018 ; Kanazawa et al. , 2018 ; Wang et al. , 2019 ; Nash et al. , 2020 ) . Our parametric template representation allows us to more easily enforce piecewise smoothness and test for self-intersections ( §3.2 ) . These properties are difficult to measure on meshes in a differentiable manner . Compared to a generic template shape , such as a sphere , our category-specific templates improve reconstruction quality and enable complex reconstruction constraints , e.g. , symmetry . We compare to the deformable mesh representations in §4.5 . Most importantly , our shape representation is native to modern CAD software and can be directly edited in this software , as demonstrated in Figure 1 . The key to this flexibility is the type of the parametric patches we use , which can be trivially converted to a NURBS representation ( Piegl & Tiller , 1996 ) , the standard surface type in CAD . Other common shape representations , such as meshes or point clouds , cannot be easily converted into NURBS format : algorithmically fitting NURBS surfaces is nontrivial and is an active area of research ( Yumer & Kara , 2012 ; Krishnamurthy & Levoy , 1996 ) . Finally , some recent works parameterize 3D geometry using learned deep neural networks—e.g. , they learn an implicit representation ( Mescheder et al. , 2019 ; Chen & Zhang , 2019 ; Genova et al. , 2019 ) or a mapping from parameter space to a collection of patches ( Groueix et al. , 2018 ; Deng et al. , 2020 ) . These demonstrate impressive results but are not tuned to CAD applications ; it is unclear how their output can be converted to editable CAD shape representations . 2.2 SKETCH-BASED 3D SHAPE MODELING . 3D reconstruction from sketches has a long history in graphics . A survey is beyond the scope of this paper ; see ( Delanoy et al. , 2018 ) or surveys by Ding & Liu ( 2016 ) and Olsen et al . ( 2009 ) . Unlike incremental sketch-based 3D modeling , where users progressively add new strokes ( Cherlin et al. , 2005 ; Gingold et al. , 2009 ; Chen et al. , 2013 ; Igarashi et al. , 1999 ) , our method interprets complete sketches , eliminating training for artists and enabling 3D reconstruction of legacy sketches . Some systems interpret complete sketches without extra information . This input is extremely ambiguous due to occlusions and inaccuracies . Hence , reconstruction algorithms rely on strong 3D shape priors . These priors are typically manually created , e.g. , for humanoids , animals , and natural shapes ( Bessmeltsev et al. , 2015 ; Entem et al. , 2015 ; Igarashi et al. , 1999 ) . Our work focuses on man-made shapes , which have characteristic sharp edges and are only piecewise smooth . Rather than relying on expert-designed priors , we automatically learn category-specific shape priors . 
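As a rough illustration of the kind of differentiable smoothness measurement discussed above, the sketch below estimates patch normals by finite differences and penalizes normal disagreement along a shared seam between two patches. This is only an assumed setup: the paper derives closed-form normals and uses its own regularizers, and the seam convention here (the s = 1 boundary of one patch meeting the s = 0 boundary of the next) is purely illustrative.

```python
import numpy as np

def patch_normal(patch_fn, s, t, eps=1e-4):
    """Approximate unit normal of a parametric patch at (s, t) via central differences.

    patch_fn maps scalar (s, t) to a 3-D point; polynomial patches (e.g. Coons
    patches with Bezier boundaries) extend smoothly slightly outside [0, 1],
    so evaluating at s +/- eps near the boundary is safe.
    """
    du = (patch_fn(s + eps, t) - patch_fn(s - eps, t)) / (2 * eps)
    dv = (patch_fn(s, t + eps) - patch_fn(s, t - eps)) / (2 * eps)
    n = np.cross(du, dv)
    return n / (np.linalg.norm(n) + 1e-12)

def seam_smoothness(patch_a, patch_b, n_samples=32):
    """Penalty that is 0 when the two patches meet with matching normals along the seam."""
    ts = np.linspace(0.0, 1.0, n_samples)
    cos = [patch_normal(patch_a, 1.0, t) @ patch_normal(patch_b, 0.0, t) for t in ts]
    # The absolute value tolerates opposite normal orientations across patches.
    return float(np.mean(1.0 - np.abs(cos)))
```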
A few deep learning approaches address sketch-based modeling . Nishida et al . ( 2016 ) and Huang et al . ( 2017 ) train networks to predict procedural model parameters that yield detailed shapes from a sketch . These methods produce complex high-resolution models but only for shapes that can be procedurally generated . Lun et al . ( 2017 ) use a CNN-based architecture to predict multi-view depth and normal maps , later converted to point clouds ; Li et al . ( 2018 ) improve on their results by first predicting a flow field from an annotated sketch . In contrast , we output a deformable parametric template , which can be converted to a manifold mesh without post-processing . Wang et al . ( 2018a ) learn from unlabeled databases of sketches and 3D models with no correspondence using an adversarial training approach . Another inspiration for our research is the work of Delanoy et al . ( 2018 ) , which reconstructs a 3D object as a voxel grid ; we compare to this work in Fig . 7 .
The paper proposes a self-supervised method to fit a template (represented as a union of Coons patches) to a given 2D sketch. It derives a way to build a proper template, uses a network to predict the patches' parameters, and proposes a combination of different losses. Qualitative results of the method are shown on several different objects.
SP:71cba50055f4eaa6e1cc1f3cc40789788f26d60d
A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
1 INTRODUCTION . The inference-time computational demands of deep neural networks ( DNNs ) are increasing , owing to the “ going deeper ” ( Szegedy et al. , 2015 ) strategy for improving accuracy : as a DNN gets deeper , it progressively gains the ability to learn higher-level , complex representations . This strategy has enabled breakthroughs in many tasks , such as image classification ( Krizhevsky et al. , 2012 ) or speech recognition ( Hinton et al. , 2012 ) , at the price of costly inferences . For instance , with 4× more inference cost , a 56-layer ResNet ( He et al. , 2016 ) improved the Top-1 accuracy on ImageNet by 19 % over the 8-layer AlexNet . This trend continued with the 57-layer state-of-the-art EfficientNet ( Tan & Le , 2019 ) : it improved the accuracy by 10 % over ResNet , with 9× costlier inferences . The accuracy improvements stem from the fact that the deeper networks fix the mistakes of the shallow ones ( Huang et al. , 2018 ) . This implies that some samples , which are already correctly classified by shallow networks , do not necessitate the extra complexity . This observation has motivated research on input-adaptive mechanisms , in particular , multi-exit architectures ( Teerapittayanon et al. , 2016 ; Huang et al. , 2018 ; Kaya et al. , 2019 ; Hu et al. , 2020 ) . Multi-exit architectures save computation by making input-specific decisions about bypassing the remaining layers , once the model becomes confident , and are orthogonal to techniques that achieve savings by permanently modifying the model ( Li et al. , 2016 ; Banner et al. , 2018 ; Han et al. , 2015 ; Taylor et al. , 2018 ) . Figure 1 illustrates how a multi-exit model ( Kaya et al. , 2019 ) , based on a standard VGG-16 architecture , correctly classifies a selection of test images from ‘ Tiny ImageNet ’ before the final layer . We see that more typical samples , which have more supporting examples in the training set , require less depth and , therefore , less computation . It is unknown if the computational savings provided by multi-exit architectures are robust against adversarial pressure . Prior research showed that DNNs are vulnerable to a wide range of attacks , which involve imperceptible input perturbations ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ; Papernot et al. , 2016 ; Hu et al. , 2020 ) . Considering that a multi-exit model , on the worst-case input , does not provide any computational savings , we ask : Can the savings from multi-exit models be maliciously negated by input perturbations ? As some natural inputs do require the full depth of the model , it may be possible to craft adversarial examples that delay the correct decision ; it is unclear , however , how many inputs can be delayed with imperceptible perturbations . Furthermore , it is unknown if universal versions of these adversarial examples exist , if the examples transfer across multi-exit architectures and datasets , or if existing defenses ( e.g . adversarial training ) are effective against slowdown attacks . Threat Model . We consider a new threat against DNNs , analogous to the denial-of-service ( DoS ) attacks that have been plaguing the Internet for decades . By imperceptibly perturbing the input to trigger this worst case , the adversary aims to slow down the inferences and increase the cost of using the DNN . This is an important threat for many practical applications , which impose strict limits on the responsiveness and resource usage of DNN models ( e.g . 
in the Internet-of-Things ( Taylor et al. , 2018 ) ) , because the adversary could push the victim outside these limits . For example , against a commercial image classification system , such as Clarifai.com , a slowdown attack might waste valuable computational resources . Against a model partitioning scheme , such as Big-Little ( De Coninck et al. , 2015 ) , it might introduce network latency by forcing excessive transmissions between local and remote models . A slowdown attack aims to force the victim to do more work than the adversary , e.g . by amplifying the latency needed to process the sample or by crafting reusable perturbations . The adversary may have to achieve this with incomplete information about the multi-exit architecture targeted , the training data used by the victim , or the classification task ( see discussion in Appendix A ) . Our Contributions . To the best of our knowledge , we conduct the first study of the robustness of multi-exit architectures against adversarial slowdowns . To this end , we find that examples crafted by prior evasion attacks ( Madry et al. , 2017 ; Hu et al. , 2020 ) fail to bypass the victim model ’ s early exits , and we show that an adversary can adapt such attacks to the goal of model slowdown by modifying their objective function . We call the resulting attack DeepSloth . We also propose an efficacy metric for comparing slowdowns across different multi-exit architectures . We experiment with three generic multi-exit DNNs ( based on VGG16 , ResNet56 and MobileNet ) ( Kaya et al. , 2019 ) and a specially-designed multi-exit architecture , MSDNets ( Huang et al. , 2018 ) , on two popular image classification benchmarks ( CIFAR-10 and Tiny ImageNet ) . We find that DeepSloth reduces the efficacy of multi-exit DNNs by 90–100 % , i.e. , the perturbations render nearly all early exits ineffective . In a scenario typical for IoT deployments , where the model is partitioned between edge devices and the cloud , our attack amplifies the latency by 1.5–5× , negating the benefits of model partitioning . We also show that it is possible to craft a universal DeepSloth perturbation , which can slow down the model either on all inputs or on a specific class of inputs . While more constrained , this attack still reduces the efficacy by 5–45 % . Further , we observe that DeepSloth can be effective in some black-box scenarios , where the attacker has limited knowledge about the victim . Finally , we show that a standard defense against adversarial samples—adversarial training—is inadequate against slowdowns . Our results suggest that further research will be required for protecting multi-exit architectures against this emerging security threat . 2 RELATED WORK . Adversarial Examples and Defenses . Prior work on adversarial examples has shown that DNNs are vulnerable to test-time input perturbations ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ; Papernot et al. , 2017 ; Carlini & Wagner , 2017 ; Madry et al. , 2018 ) . An adversary who wants to maximize a model ’ s error on specific test-time samples can introduce human-imperceptible perturbations to these samples . Moreover , an adversary can also exploit a surrogate model for launching the attack and still hurt an unknown victim ( Athalye et al. , 2018 ; Tramèr et al. , 2017b ; Inkawhich et al. , 2019 ) . This transferability leads to adversarial examples in more practical black-box scenarios . Although many defenses ( Kurakin et al. , 2016 ; Xu et al. , 2017 ; Song et al. , 2018 ; Liao et al. , 2018 ; Lecuyer et al. 
, 2019 ) have been proposed against this threat , adversarial training ( AT ) has become the frontrunner ( Madry et al. , 2018 ) . In Sec 5 , we evaluate the vulnerability of multi-exit DNNs to adversarial slowdowns in white-box and black-box scenarios . In Sec 6 , we show that standard AT and its simple adaptation to our perturbations are not sufficient for preventing slowdown attacks . Efficient Input-Adaptive Inference . Recent input-adaptive DNN architectures have brought two seemingly distant goals closer : achieving both high predictive quality and computational efficiency . There are two types of input-adaptive DNNs : adaptive neural networks ( AdNNs ) and multi-exit architectures . During inference , AdNNs ( Wang et al. , 2018 ; Figurnov et al. , 2017 ) dynamically skip a certain part of the model to reduce the number of computations . This mechanism can be used only for ResNet-based architectures as they facilitate skipping within a network . On the other hand , multi-exit architectures ( Teerapittayanon et al. , 2016 ; Huang et al. , 2018 ; Kaya et al. , 2019 ) introduce multiple side branches—or early-exits—to a model . During inference on an input sample , these models can preemptively stop the computation altogether once the stopping criteria are met at one of the branches . Kaya et al . ( 2019 ) have also identified that standard , non-adaptive DNNs are susceptible to overthinking , i.e. , their inability to stop computation leads to inefficient inferences on many inputs . Haque et al . ( 2020 ) presented attacks specifically designed for reducing the energy-efficiency of AdNNs by using adversarial input perturbations . However , our work studies a new threat model in which an adversary causes slowdowns on multi-exit architectures . By imperceptibly perturbing the inputs , our attacker can ( i ) introduce network latency to an infrastructure that utilizes multi-exit architectures and ( ii ) waste the victim ’ s computational resources . To quantify this vulnerability , we define a new metric to measure the impact of adversarial input perturbation on different multi-exit architectures ( Sec 3 ) . In Sec 5 , we also study practical attack scenarios and the transferability of adversarial input perturbations crafted by our attacker . Moreover , we discuss potential defense mechanisms against this vulnerability by proposing a simple adaptation of adversarial training ( Sec 6 ) . To the best of our knowledge , our work is the first systematic study of this new vulnerability . Model Partitioning . Model partitioning has been proposed to bring DNNs to resource-constrained devices ( De Coninck et al. , 2015 ; Taylor et al. , 2018 ) . These schemes split a multi-exit model into sequential components and deploy them in separate endpoints , e.g. , a small , local on-device part and a large , cloud-based part . For bringing DNNs to the Internet of Things ( IoT ) , partitioning is instrumental as it reduces the transmissions between endpoints , a major bottleneck . In Sec 5.1 , in a partitioning scenario , we show that our attack can force excessive transmissions . 3 EXPERIMENTAL SETUP . Datasets . We use two datasets : CIFAR-10 ( Krizhevsky et al. , 2009 ) and Tiny-ImageNet ( Tiny ) . For testing the cross-domain transferability of our attacks , we use the CIFAR-100 dataset . Architectures and Hyper-parameters . 
To demonstrate that the vulnerability to adversarial slowdowns is common among multi-exit architectures , we experiment on two recent techniques : Shallow-Deep Networks ( SDNs ) ( Kaya et al. , 2019 ) and MSDNets ( Huang et al. , 2018 ) . These architectures were designed for different purposes : SDNs are generic and can convert any DNN into a multi-exit model , and MSDNets are custom-designed for efficiency . We evaluate an MSDNet architecture ( 6 exits ) and three SDN architectures , based on VGG-16 ( Simonyan & Zisserman , 2014 ) ( 14 exits ) , ResNet-56 ( He et al. , 2016 ) ( 27 exits ) , and MobileNet ( Howard et al. , 2017 ) ( 14 exits ) . Metrics . We define the early-exit capability ( EEC ) curve of a multi-exit model to indicate the fraction of the test samples that exit early at a specific fraction of the model ’ s full inference cost . Figure 2 shows the EEC curves of our SDNs on Tiny ImageNet , assuming that the computation stops when there is a correct classification at an exit point . For example , the VGG-16-based SDN model can correctly classify ∼50 % of the samples using ∼50 % of its full cost . Note that this stopping criterion is impractical ; in Sec 4 , we will discuss the practical ones . We define the early-exit efficacy , or efficacy in short , to quantify a model ’ s ability to utilize its exit points . The efficacy of a multi-exit model is the area under its EEC curve , estimated via the trapezoidal rule . An ideal efficacy for a model is close to 1 , when the computation stops very early for most of the input samples ; models that do not use their early exits have 0 efficacy . A model with low efficacy generally exhibits a higher latency ; in a partitioned model , the low efficacy will cause more input transmissions to the cloud , and the latency is further amplified by the network round trips . A multi-exit model ’ s efficacy and accuracy are dictated by its stopping criteria , which we discuss in the next section . As for the classification performance , we report the Top-1 accuracy on the test data .
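To make the efficacy metric concrete, here is a minimal sketch of the area-under-EEC-curve computation with the trapezoidal rule. The exit costs and cumulative exit fractions in the example are illustrative placeholders, and the paper's exact handling of the curve endpoints may differ.

```python
import numpy as np

def early_exit_efficacy(exit_costs, exit_fractions):
    """Efficacy: area under the early-exit capability (EEC) curve (trapezoidal rule).

    exit_costs:     increasing fractions of the full inference cost at each exit,
                    e.g. [0.25, 0.5, 0.75, 1.0].
    exit_fractions: cumulative fraction of test samples that have stopped at or
                    before each exit under the chosen stopping criterion.
    """
    x = np.concatenate(([0.0], np.asarray(exit_costs, dtype=float)))
    y = np.concatenate(([0.0], np.asarray(exit_fractions, dtype=float)))
    return float(np.trapz(y, x))

# A model that stops early on most samples has high efficacy (~0.69 here) ...
print(early_exit_efficacy([0.25, 0.5, 0.75, 1.0], [0.5, 0.8, 0.95, 1.0]))
# ... while a model whose samples all reach the final exit has low efficacy (~0.13).
print(early_exit_efficacy([0.25, 0.5, 0.75, 1.0], [0.0, 0.0, 0.0, 1.0]))
```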
This paper studies a new category of adversarial attacks, i.e., attackers that try to slow down multi-exit DNNs using adversarial examples. The paper extends adversarial attacks to perform the slowdown attack and shows that the attacks can slow down multi-exit DNNs by 1.5x - 5.0x. Additionally, the paper experimentally answers many questions, such as (1) the effectiveness of adversarial training against the attack, (2) input-agnostic attacks, and (3) cross-architecture/domain transferability.
SP:cb32c18a6a766894aa23e1f84ea9c38ef21fe023