paper_name (stringlengths 11–170) | text (stringlengths 8.07k–307k) | summary (stringlengths 152–6.16k) | paper_id (stringlengths 43)
---|---|---|---|
Unbiased Learning with State-Conditioned Rewards in Adversarial Imitation Learning | 1 INTRODUCTION . Inverse reinforcement learning ( IRL ) is an algorithm of recovering the ground truth reward function from observed behavior ( Ng & Russell , 2000 ) . IRL algorithms—followed by appropriate reinforcement learning ( RL ) algorithms—can optimize policy through farsighted cumulative value measures in the given system ( Sutton & Barto , 2018 ) ; hence it can usually achieve more satisfying results than mere supervision . While a few studies have investigated recovering reward functions to continuous spaces ( Babes et al. , 2011 ; Levine & Koltun , 2012 ) , IRL algorithms often fail to find the ground-truth reward function in high-dimensional complex domains ( Finn et al. , 2016b ) . The notion of the ground-truth reward requires elaboration since IRL is an ill-posed problem ; there can be numerous solutions to the reward function inducing the same optimal policy ( Ng et al. , 1999 ; Ng & Russell , 2000 ) . Recently , adversarial imitation learning ( AIL ) as a reward acquisition method has shown promising results ( Ho & Ermon , 2016 ) . One of the distinctive strengths of AIL is the scalability through parameterized non-linear functions such as neural networks . The maximum causal entropy principles are widely regarded as the solution when the optimal control problem is modeled as probabilistic inference ( Ziebart et al. , 2010 ; Haarnoja et al. , 2017 ) . In particular , probabilistic modeling using a continuous energy function forms a representation called an energy-based model ( EBM ) . We highlight the following advantages of the energy-based IRL : • It provides a unified framework for stochastic policies to the learning ; most probabilistic models can be viewed as special types of EBMs ( LeCun et al. , 2006 ) . • It rationalizes the stochasticity of behavior ; this provides robustness in the face of uncertain dy- namics ( Ziebart et al. , 2010 ) and a natural way of modeling complex multi-modal distribution . AIL reward functions seem to be exceptions to these arguments—the AIL framework produces distinct types of rewards that are ever-changing and are intended for discriminating joint densities . We argue that these characteristics hinder proper information projection to the optimal decision . This work points out that there remain two kinds of biases in AIL . The established AIL algorithms are typically formalized by the cumulative densities called occupancy measure . We claim that the accumulated measures contain biases that are not related to modeling purposeful behavior , and the formulation is vulnerable to distributional shifts of an MDP . Empirically , they work as dominant noises in training because of the formulation ’ s innate high variance . The other bias is implicit survival or early termination bias caused by reward formulation , which lacks consideration for the terminal states in finite episodes . These unnormalized rewards often provokes sub-optimal behaviors where the agent learns to maliciously make use of temporal-aware strategies . This paper proposes an adversarial IRL method called causal adversarial inverse reinforcement learning ( CAIRL ) . We primarily associate the reward acquisition method with approaches for energy-based RL and IRL algorithms ; the CAIRL reward function can induce complex probabilistic behaviors with multiple modalities . We then show that learning with a dual discriminator architecture provides stepwise , state-conditioned rewards . 
For handling biases induced by finite-horizon , the model postulates the reward function satisfies a Bellman equation , including “ self-looping ” terminal states . As a result , it learns the reward function satisfying the property of EBMs . Noteworthy contributions of this work are 1 ) a model-free , energy-based IRL algorithm that is effective in high-dimensional environments , 2 ) a dual discriminator architecture for recovering a robust state-conditioned reward function , 3 ) an effective approach for handling terminal states , and 4 ) meaningful experiments and comparison studies with state-of-the-art algorithms in various topics . 2 RELATED WORKS . Imitation learning is a fundamental approach for modeling intellectual behavior from an expert at specific tasks ( Pomerleau , 1991 ; Zhang et al. , 2018 ) . For the standard framework called Behavioral Cloning , learning from demonstrations is treated as supervised learning for a trajectory dataset . On the other hand , IRL aims to study the reward function of the underlying system , which characterizes the expert . In this perspective , training a policy with an IRL reward function is a branch of imitation learning , specialized in dealing with sequential decision-making problems by recovering the concise representation of a task ( Ng & Russell , 2000 ; Abbeel & Ng , 2004 ) . For modeling stochastic expert policies , Boltzmann distributions appeared in early IRL research , such as Bayesian IRL , natural gradient IRL , and maximum likelihood IRL ( Ramachandran & Amir , 2007 ; Neu & Szepesvári , 2012 ; Babes et al. , 2011 ) . Notably , maximum entropy IRL ( Ziebart et al. , 2008 ) is explicitly formulated based on the principle of maximum entropy . The framework has also been derived from causal entropy—the derived algorithm can model the purposeful distribution of optimal policy into a reward function ( Ziebart et al. , 2010 ) . Our work draws significant inspirations from these prior works and aims to redeem the perspective of probabilistic causality . Recently , AIL methods ( Ho & Ermon , 2016 ; Fu et al. , 2017 ; Ghasemipour et al. , 2020 ) have shown great success on continuous control benchmarks . Each of the them provides a unique divergence minimization scheme by its architecture . In particular , our work shares major components with AIRL . It has been argued that the algorithm does not recover the energy of the expert policy ( Liu et al. , 2020 ) . We stress that our work introduces essential concepts to draw an energy-based representation of the expert policy correctly . The discriminator design is based on the rich energy-based interpretation of GANs ( Zhao et al. , 2016 ; Azadi et al. , 2018 ; Che et al. , 2020 ) and numerous studies with multiple discriminators ( Chongxuan et al. , 2017 ; Gan et al. , 2017 ; Choi et al. , 2018 ) . The issues of finite-horizon tasks were initially raised in RL during the discussion of time limits in MDP benchmarks ( Pardo et al. , 2017 ; Tucker et al. , 2018 ) . It turned out that the time limits , or even the existence of terminal states , would significantly affect the value learning procedure of RL compared to that generated in infinite horizon MDPs . IRL suffers from the identical problem that reward learning of finite episodes is not really stable for tasks outside of appropriate benchmarks . Kostrikov et al . 
( 2018 ) suggested explicitly adding a self-repeating absorbing state ( Sutton & Barto , 2018 ) after the terminal state ; consequently , AIL discriminators can evaluate the termination frequencies . 3 BACKGROUND . Markov Decision Process ( MDP ) . We define an MDP as a tuple M = ( S , A , P , r , p0 , γ ) where S and A denote the state and action spaces , and γ is the discount factor . The transition distribution P , the deterministic state-action reward function r , and the initial state distribution p0 are unknown . Let τπ and τE be sequences of finite states and actions ( s0 , a0 , . . . , aT−1 , sT ) obtained by a policy π and the expert policy πE , respectively . The term ρπ denotes the occupancy measures derived by Table 1 : The objectives for AIL algorithms in a form as the minimization of statistical divergences . Method Optimized Objective ( Minimization ) Behavioral Cloning EπE [ DKL ( πE ( a|s ) ‖π ( a|s ) ) ] = −EπE [ log π ( a|s ) ] + const GAIL ( Ho & Ermon , 2016 ) Eπ [ DJS ( ρπ ( s , a ) , ρE ( s , a ) ) −H ( π ( ·|s ) ) ] AIRL ( Fu et al. , 2017 ) Eπ [ DKL ( ρπ ( s , a ) ∥∥ρE ( s , a ) ) ] = −Eπ [ log ρE ( s , a ) +H ( ρπ ) ] FAIRL ( Ghasemipour et al. , 2020 ) Eπ [ DKL ( ρE ( s , a ) ∥∥ρπ ( s , a ) ) ] = −EπE [ log ρπ ( s , a ) +H ( ρE ) ] CAIRL ( this work ) Eπ [ DKL ( π ( a|s ) ∥∥πE ( a|s ) ) ] = −Eπ [ r ( s , a ) +H ( π ( ·|s ) ) ] + const π , and is defined as ρπ ( s , a ) = π ( a|s ) ∑∞ t=0 γ t Pr ( st = s|π ) . With a slight abuse of notation , we refer to the occupancy measures of states as ρE ( s ) and ρπ ( s ) . The expectation of π for an arbitrary function c denotes an expected return for infinite-horizon : Eπ [ c ( s , a ) ] , E [ ∑∞ t=0 γ tc ( s , a ) |π ] . Maximum Entropy IRL ( MaxEnt IRL ) . Ziebart ( 2010 ) and Haarnoja et al . ( 2017 ) defined the optimality of stochastic policy with an entropy-regularized RL objective as follows : π ? = arg max π∈Π ∑ t E ( st , at ) ∼ρπ [ r ( st , at ) + αH ( at|st ) ] whereH denotes the causal entropy function.1 If πE is the MaxEnt RL policy , the softmax Bellman optimality equations can be defined by the following recursive logic : Q ? ( st , at ) = r ( st , at ) + γEst+1∼P ( ·|st , at ) [ V ? ( st+1 ) ] V ? ( st ) = Eat∼πE ( ·|st ) [ Q ? ( st , at ) − log πE ( at|st ) ] ( 1 ) MaxEnt IRL algorithms ( Ziebart et al. , 2008 ; 2010 ) are energy-based interpretations of IRL which aim to find behavior abiding the MaxEnt principle . Such algorithms , however , are difficult to be computed when the given spaces are continuous or dynamics are unknown ( Finn et al. , 2016a ) . Adversarial Imitation Learning . Ho & Ermon ( 2016 ) considered adversarial learning as a modelfree , sampling-based approximation to MaxEnt IRL . Instead of exhaustively solving the problem , GAIL performs imitation learning by minimizing the divergence between the state-action occupancy measures from expert and learner through the following logistic objective : min π∈Π max D EπE [ logD ( s , a ) ] + Eπ [ log ( 1−D ( s , a ) ) ] −H ( π ) ( 2 ) where D ∈ ( 0 , 1 ) |S||A| indicates a binary classifier trained to distinguish between τπ and τE . The AIRL discriminator tries to disentangle a reward function that is invariant to dynamics . It takes a particular form : Dθ ( s , a ) = exp ( fθ , ψ ( s , a ) ) / ( exp ( fθ , ψ ( s , a ) ) + πφ ( a|s ) ) . Learning with the AIRL can be considered as the reverse KL divergence between occupancy measures . Ghasemipour et al . 
( 2020 ) proposed the FAIRL algorithm as an adversarial method for the forward KL divergence . | This paper builds on a recent inverse-RL method, AIRL. The authors argue that the rewards learned by AIRL are potentially inefficient since they depend on the ratio of state-action visitation distributions of the expert and the policy. To resolve this, CAIRL derives rewards that excludes these visitation distributions; this is realized in practice by employing another discriminator to approximate the state-visitation distribution ratio. The paper further proposes a mechanism to handle the reward-bias issue in IRL. Experiments with the MuJoCo locomotion tasks show that CAIRL is a competitive algorithm for imitation learning and can also handle dynamics mismatch between the expert and the learner. | SP:08d227e9382cb5eb359462f2e75cca62f3457cf0 |
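For concreteness, the AIRL discriminator form quoted in the background above, Dθ(s,a) = exp(fθ,ψ(s,a)) / (exp(fθ,ψ(s,a)) + πφ(a|s)), can be sketched in PyTorch as follows. This is only an illustrative sketch, not the CAIRL architecture itself; the tensors `f_value` (the learned potential fθ,ψ evaluated on a batch of transitions) and `log_pi` (the current policy's log-probabilities for the same transitions) are assumed to be produced elsewhere.

```python
import torch
import torch.nn.functional as F

def airl_logit(f_value, log_pi):
    # D(s,a) = exp(f(s,a)) / (exp(f(s,a)) + pi(a|s)) = sigmoid(f(s,a) - log pi(a|s)),
    # so the logit of the discriminator is simply f - log pi.
    return f_value - log_pi

def discriminator_loss(f_expert, log_pi_expert, f_policy, log_pi_policy):
    # Binary logistic objective of Eq. 2: expert transitions labelled 1, policy samples 0.
    logits_e = airl_logit(f_expert, log_pi_expert)
    logits_p = airl_logit(f_policy, log_pi_policy)
    return (F.binary_cross_entropy_with_logits(logits_e, torch.ones_like(logits_e))
            + F.binary_cross_entropy_with_logits(logits_p, torch.zeros_like(logits_p)))

def surrogate_reward(f_value, log_pi):
    # Reward handed to the RL step, log D - log(1 - D), which again equals f - log pi.
    return f_value - log_pi
```

Because D = σ(f − log π), both the logistic objective and the induced reward reduce to simple functions of f − log π, which is what makes the magnitude of the learned potential f directly interpretable as an energy.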
Transformers for Modeling Physical Systems | 1 INTRODUCTION . The transformer model ( Vaswani et al. , 2017 ) , built on self-attention , has largely become the stateof-the-art approach for a large set of neural language processing ( NLP ) tasks including language modeling , text classification , question answering , etc . Although more recent transformer work is focused on unsupervised pre-training of extremely large models ( Devlin et al. , 2018 ; Radford et al. , 2019 ; Dai et al. , 2019 ; Liu et al. , 2019 ) , the original transformer model garnered attention due to its ability to out-perform other state-of-the-art methods by learning longer-term dependencies without recurrent connections . Given that the transformer model was originally developed for NLP , nearly all related work has been rightfully confined within this field with only a few exceptions . Here , we focus on the development of transformers to model dynamical systems that can replace otherwise expensive numerical solvers . In other words , we are interested in using transformers to learn the language of physics . The surrogate modeling of physical systems is a research field that has existed for several decades and is a large ongoing effort in scientific machine learning . A surrogate model is defined as an approximate model of a physical phenomenon that is designed to replace an expensive computational solver that would otherwise be needed to resolve the system of interest . The key characteristic of surrogate models is their ability to model a distribution of initial or boundary conditions rather than learning just one solution . This is arguably essential for the justification of training a deep learning model versus using a standard numerical solver . The most tangible applications of surrogates are for optimization , design and inverse problems where many repeated simulations are typically needed . With the growing interest in deep learning , deep neural networks have been used for surrogate modeling a large range of physical systems in recent literature . Standard deep neural network architectures such as auto-regressive ( Mo et al. , 2019 ; Geneva & Zabaras , 2020a ) , residual/Euler ( González-Garcı́a et al. , 1998 ; Sanchez-Gonzalez et al. , 2020 ) , recurrent and LSTM based models ( Mo et al. , 2019 ; Tang et al. , 2020 ; Maulik et al. , 2020 ) have been largely demonstrated to be effective at modeling various physical dynamics . Such models generally rely exclusively on the past time-step to provide complete information on the current state of the system ’ s evolution . Particularly for dynamical systems , present machine learning models lack generalizable time cognisant capabilities to predict multi-time-scale phenomena present in systems including turbulent fluid flow , multi-scale materials modeling , molecular dynamics , chemical processes , etc . Thus currently adopted models struggle to maintain true physical accuracy for long-time 1Code available at : [ URL available after review ] . 2Supplementary videos available at : https : //sites.google.com/view/transformersphysx . predictions . Much work is needed to scale such deep learning models to larger physical systems that are of scientific and industrial interest . This work deviates from this pre-existing literature by investigating the use of transformers for the prediction of physical systems , relying entirely on selfattention to model dynamics . 
In the recent work of Shalova & Oseledets ( 2020 ) , such self-attention models were tested to learn single solutions of several low-dimensional ordinary differential equations . In this work , we propose a physics inspired embedding methodology to model a distribution of dynamical solutions . We will demonstrate our model on high-dimensional partial differential equation problems that far surpass the complexity of past works . To the authors best knowledge , this is the first work to explore transformer NLP architectures for the prediction of physical systems . 2 METHODS . When discussing dynamical systems , we are interested in systems that can be described through a dynamical ordinary or partial differential equation : φt = F ( x , φ ( t , x , η ) , ∇xφ , ∇2xφ , φ · ∇xφ , . . . ) , F : R× Rn → Rn , t ∈ T ⊂ R+ , x ∈ Ω ⊂ Rm , ( 1 ) in which φ ∈ Rn is the solution of this differential equation with parameters η , in the time interval T and spatial domain Ω with a boundary Γ ⊂ Ω . This general form can embody a vast spectrum of physical phenomena including fluid flow and transport processes , mechanics and materials physics , and molecular dynamics . In this work , we are interested in learning the set of solutions for a distribution of initial conditions φ0 ∼ p ( φ0 ) , boundary conditions B ( φ ) ∼ p ( B ) ∀x ∈ Γ or equation parameters η ∼ p ( η ) . This accounts for modeling initial value , boundary value and stochastic problems . We emphasize that this is fundamentally different , more difficult and of greater interest for most scientific applications compared to learning a single solution . To make this problem applicable to the use of transformer models , the continuous solution is discretized in both the spatial and temporal domains such that the solution of the differential equation is Φ = { φ0 , φ1 , . . .φT } ; φi ∈ Rn×d for which φi has been discretized by d points in Ω . We assume an initial state φ0 and that the time interval T is discretized by T time-steps with a time-step size ∆t . Hence , we pose the problem of modeling a dynamical system as a time-series problem . The machine learning methodology has two core components : the transformer for modeling dynamics and the embedding network for projecting physical states into a vector representation . Similar to NLP , the embedding model is trained prior to the transformer . This embedding model is then frozen and the entire data-set is converted to the embedded space in which the transformer is then trained as illustrated in Fig . 1 . During testing , the embedding decoder is used to reconstruct the physical states from the transformer ’ s predictions . 2.1 TRANSFORMER . The transformer model was originally designed with NLP as the sole application with word vector embeddings of a passage of text being the primary input ( Vaswani et al. , 2017 ) . However , recent works have explored using attention mechanisms for different machine learning tasks ( Veličković et al. , 2017 ; Zhang et al. , 2019 ; Fu et al. , 2019 ) and a few investigate the use of transformers for applications outside of the NLP field ( Chen et al. , 2020 ) . This suggests that self-attention and in particular transformer models may work well for any problem that can be posed as a sequence of vectors . In this work , the primary input to the transformer will be an embedded dynamical system , Ξ = { ξ0 , ξ1 , . . . .ξT } , where the embedded state at time-step i is denoted as ξi ∈ Re . 
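As a rough illustration of the two-stage pipeline described above (train and freeze the embedding model, then train the transformer on the embedded sequences), the following PyTorch sketch uses placeholder fully-connected networks for the encoder/decoder pair F and G; the layer widths and state dimensions are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class StateEmbedding(nn.Module):
    """Placeholder for the pair F: R^{n x d} -> R^e and G: R^e -> R^{n x d}."""
    def __init__(self, state_dim, embed_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                                     nn.Linear(256, embed_dim))
        self.decoder = nn.Sequential(nn.Linear(embed_dim, 256), nn.ReLU(),
                                     nn.Linear(256, state_dim))

    def embed(self, phi):        # phi: (batch, T+1, state_dim)
        return self.encoder(phi)

    def recover(self, xi):       # xi: (batch, T+1, embed_dim)
        return self.decoder(xi)

# Stage 1: train the embedding model on physical states, then freeze it.
# Stage 2: convert every trajectory to its embedded sequence Xi and train the transformer on Xi.
embedding = StateEmbedding(state_dim=64 * 64, embed_dim=128)     # illustrative sizes
phi_series = torch.randn(8, 50, 64 * 64)                         # a batch of flattened states
with torch.no_grad():
    xi_series = embedding.embed(phi_series)                      # input sequences for the transformer
```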
Given that we are interested in the prediction of a physical time series , this motivates the usage of a language modeling architecture that is designed for the sequential prediction of words in a body of text . We select the transformer decoder architecture used in the GPT models ( Radford et al. , 2018 ; 2019 ) . Our model follows the GPT-2 architecture based on the implementation in the Hugging Face transformer repository ( Wolf et al. , 2019 ) , but is significantly smaller in size than these modern NLP transformers . This model consists of a stack of transformer decoder layers that use masked attention , as depicted in Fig . 2 . The inputs to the transformer are the embedded representation of the physical system with the sinusoidal positional encoding proposed in the original transformer ( Vaswani et al. , 2017 ) . To train the model , consider a data set of D embedded i.i.d . time-series D = { Ξi } D i=1 for which we can use the standard time-series Markov model ( language modeling ) log-likelihood : LD = D∑ i T∑ j − log p ( ξij |ξij−k , . . . , ξij−1 , θ ) . ( 2 ) k is the context window and θ are the model ’ s parameters . Contrary to the standard NLP approach which poses the likelihood as a softmax over a dictionary of tokens , the likelihood here is taken as a standard Gaussian between the transformer ’ s prediction and the target embedded value resulting in a L2 loss . This is due to the fact that the solution to most physical systems can not be condensed to a discrete finite set making tokenization into a finite dictionary not possible and thus a softmax approach not applicable . Training is the standard auto-regressive method used in GPT ( Radford et al. , 2018 ) , as opposed to the word masking ( Devlin et al. , 2018 ) , constrained to the embedded space . The physical states , φi , have the potential to be very high dimensional thus training the transformer in the lower-dimensional embedded space can significantly lower training costs . 2.2 EMBEDDING MODEL . The second major component of the machine learning methodology is the embedding model responsible for projecting the discretized physical state space into a 1D vector representation . In NLP , the standard approach is to tokenize then embed a finite vocabulary of words , syllables or characters using methods such as n-gram models , Byte Pair Encoding ( Gage , 1994 ) , Word2Vec ( Mikolov et al. , 2013a ; b ) , GloVe ( Pennington et al. , 2014 ) , etc . These methods allow language to be represented by a series of 1D vectors that serve as the input to the transformer . Clearly a finite tokenization and such NLP embeddings are not directly applicable to a physical system , thus we propose our own embedding method designed specifically for dynamical systems . Consider learning the generalized mapping between the system ’ s state space and embedded space : F : Rn×d → Re and G : Re → Rn×d . Naturally , multiple approaches can be used especially if the dimensionality of the embedded space is less than that of the state-space but this is not always the case . The primary approach that we will propose is a Koopman observable embedding which is a technique that can be applied universally to all dynamical systems . Considering the discrete time form of the dynamical system in Eq . 1 , the evolution of the state variables can be abstracted by φi+1 = F ( φi ) for which F is the dynamic map from one time-step to the next . 
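For reference, the objective in Eq. 2 under the Gaussian likelihood reduces to the following shift-by-one L2 loss; `model` here is an assumed causal (masked-attention) decoder stack that maps a sequence of embedded states, with positional encodings, to same-length next-step predictions.

```python
import torch.nn.functional as F

def transformer_step_loss(model, xi_series):
    """xi_series: (batch, T+1, embed_dim) embedded trajectories Xi."""
    inputs, targets = xi_series[:, :-1], xi_series[:, 1:]
    preds = model(inputs)            # the causal mask keeps position j from seeing steps > j
    # Under a unit-variance Gaussian likelihood, maximizing Eq. 2 is equivalent to
    # minimizing an L2 loss, replacing the softmax over a token dictionary used in NLP.
    return F.mse_loss(preds, targets)
```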
The foundation of Koopman theory states that for any dynamical system of this form , there exists an infinite set of state observables , g ( φi ) , that evolve linearly in time such that : Kg ( φi ) , g ◦ F ( φi ) , ( 3 ) where K is known as the Koopman operator ( Koopman , 1931 ) which is time-invariant . Namely , the Koopman observables can evolve in time through continual matrix multiplication of with the Koopman operator : g ( φi+1 ) = Kg ( φi ) , g ( φi+2 ) = K2g ( φi ) , g ( φi+3 ) = K3g ( φi ) , . . . ( 4 ) in which Kn denotes n matrix products ( e.g . K3 = K · K · K ) . Modeling the dynamics of a system through the linear Koopman space can be attractive due to its simplification of the dynamics but also the potential physical insights it brings along with it . Spectral analysis of the Koopman operator can reveal fundamental dynamical modes that drive the system ’ s evolution in time . However , these observables , g ( φi ) , are typically unknown and are theoretically infinite dimensional . Thus Koopman theory can be viewed as a trade off between lifting the state space into observable space with more complex states but with simpler dynamics . For practical application , the Koopman operator and observables are finitely approximated . In recent years , various machine learning based approaches have been proposed to learn both the Koopman operator and observables approximated in a finite space for modeling , control and dynamical mode analysis ( Takeishi et al. , 2017 ; Li et al. , 2017 ; Lusch et al. , 2018 ; Korda & Mezić , 2018 ; Otto & Rowley , 2019 ; Korda et al. , 2020 ; Mezic , 2020 ) . While deep learning methods have enabled greater success with discovering Koopman observables and operators ( Lusch et al. , 2018 ; Otto & Rowley , 2019 ) , applications have still been limited to fairly simple systems and do not work for long time prediction . This is likely due to the approximation of the finite-dimensional Koopman observables , limited data and complete dependence on the discovered Koopman operator K to model the dynamics . Suggesting the prediction of a system through a single linear transform clearly has significant limitations and is fundamentally a naive approach from a machine learning perspective . In this work , we propose using approximate Koopman observations as a methodology to develop embeddings for the transformer model such that F ( φi ) , g ( φi ) . As seen in Fig . 3 , the embedding model follows a standard encoder-decoder model with the middle latent variables being the Koopman observables . Embedding model architectures for each numerical example are provided in Appendix A . We introduce a learnable Koopman operator which takes the form of a banded matrix that is symmetrical about the diagonal to encourage the discovery of dominate dynamical modes rather than high-frequency oscillations ( Lusch et al. , 2018 ; Otto & Rowley , 2019 ) . This learned Koopman operator is disposed of once training of the embedding model is complete . Given the data set of physical state time-series , DΦ = { Φi } D i , the Koopman embedding model is trained with the following loss : LDΦ = D∑ i=1 T∑ j=0 λ0MSE ( φij , G ◦ F ( φij ) ) ︸ ︷︷ ︸ Reconstruction +λ1MSE ( φij , G ◦ KjF ( φi0 ) ) ︸ ︷︷ ︸ Dynamics +λ2 ‖K‖22︸ ︷︷ ︸ Decay . ( 5 ) This loss function consists of three components : the first is a reconstruction loss which ensures a consistent mapping to and from the embedded representation . 
The second is the Koopman dynamics loss which pushes ξj to follow linear dynamics resulting in time-steps of similar dynamics to have similar embeddings . The last term decays the Koopman operator ’ s parameters to help force the model to discover lower-dimensional dynamical modes . In reference to traditional NLP embeddings , we believe our Koopman observable embedding has a motivation similar to Word2Vec ( Mikolov et al. , 2013a ) as well as more recent embedding methods such as context2vec ( Melamud et al. , 2016 ) , ELMo ( Peters et al. , 2018 ) , etc . Namely , these methods are based on the principle of context : words that are contextually related to each other have a similar embedding . The Koopman embedding has a similar objective encouraging physical realizations containing similar time-invariant dynamical modes to also have similar embeddings . Hence , our goal with the embedding model is to not find the true Koopman observables or operator , but rather leverage Koopman to enforce physical context . | The paper proposes to use transformers for modelling dynamical systems. The transformer is combined with a linear dynamical system to enforce Koopman features and is trained using the reconstruction and prediction loss. Finally, the proposed algorithm is applied to the different tasks with 1, 2 & 3 dimensions. On each simulated task the proposed algorithm marginally outperforms sufficient baselines. | SP:8ab44295af08f56cc4486f603e7b3c8167d156ce |
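The embedding objective of Eq. 5 above can be written out as the following sketch. It is a hedged approximation rather than the paper's implementation: the loss weights, the per-trajectory loop, and the squared-Frobenius form of the decay term are illustrative choices, and the encoder, decoder, and learnable Koopman operator `K` are assumed to be defined elsewhere.

```python
import torch

def koopman_embedding_loss(encoder, decoder, K, phi_series, lam=(1.0, 1.0, 1e-4)):
    """Sketch of Eq. 5 for one trajectory; phi_series: (T+1, state_dim)."""
    l0, l1, l2 = lam
    xi = encoder(phi_series)
    recon = ((decoder(xi) - phi_series) ** 2).mean()            # reconstruction term

    # Dynamics term: propagate the initial observable with repeated applications of K
    # (g(phi_{j+1}) = K g(phi_j)) and decode each prediction back to the state space.
    dyn, g = 0.0, encoder(phi_series[:1])
    for j in range(phi_series.shape[0]):
        dyn = dyn + ((decoder(g) - phi_series[j:j + 1]) ** 2).mean()
        g = g @ K.T
    dyn = dyn / phi_series.shape[0]

    decay = (K ** 2).sum()                                      # weight decay on the operator
    return l0 * recon + l1 * dyn + l2 * decay
```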
Transformers for Modeling Physical Systems | 1 INTRODUCTION . The transformer model ( Vaswani et al. , 2017 ) , built on self-attention , has largely become the stateof-the-art approach for a large set of neural language processing ( NLP ) tasks including language modeling , text classification , question answering , etc . Although more recent transformer work is focused on unsupervised pre-training of extremely large models ( Devlin et al. , 2018 ; Radford et al. , 2019 ; Dai et al. , 2019 ; Liu et al. , 2019 ) , the original transformer model garnered attention due to its ability to out-perform other state-of-the-art methods by learning longer-term dependencies without recurrent connections . Given that the transformer model was originally developed for NLP , nearly all related work has been rightfully confined within this field with only a few exceptions . Here , we focus on the development of transformers to model dynamical systems that can replace otherwise expensive numerical solvers . In other words , we are interested in using transformers to learn the language of physics . The surrogate modeling of physical systems is a research field that has existed for several decades and is a large ongoing effort in scientific machine learning . A surrogate model is defined as an approximate model of a physical phenomenon that is designed to replace an expensive computational solver that would otherwise be needed to resolve the system of interest . The key characteristic of surrogate models is their ability to model a distribution of initial or boundary conditions rather than learning just one solution . This is arguably essential for the justification of training a deep learning model versus using a standard numerical solver . The most tangible applications of surrogates are for optimization , design and inverse problems where many repeated simulations are typically needed . With the growing interest in deep learning , deep neural networks have been used for surrogate modeling a large range of physical systems in recent literature . Standard deep neural network architectures such as auto-regressive ( Mo et al. , 2019 ; Geneva & Zabaras , 2020a ) , residual/Euler ( González-Garcı́a et al. , 1998 ; Sanchez-Gonzalez et al. , 2020 ) , recurrent and LSTM based models ( Mo et al. , 2019 ; Tang et al. , 2020 ; Maulik et al. , 2020 ) have been largely demonstrated to be effective at modeling various physical dynamics . Such models generally rely exclusively on the past time-step to provide complete information on the current state of the system ’ s evolution . Particularly for dynamical systems , present machine learning models lack generalizable time cognisant capabilities to predict multi-time-scale phenomena present in systems including turbulent fluid flow , multi-scale materials modeling , molecular dynamics , chemical processes , etc . Thus currently adopted models struggle to maintain true physical accuracy for long-time 1Code available at : [ URL available after review ] . 2Supplementary videos available at : https : //sites.google.com/view/transformersphysx . predictions . Much work is needed to scale such deep learning models to larger physical systems that are of scientific and industrial interest . This work deviates from this pre-existing literature by investigating the use of transformers for the prediction of physical systems , relying entirely on selfattention to model dynamics . 
In the recent work of Shalova & Oseledets ( 2020 ) , such self-attention models were tested to learn single solutions of several low-dimensional ordinary differential equations . In this work , we propose a physics inspired embedding methodology to model a distribution of dynamical solutions . We will demonstrate our model on high-dimensional partial differential equation problems that far surpass the complexity of past works . To the authors best knowledge , this is the first work to explore transformer NLP architectures for the prediction of physical systems . 2 METHODS . When discussing dynamical systems , we are interested in systems that can be described through a dynamical ordinary or partial differential equation : φt = F ( x , φ ( t , x , η ) , ∇xφ , ∇2xφ , φ · ∇xφ , . . . ) , F : R× Rn → Rn , t ∈ T ⊂ R+ , x ∈ Ω ⊂ Rm , ( 1 ) in which φ ∈ Rn is the solution of this differential equation with parameters η , in the time interval T and spatial domain Ω with a boundary Γ ⊂ Ω . This general form can embody a vast spectrum of physical phenomena including fluid flow and transport processes , mechanics and materials physics , and molecular dynamics . In this work , we are interested in learning the set of solutions for a distribution of initial conditions φ0 ∼ p ( φ0 ) , boundary conditions B ( φ ) ∼ p ( B ) ∀x ∈ Γ or equation parameters η ∼ p ( η ) . This accounts for modeling initial value , boundary value and stochastic problems . We emphasize that this is fundamentally different , more difficult and of greater interest for most scientific applications compared to learning a single solution . To make this problem applicable to the use of transformer models , the continuous solution is discretized in both the spatial and temporal domains such that the solution of the differential equation is Φ = { φ0 , φ1 , . . .φT } ; φi ∈ Rn×d for which φi has been discretized by d points in Ω . We assume an initial state φ0 and that the time interval T is discretized by T time-steps with a time-step size ∆t . Hence , we pose the problem of modeling a dynamical system as a time-series problem . The machine learning methodology has two core components : the transformer for modeling dynamics and the embedding network for projecting physical states into a vector representation . Similar to NLP , the embedding model is trained prior to the transformer . This embedding model is then frozen and the entire data-set is converted to the embedded space in which the transformer is then trained as illustrated in Fig . 1 . During testing , the embedding decoder is used to reconstruct the physical states from the transformer ’ s predictions . 2.1 TRANSFORMER . The transformer model was originally designed with NLP as the sole application with word vector embeddings of a passage of text being the primary input ( Vaswani et al. , 2017 ) . However , recent works have explored using attention mechanisms for different machine learning tasks ( Veličković et al. , 2017 ; Zhang et al. , 2019 ; Fu et al. , 2019 ) and a few investigate the use of transformers for applications outside of the NLP field ( Chen et al. , 2020 ) . This suggests that self-attention and in particular transformer models may work well for any problem that can be posed as a sequence of vectors . In this work , the primary input to the transformer will be an embedded dynamical system , Ξ = { ξ0 , ξ1 , . . . .ξT } , where the embedded state at time-step i is denoted as ξi ∈ Re . 
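Purely for illustration, one way to produce a single discretized trajectory Φ = {φ0, ..., φT} of the kind described above is shown below; the Lorenz system is used only as a convenient stand-in for Eq. 1 and is not claimed to be one of the paper's test problems.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, phi, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = phi
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

T, dt = 256, 0.01
phi0 = np.random.uniform(-10.0, 10.0, size=3)                    # one draw from p(phi_0)
sol = solve_ivp(lorenz, (0.0, T * dt), phi0, t_eval=np.arange(T + 1) * dt)
Phi = sol.y.T                                                    # shape (T+1, n): {phi_0, ..., phi_T}
```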
Given that we are interested in the prediction of a physical time series , this motivates the usage of a language modeling architecture that is designed for the sequential prediction of words in a body of text . We select the transformer decoder architecture used in the GPT models ( Radford et al. , 2018 ; 2019 ) . Our model follows the GPT-2 architecture based on the implementation in the Hugging Face transformer repository ( Wolf et al. , 2019 ) , but is significantly smaller in size than these modern NLP transformers . This model consists of a stack of transformer decoder layers that use masked attention , as depicted in Fig . 2 . The inputs to the transformer are the embedded representation of the physical system with the sinusoidal positional encoding proposed in the original transformer ( Vaswani et al. , 2017 ) . To train the model , consider a data set of D embedded i.i.d . time-series D = { Ξi } D i=1 for which we can use the standard time-series Markov model ( language modeling ) log-likelihood : LD = D∑ i T∑ j − log p ( ξij |ξij−k , . . . , ξij−1 , θ ) . ( 2 ) k is the context window and θ are the model ’ s parameters . Contrary to the standard NLP approach which poses the likelihood as a softmax over a dictionary of tokens , the likelihood here is taken as a standard Gaussian between the transformer ’ s prediction and the target embedded value resulting in a L2 loss . This is due to the fact that the solution to most physical systems can not be condensed to a discrete finite set making tokenization into a finite dictionary not possible and thus a softmax approach not applicable . Training is the standard auto-regressive method used in GPT ( Radford et al. , 2018 ) , as opposed to the word masking ( Devlin et al. , 2018 ) , constrained to the embedded space . The physical states , φi , have the potential to be very high dimensional thus training the transformer in the lower-dimensional embedded space can significantly lower training costs . 2.2 EMBEDDING MODEL . The second major component of the machine learning methodology is the embedding model responsible for projecting the discretized physical state space into a 1D vector representation . In NLP , the standard approach is to tokenize then embed a finite vocabulary of words , syllables or characters using methods such as n-gram models , Byte Pair Encoding ( Gage , 1994 ) , Word2Vec ( Mikolov et al. , 2013a ; b ) , GloVe ( Pennington et al. , 2014 ) , etc . These methods allow language to be represented by a series of 1D vectors that serve as the input to the transformer . Clearly a finite tokenization and such NLP embeddings are not directly applicable to a physical system , thus we propose our own embedding method designed specifically for dynamical systems . Consider learning the generalized mapping between the system ’ s state space and embedded space : F : Rn×d → Re and G : Re → Rn×d . Naturally , multiple approaches can be used especially if the dimensionality of the embedded space is less than that of the state-space but this is not always the case . The primary approach that we will propose is a Koopman observable embedding which is a technique that can be applied universally to all dynamical systems . Considering the discrete time form of the dynamical system in Eq . 1 , the evolution of the state variables can be abstracted by φi+1 = F ( φi ) for which F is the dynamic map from one time-step to the next . 
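For reference, the sinusoidal positional encoding mentioned above (Vaswani et al., 2017), which is added to the embedded states before they enter the decoder stack, can be computed as follows; an even embedding dimension is assumed in this sketch.

```python
import math
import torch

def sinusoidal_positions(seq_len, dim):
    """Standard sinusoidal positional encoding, shape (seq_len, dim), broadcast over the batch."""
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)
    freq = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32) * (-math.log(10000.0) / dim))
    pe = torch.zeros(seq_len, dim)
    pe[:, 0::2] = torch.sin(pos * freq)
    pe[:, 1::2] = torch.cos(pos * freq)
    return pe
```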
The foundation of Koopman theory states that for any dynamical system of this form , there exists an infinite set of state observables , g ( φi ) , that evolve linearly in time such that : Kg ( φi ) , g ◦ F ( φi ) , ( 3 ) where K is known as the Koopman operator ( Koopman , 1931 ) which is time-invariant . Namely , the Koopman observables can evolve in time through continual matrix multiplication of with the Koopman operator : g ( φi+1 ) = Kg ( φi ) , g ( φi+2 ) = K2g ( φi ) , g ( φi+3 ) = K3g ( φi ) , . . . ( 4 ) in which Kn denotes n matrix products ( e.g . K3 = K · K · K ) . Modeling the dynamics of a system through the linear Koopman space can be attractive due to its simplification of the dynamics but also the potential physical insights it brings along with it . Spectral analysis of the Koopman operator can reveal fundamental dynamical modes that drive the system ’ s evolution in time . However , these observables , g ( φi ) , are typically unknown and are theoretically infinite dimensional . Thus Koopman theory can be viewed as a trade off between lifting the state space into observable space with more complex states but with simpler dynamics . For practical application , the Koopman operator and observables are finitely approximated . In recent years , various machine learning based approaches have been proposed to learn both the Koopman operator and observables approximated in a finite space for modeling , control and dynamical mode analysis ( Takeishi et al. , 2017 ; Li et al. , 2017 ; Lusch et al. , 2018 ; Korda & Mezić , 2018 ; Otto & Rowley , 2019 ; Korda et al. , 2020 ; Mezic , 2020 ) . While deep learning methods have enabled greater success with discovering Koopman observables and operators ( Lusch et al. , 2018 ; Otto & Rowley , 2019 ) , applications have still been limited to fairly simple systems and do not work for long time prediction . This is likely due to the approximation of the finite-dimensional Koopman observables , limited data and complete dependence on the discovered Koopman operator K to model the dynamics . Suggesting the prediction of a system through a single linear transform clearly has significant limitations and is fundamentally a naive approach from a machine learning perspective . In this work , we propose using approximate Koopman observations as a methodology to develop embeddings for the transformer model such that F ( φi ) , g ( φi ) . As seen in Fig . 3 , the embedding model follows a standard encoder-decoder model with the middle latent variables being the Koopman observables . Embedding model architectures for each numerical example are provided in Appendix A . We introduce a learnable Koopman operator which takes the form of a banded matrix that is symmetrical about the diagonal to encourage the discovery of dominate dynamical modes rather than high-frequency oscillations ( Lusch et al. , 2018 ; Otto & Rowley , 2019 ) . This learned Koopman operator is disposed of once training of the embedding model is complete . Given the data set of physical state time-series , DΦ = { Φi } D i , the Koopman embedding model is trained with the following loss : LDΦ = D∑ i=1 T∑ j=0 λ0MSE ( φij , G ◦ F ( φij ) ) ︸ ︷︷ ︸ Reconstruction +λ1MSE ( φij , G ◦ KjF ( φi0 ) ) ︸ ︷︷ ︸ Dynamics +λ2 ‖K‖22︸ ︷︷ ︸ Decay . ( 5 ) This loss function consists of three components : the first is a reconstruction loss which ensures a consistent mapping to and from the embedded representation . 
The second is the Koopman dynamics loss which pushes ξj to follow linear dynamics resulting in time-steps of similar dynamics to have similar embeddings . The last term decays the Koopman operator ’ s parameters to help force the model to discover lower-dimensional dynamical modes . In reference to traditional NLP embeddings , we believe our Koopman observable embedding has a motivation similar to Word2Vec ( Mikolov et al. , 2013a ) as well as more recent embedding methods such as context2vec ( Melamud et al. , 2016 ) , ELMo ( Peters et al. , 2018 ) , etc . Namely , these methods are based on the principle of context : words that are contextually related to each other have a similar embedding . The Koopman embedding has a similar objective encouraging physical realizations containing similar time-invariant dynamical modes to also have similar embeddings . Hence , our goal with the embedding model is to not find the true Koopman observables or operator , but rather leverage Koopman to enforce physical context . | The paper proposes applying transformer models to modeling physical systems. The state at each time step is embedded into a continuous vector using a pretrained encoder-decoder model based on Koopman’s theory. The experiments are performed on three physical systems and generally show that (1) a transformer model outperforms alternative machine learning methods, (2) a transformer model with the proposed embedding outperforms transformer models with alternative embeddings based on autoencoders or PCA, and (3) more transformer layers help (but only slightly). | SP:8ab44295af08f56cc4486f603e7b3c8167d156ce |
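At test time, as described above, predictions are rolled out autoregressively in the embedded space and then mapped back to physical states by the embedding decoder. A minimal sketch, with the trained causal transformer `model` and the frozen decoder G assumed given:

```python
import torch

@torch.no_grad()
def rollout(model, decoder, xi_context, steps, context_window):
    """xi_context: (1, T0, embed_dim) embeddings of the observed initial states."""
    xi = xi_context
    for _ in range(steps):
        window = xi[:, -context_window:]        # keep at most k previous embedded states
        next_xi = model(window)[:, -1:]         # the last position predicts the next step
        xi = torch.cat([xi, next_xi], dim=1)
    return decoder(xi)                          # physical states reconstructed from predictions
```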
Layer-adaptive Sparsity for the Magnitude-based Pruning | 1 INTRODUCTION . Neural network pruning is an art of removing “ unimportant weights ” from a model , with an intention to meet practical constraints ( Han et al. , 2015 ) , mitigate overfitting ( Hanson & Pratt , 1988 ) , enhance interpretability ( Mozer & Smolensky , 1988 ) , or deepen our understanding on neural network training ( Frankle & Carbin , 2019 ) . Yet , the importance of weight is still a vaguely defined notion , and thus a wide range of pruning algorithms based on various importance scores has been proposed . One popular approach is to estimate the loss increment from removing the target weight to use as an importance score , e.g. , Hessian-based approximations ( LeCun et al. , 1989 ; Hassibi & Stork , 1993 ; Dong et al. , 2017 ) , coreset-based estimates ( Baykal et al. , 2019 ; Mussay et al. , 2020 ) , convex optimization ( Aghasi et al. , 2017 ) , and operator distortion ( Park et al. , 2020 ) . Other approaches include on-the-fly1 regularization ( Louizos et al. , 2018 ; Xiao et al. , 2019 ) , Bayesian methods ( Molchanov et al. , 2017 ; Louizos et al. , 2017 ; Dai et al. , 2018 ) , and reinforcement learning ( Lin et al. , 2017 ) . Recent discoveries ( Gale et al. , 2019 ; Evci et al. , 2020 ) demonstrate that , given an appropriate choice of layerwise sparsity , simply pruning on the basis of weight magnitude yields a surprisingly powerful unstructured pruning scheme . For instance , Gale et al . ( 2019 ) evaluates the performance of magnitudebased pruning ( MP ; Han et al . ( 2015 ) ; Zhu & Gupta ( 2018 ) ) with an extensive hyperparameter tuning , and shows that MP achieves comparable or better performance than state-of-the-art pruning algorithms that use more complicated importance scores . To arrive at such a performance level , the authors introduce the following handcrafted heuristic : Leave the first convolutional layer fully dense , and prune up to only 80 % of weights from the last fully-connected layer ; the heuristic is motivated by the sparsity pattern from other state-of-the-art algorithms ( Molchanov et al. , 2017 ) and additional experimental/architectural observations . Unfortunately , there is an apparent lack of consensus on “ how to choose the layerwise sparsity ” for the magnitude-based pruning . Instead , the layerwise sparsity is selected mostly on an algorithm-byalgorithm basis . One common method is the global MP criteria ( see , e.g. , Morcos et al . ( 2019 ) ) , ∗Work done at KAIST 1i.e. , simultaneously training and pruning where the layerwise sparsity is automatically determined by using a single global threshold on weight magnitude . Lin et al . ( 2020 ) propose a magnitude-based pruning algorithm using a feedback signal , using a heuristic rule of keeping the last fully connected layer dense . A recent work by Evci et al . ( 2020 ) proposes a magnitude-based dynamic sparse training method , adopting layerwise sparsity inspired from the network science approach toward neural network pruning ( Mocanu et al. , 2018 ) . Contributions . In search of a “ go-to ” layerwise sparsity for MP , we take a model-level distortion minimization perspective towards MP . We build on the observation of Dong et al . ( 2017 ) ; Park et al . ( 2020 ) that each neural network layer can be viewed as an operator , and MP is a choice that incurs minimum ` 2 distortion to the operator output ( given a worst-case input signal ) . 
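A small numerical illustration of the distortion-minimization view stated above: among all masks that remove a fixed number of entries from a weight matrix, magnitude pruning minimizes the Frobenius norm of the induced perturbation, the usual surrogate (an upper bound) for the worst-case ℓ2 output distortion over unit-norm inputs. The example below is illustrative only and brute-forces a tiny matrix.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
k = 6                                            # number of entries to prune

# Among all masks removing exactly k entries, find the one minimizing the Frobenius
# norm of W - W_hat, and compare it with the magnitude-based choice.
best = min(itertools.combinations(range(W.size), k),
           key=lambda idx: np.linalg.norm(W.flatten()[list(idx)]))
mp_choice = np.argsort(np.abs(W.flatten()))[:k]
print(sorted(best) == sorted(mp_choice.tolist()))   # True: MP minimizes the relaxed distortion
```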
We bring the perspective further to examine the “ model-level ” distortion incurred by pruning a layer ; preceding layers scale the input signal to the target layer , and succeeding layers scale the output distortion . Based on the distortion minimization framework , we propose a novel importance score for global pruning , coined LAMP ( Layer-Adaptive Magnitude-based Pruning ) . The LAMP score is a rescaled weight magnitude , approximating the model-level distortion from pruning . Importantly , the LAMP score is designed to approximate the distortion on the model being pruned , i.e. , all connections with a smaller LAMP score than the target weight is already pruned . Global pruning2 with the LAMP score is equivalent to the MP with an automatically determined layerwise sparsity . At the same time , pruning with LAMP keeps the benefits of MP intact ; the LAMP score is efficiently computable , hyperparameter-free , and does not rely on any model-specific knowledge . We validate the effectiveness of LAMP under a diverse experimental setup , encompassing various convolutional neural network architectures ( VGG-16 , ResNet-18/34 , DenseNet-121 , EfficientNet-B0 ) and various image datasets ( CIFAR-10/100 , SVHN , Restricted ImageNet ) . In all considered setups , LAMP consistently outperforms the baseline layerwise sparsity selection schemes . We also perform additional ablation studies with one-shot pruning and weight-rewinding setup to confirm that LAMP performs reliably well under a wider range of scenarios . Organization . In Section 2 , we briefly describe existing methods to choose the layerwise sparsity for magnitude-based pruning . In Section 3 , we formally introduce LAMP and describe how the ` 2 distortion minimization perspective motivates the LAMP score . In Section 4 , we empirically validate the effectiveness and versatility of LAMP . In Section 5 , we take a closer look at the layerwise sparsity discovered by LAMP and compare with baseline methods and previously proposed handcrafted heuristics . In Section 6 , we summarize our findings and discuss future directions . Appendices include the experimental details ( Appendix A ) , complexity analysis ( Appendix B ) , derivation of the LAMP score ( Appendix C ) , additional experiments on Transformer ( Appendix D ) , and detailed experimental results with standard deviations ( Appendix E ) . 2 RELATED WORK . This section gives a ( necessarily non-exhaustive ) survey of various layerwise sparsity selection schemes used for magnitude-based pruning algorithms . Magnitude-based pruning of neural networks dates back to the early works of Janowsky ( 1989 ) ; LeCun et al . ( 1989 ) , and has been actively studied 2i.e. , using a global threshold for LAMP score again under the context of model compression since the work of Han et al . ( 2015 ) . In Han et al . ( 2015 ) , the authors propose an iterative pruning scheme where the layerwise pruning threshold is determined by the standard-deviation-based heuristic . Zhu & Gupta ( 2018 ) propose a uniform pruning algorithm with a carefully tuned gradual pruning schedule combined with weight re-growths . Gale et al . ( 2019 ) refine the algorithm by adding a heuristic constraint of keeping the first convolutional layer fully dense and keeping at least 20 % of the weights surviving in the last fully-connected layer . MP has also been widely used in the context of “ pruning at initialization. 
” Frankle & Carbin ( 2019 ) combine MP with weight rewinding to discover efficiently trainable subnetworks : for small nets , the authors employ uniform layerwise sparsity , but use different rates for convolutional layers and fully-connected layers ( with an added heuristic on the last fully-connected layer ) ; for larger nets , authors use global MP . Morcos et al . ( 2019 ) consider transferring the “ winning ticket ” initializations , using the global MP . Evci et al . ( 2020 ) proposes a training scheme for sparsely initialized neural networks , where the layerwise sparsity is given by the Erdős-Rényi kernel method ; the method generalizes the scheme initially proposed by Mocanu et al . ( 2018 ) to convolutional neural networks . We note that there is a line of results on the trainable layerwise sparsity ; we refer the interested readers to the recent work of Kusupati et al . ( 2020 ) for a concise survey . However , we do not make direct comparisons to these methods , as our primary purpose is to deliver an easy-to-use layerwise sparsity selection scheme without requiring the modification of training objective , or an extensive hyperparameter tuning . We also note that we focus on the unstructured sparsity . While such unstructured pruning techniques have been considered less practical ( compared to structured pruning ) , several recent breakthroughs provide promising methods to bridge this gap ; see Gale et al . ( 2020 ) ; Elsen et al . ( 2020 ) . 3 LAYER-ADAPTIVE MAGNITUDE-BASED PRUNING ( LAMP ) . We now formally introduce the Layer-Adaptive Magnitude-based Pruning ( LAMP ) score . Consider a depth-d feedforward neural network with weight tensors W ( 1 ) , . . . , W ( d ) associated with each fully-connected/convolutional layer . For fully-connected layers , corresponding weight tensors are twodimensional matrices , and for 2d convolutional layers , corresponding tensors are four-dimensional . To give a unified definition of the LAMP score for both fully-connected and convolutional layers , we assume that each weight tensor is unrolled ( or flattened ) to a one-dimensional vector . For each of these unrolled vectors , we assume ( without loss of generality ) that the weights are sorted in an ascending order according to the given index map , i.e. , |W [ u ] | ≤ |W [ v ] | holds whenever u < v , where W [ u ] denote the entry of W mapped by the index u.3 The LAMP score for the u-th index of the weight tensor W is then defined as score ( u ; W ) : = ( W [ u ] ) 2∑ v≥u ( W [ v ] ) 2 . ( 1 ) Informally , the LAMP score ( Eq . 1 ) measures the relative importance of the target connection among all surviving connections belonging to the same layer , where the connections with a smaller weight magnitude ( in the same layer ) have already been pruned . As a consequence , two connections with identical weight magnitudes have different LAMP scores , depending on the index map being used . Once the LAMP score is computed , we globally prune the connections with smallest LAMP scores until the desired global sparsity constraint is met ; the procedure is equivalent to performing MP with an automatically selected layerwise sparsity . To see this , it suffices to check that ( W [ u ] ) 2 > ( W [ v ] ) 2 ⇒ score ( u ; W ) > score ( v ; W ) ( 2 ) holds for any weight tensor W and a pair of indices u , v. From the definition of the LAMP score ( Eq . 1 ) , it is easy to see that the logical relation ( 2 ) holds . Indeed , for the connection with a larger weight magnitude , the denominator of Eq . 
1 is smaller , while the numerator is larger . We note that the global pruning with respect to the LAMP score is not identical to the global pruning with respect to the magnitude score |W [ u ] | ( or ( W [ u ] ) 2 , equivalently ) . Indeed , in each layer , there 3This “ order ” among weights is required to handle the weights with equal magnitude . exists exactly one connection with the LAMP score of 1 , which is the maximum LAMP score possible . In other words , LAMP keeps at least one surviving connection in each layer . The same does not hold for the global pruning with respect to the weight magnitude score . We also note that the LAMP score is easy-to-use . Similar to the vanilla MP , the LAMP score does not have any hyperparameter to be tuned , and is easily implementable via elementary tensor operations . Furthermore , the LAMP score can be computed with only a minimal computational overhead ; the sorting of squared weight magnitudes required to compute the denominator in Eq . 1 is already a part of typical vanilla MP algorithms . For more discussions , see Appendix B . | The paper proposes LAMP, an importance score for unstructured pruning that incorporates layerwise statistics such that the resultant scores for each connection can be compared globally, cutting down on the hyperparameter space for magnitude pruning from the relatively standard practice of requiring hand-specified layerwise pruning rates. LAMP is motivated with a distortion analysis: LAMP is shown to be equivalent to minimizing an upper bound on the supremum of the change in model predictions of unit vectors. LAMP is compared against layerwise pruning rates obtained by standard uniform layerwise and global pruning, along with the less standard Erdos-Renyi kernel method, showing that LAMP can achieve higher accuracy for equivalent pruning rates in a specific experimental setup across several different networks. | SP:08dbd0677de078598537324299a1495f34aa5bc2 |
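A minimal PyTorch sketch of the LAMP score in Eq. 1 and of global pruning on that score; ties are broken arbitrarily by the sort rather than by an explicit index map, and the mask construction is simplified for clarity.

```python
import torch

def lamp_scores(weight):
    """LAMP score of Eq. 1: squared magnitude over the trailing sum of squared magnitudes."""
    w2 = weight.flatten() ** 2
    sorted_w2, order = torch.sort(w2)                          # ascending magnitude order
    trailing = torch.flip(torch.cumsum(torch.flip(sorted_w2, [0]), 0), [0])
    scores = torch.zeros_like(w2)
    scores[order] = sorted_w2 / trailing                       # score(u; W) = W[u]^2 / sum_{v>=u} W[v]^2
    return scores.view_as(weight)

def lamp_prune(weights, sparsity):
    """Global pruning on LAMP scores = MP with an automatically chosen layerwise sparsity."""
    scores = [lamp_scores(w) for w in weights]
    flat = torch.cat([s.flatten() for s in scores])
    k = int(sparsity * flat.numel())
    threshold = torch.kthvalue(flat, k).values if k > 0 else flat.min() - 1
    return [(s > threshold).float() for s in scores]
```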
Layer-adaptive Sparsity for the Magnitude-based Pruning | 1 INTRODUCTION . Neural network pruning is an art of removing “ unimportant weights ” from a model , with an intention to meet practical constraints ( Han et al. , 2015 ) , mitigate overfitting ( Hanson & Pratt , 1988 ) , enhance interpretability ( Mozer & Smolensky , 1988 ) , or deepen our understanding on neural network training ( Frankle & Carbin , 2019 ) . Yet , the importance of weight is still a vaguely defined notion , and thus a wide range of pruning algorithms based on various importance scores has been proposed . One popular approach is to estimate the loss increment from removing the target weight to use as an importance score , e.g. , Hessian-based approximations ( LeCun et al. , 1989 ; Hassibi & Stork , 1993 ; Dong et al. , 2017 ) , coreset-based estimates ( Baykal et al. , 2019 ; Mussay et al. , 2020 ) , convex optimization ( Aghasi et al. , 2017 ) , and operator distortion ( Park et al. , 2020 ) . Other approaches include on-the-fly1 regularization ( Louizos et al. , 2018 ; Xiao et al. , 2019 ) , Bayesian methods ( Molchanov et al. , 2017 ; Louizos et al. , 2017 ; Dai et al. , 2018 ) , and reinforcement learning ( Lin et al. , 2017 ) . Recent discoveries ( Gale et al. , 2019 ; Evci et al. , 2020 ) demonstrate that , given an appropriate choice of layerwise sparsity , simply pruning on the basis of weight magnitude yields a surprisingly powerful unstructured pruning scheme . For instance , Gale et al . ( 2019 ) evaluates the performance of magnitudebased pruning ( MP ; Han et al . ( 2015 ) ; Zhu & Gupta ( 2018 ) ) with an extensive hyperparameter tuning , and shows that MP achieves comparable or better performance than state-of-the-art pruning algorithms that use more complicated importance scores . To arrive at such a performance level , the authors introduce the following handcrafted heuristic : Leave the first convolutional layer fully dense , and prune up to only 80 % of weights from the last fully-connected layer ; the heuristic is motivated by the sparsity pattern from other state-of-the-art algorithms ( Molchanov et al. , 2017 ) and additional experimental/architectural observations . Unfortunately , there is an apparent lack of consensus on “ how to choose the layerwise sparsity ” for the magnitude-based pruning . Instead , the layerwise sparsity is selected mostly on an algorithm-byalgorithm basis . One common method is the global MP criteria ( see , e.g. , Morcos et al . ( 2019 ) ) , ∗Work done at KAIST 1i.e. , simultaneously training and pruning where the layerwise sparsity is automatically determined by using a single global threshold on weight magnitude . Lin et al . ( 2020 ) propose a magnitude-based pruning algorithm using a feedback signal , using a heuristic rule of keeping the last fully connected layer dense . A recent work by Evci et al . ( 2020 ) proposes a magnitude-based dynamic sparse training method , adopting layerwise sparsity inspired from the network science approach toward neural network pruning ( Mocanu et al. , 2018 ) . Contributions . In search of a “ go-to ” layerwise sparsity for MP , we take a model-level distortion minimization perspective towards MP . We build on the observation of Dong et al . ( 2017 ) ; Park et al . ( 2020 ) that each neural network layer can be viewed as an operator , and MP is a choice that incurs minimum ` 2 distortion to the operator output ( given a worst-case input signal ) . 
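As a reference point, the global MP criterion mentioned above, a single magnitude threshold shared by all layers, can be sketched as follows; the mask construction is simplified and illustrative.

```python
import torch

def global_magnitude_prune(weights, sparsity):
    """weights: list of weight tensors; sparsity: fraction of all weights to remove.
    Returns binary masks whose layerwise sparsity is determined by one global threshold."""
    all_mags = torch.cat([w.abs().flatten() for w in weights])
    k = int(sparsity * all_mags.numel())
    threshold = torch.kthvalue(all_mags, k).values if k > 0 else all_mags.min() - 1
    return [(w.abs() > threshold).float() for w in weights]
```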
We bring the perspective further to examine the “ model-level ” distortion incurred by pruning a layer ; preceding layers scale the input signal to the target layer , and succeeding layers scale the output distortion . Based on the distortion minimization framework , we propose a novel importance score for global pruning , coined LAMP ( Layer-Adaptive Magnitude-based Pruning ) . The LAMP score is a rescaled weight magnitude , approximating the model-level distortion from pruning . Importantly , the LAMP score is designed to approximate the distortion on the model being pruned , i.e. , all connections with a smaller LAMP score than the target weight is already pruned . Global pruning2 with the LAMP score is equivalent to the MP with an automatically determined layerwise sparsity . At the same time , pruning with LAMP keeps the benefits of MP intact ; the LAMP score is efficiently computable , hyperparameter-free , and does not rely on any model-specific knowledge . We validate the effectiveness of LAMP under a diverse experimental setup , encompassing various convolutional neural network architectures ( VGG-16 , ResNet-18/34 , DenseNet-121 , EfficientNet-B0 ) and various image datasets ( CIFAR-10/100 , SVHN , Restricted ImageNet ) . In all considered setups , LAMP consistently outperforms the baseline layerwise sparsity selection schemes . We also perform additional ablation studies with one-shot pruning and weight-rewinding setup to confirm that LAMP performs reliably well under a wider range of scenarios . Organization . In Section 2 , we briefly describe existing methods to choose the layerwise sparsity for magnitude-based pruning . In Section 3 , we formally introduce LAMP and describe how the ` 2 distortion minimization perspective motivates the LAMP score . In Section 4 , we empirically validate the effectiveness and versatility of LAMP . In Section 5 , we take a closer look at the layerwise sparsity discovered by LAMP and compare with baseline methods and previously proposed handcrafted heuristics . In Section 6 , we summarize our findings and discuss future directions . Appendices include the experimental details ( Appendix A ) , complexity analysis ( Appendix B ) , derivation of the LAMP score ( Appendix C ) , additional experiments on Transformer ( Appendix D ) , and detailed experimental results with standard deviations ( Appendix E ) . 2 RELATED WORK . This section gives a ( necessarily non-exhaustive ) survey of various layerwise sparsity selection schemes used for magnitude-based pruning algorithms . Magnitude-based pruning of neural networks dates back to the early works of Janowsky ( 1989 ) ; LeCun et al . ( 1989 ) , and has been actively studied 2i.e. , using a global threshold for LAMP score again under the context of model compression since the work of Han et al . ( 2015 ) . In Han et al . ( 2015 ) , the authors propose an iterative pruning scheme where the layerwise pruning threshold is determined by the standard-deviation-based heuristic . Zhu & Gupta ( 2018 ) propose a uniform pruning algorithm with a carefully tuned gradual pruning schedule combined with weight re-growths . Gale et al . ( 2019 ) refine the algorithm by adding a heuristic constraint of keeping the first convolutional layer fully dense and keeping at least 20 % of the weights surviving in the last fully-connected layer . MP has also been widely used in the context of “ pruning at initialization. 
” Frankle & Carbin ( 2019 ) combine MP with weight rewinding to discover efficiently trainable subnetworks : for small nets , the authors employ uniform layerwise sparsity , but use different rates for convolutional layers and fully-connected layers ( with an added heuristic on the last fully-connected layer ) ; for larger nets , authors use global MP . Morcos et al . ( 2019 ) consider transferring the “ winning ticket ” initializations , using the global MP . Evci et al . ( 2020 ) proposes a training scheme for sparsely initialized neural networks , where the layerwise sparsity is given by the Erdős-Rényi kernel method ; the method generalizes the scheme initially proposed by Mocanu et al . ( 2018 ) to convolutional neural networks . We note that there is a line of results on the trainable layerwise sparsity ; we refer the interested readers to the recent work of Kusupati et al . ( 2020 ) for a concise survey . However , we do not make direct comparisons to these methods , as our primary purpose is to deliver an easy-to-use layerwise sparsity selection scheme without requiring the modification of training objective , or an extensive hyperparameter tuning . We also note that we focus on the unstructured sparsity . While such unstructured pruning techniques have been considered less practical ( compared to structured pruning ) , several recent breakthroughs provide promising methods to bridge this gap ; see Gale et al . ( 2020 ) ; Elsen et al . ( 2020 ) . 3 LAYER-ADAPTIVE MAGNITUDE-BASED PRUNING ( LAMP ) . We now formally introduce the Layer-Adaptive Magnitude-based Pruning ( LAMP ) score . Consider a depth-d feedforward neural network with weight tensors W ( 1 ) , . . . , W ( d ) associated with each fully-connected/convolutional layer . For fully-connected layers , corresponding weight tensors are twodimensional matrices , and for 2d convolutional layers , corresponding tensors are four-dimensional . To give a unified definition of the LAMP score for both fully-connected and convolutional layers , we assume that each weight tensor is unrolled ( or flattened ) to a one-dimensional vector . For each of these unrolled vectors , we assume ( without loss of generality ) that the weights are sorted in an ascending order according to the given index map , i.e. , |W [ u ] | ≤ |W [ v ] | holds whenever u < v , where W [ u ] denote the entry of W mapped by the index u.3 The LAMP score for the u-th index of the weight tensor W is then defined as score ( u ; W ) : = ( W [ u ] ) 2∑ v≥u ( W [ v ] ) 2 . ( 1 ) Informally , the LAMP score ( Eq . 1 ) measures the relative importance of the target connection among all surviving connections belonging to the same layer , where the connections with a smaller weight magnitude ( in the same layer ) have already been pruned . As a consequence , two connections with identical weight magnitudes have different LAMP scores , depending on the index map being used . Once the LAMP score is computed , we globally prune the connections with smallest LAMP scores until the desired global sparsity constraint is met ; the procedure is equivalent to performing MP with an automatically selected layerwise sparsity . To see this , it suffices to check that ( W [ u ] ) 2 > ( W [ v ] ) 2 ⇒ score ( u ; W ) > score ( v ; W ) ( 2 ) holds for any weight tensor W and a pair of indices u , v. From the definition of the LAMP score ( Eq . 1 ) , it is easy to see that the logical relation ( 2 ) holds . Indeed , for the connection with a larger weight magnitude , the denominator of Eq . 
1 is smaller , while the numerator is larger . We note that the global pruning with respect to the LAMP score is not identical to the global pruning with respect to the magnitude score |W [ u ] | ( or ( W [ u ] ) 2 , equivalently ) . ( Footnote 3 : This “ order ” among weights is required to handle the weights with equal magnitude . ) Indeed , in each layer , there exists exactly one connection with the LAMP score of 1 , which is the maximum LAMP score possible . In other words , LAMP keeps at least one surviving connection in each layer . The same does not hold for the global pruning with respect to the weight magnitude score . We also note that the LAMP score is easy-to-use . Similar to the vanilla MP , the LAMP score does not have any hyperparameter to be tuned , and is easily implementable via elementary tensor operations . Furthermore , the LAMP score can be computed with only a minimal computational overhead ; the sorting of squared weight magnitudes required to compute the denominator in Eq . 1 is already a part of typical vanilla MP algorithms . For more discussions , see Appendix B . | This paper presents a novel technique (layer-adaptive magnitude based pruning, or LAMP) for pruning neural network weights (pruning can be beneficial in terms of overfitting prevention as well as other practical considerations). LAMP evaluates weights in each layer in terms of the ratio of the magnitude of the weight to the sum of magnitudes of all surviving weights in the layer. The weight which evaluates as least important across all layers is pruned and then the process is repeated until the desired sparsity is achieved. The method is motivated theoretically as minimizing the distortion in the input/output mapping implemented by the weights of the layer. Experimental results on several benchmarks are presented. | SP:08dbd0677de078598537324299a1495f34aa5bc2
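To complement the sketch after the previous row, here is a small, hedged demonstration of the contrast drawn in this row: a single global threshold on LAMP scores leaves at least one survivor in every layer, whereas the same threshold applied to raw weight magnitudes can empty a poorly scaled layer entirely. The compact helper restates Eq. 1 so the block runs on its own; the toy layer sizes and scales are our assumptions.

```python
import numpy as np

def lamp_scores(weight):
    # Eq. 1 restated compactly (same as the sketch after the previous row).
    sq = weight.reshape(-1) ** 2
    order = np.argsort(sq, kind="stable")
    suffix = np.cumsum(sq[order][::-1])[::-1]
    out = np.empty_like(sq)
    out[order] = sq[order] / np.maximum(suffix, 1e-12)
    return out

def keep_top_fraction(score_list, keep_frac):
    """Keep the globally top `keep_frac` fraction of connections by score."""
    pooled = np.sort(np.concatenate(score_list))
    k = max(1, int(round(keep_frac * pooled.size)))
    threshold = pooled[-k]
    return [s >= threshold for s in score_list]

rng = np.random.default_rng(1)
# Layer 0 has much smaller weights than layer 1, mimicking a poorly scaled layer.
layers = [0.01 * rng.normal(size=200), rng.normal(size=200)]

lamp_masks = keep_top_fraction([lamp_scores(w) for w in layers], keep_frac=0.05)
magn_masks = keep_top_fraction([w ** 2 for w in layers], keep_frac=0.05)

for i in range(2):
    print(f"layer {i}: LAMP survivors = {lamp_masks[i].sum()}, "
          f"global-magnitude survivors = {magn_masks[i].sum()}")
# Expected outcome: LAMP keeps at least the top connection of the small-weight
# layer (its largest weight scores exactly 1), while a single global magnitude
# threshold typically prunes that layer away entirely.
```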
When Do Curricula Work? | 1 INTRODUCTION . Inspired by the importance of properly ordering information when teaching humans ( Avrahami et al. , 1997 ) , curriculum learning ( CL ) proposes training models by presenting easier examples earlier during training ( Elman , 1993 ; Sanger , 1994 ; Bengio et al. , 2009 ) . Previous empirical studies have shown instances where curriculum learning can improve convergence speed and/or generalization in domains such as natural language processing ( Cirik et al. , 2016 ; Platanios et al. , 2019 ) , computer vision ( Pentina et al. , 2015 ; Sarafianos et al. , 2017 ; Guo et al. , 2018 ; Wang et al. , 2019 ) , and neural evolutionary computing ( Zaremba & Sutskever , 2014 ) . In contrast to curriculum learning , anticurriculum learning selects the most difficult examples first and gradually exposes the model to easier ones . Though counter-intuitive , empirical experiments have suggested that anti-curriculum learning can be as good as or better than curriculum learning in certain scenarios ( Kocmi & Bojar , 2017 ; Zhang et al. , 2018 ; 2019b ) . This is in tension with experiments in other contexts , however , which demonstrate that anti-curricula under perform standard or curriculum training ( Bengio et al. , 2009 ; Hacohen & Weinshall , 2019 ) . As explained above , empirical observations on curricula appear to be in conflict . Moreover , despite a rich literature ( see Section A ) , no ordered learning method is known to improve consistently across contexts , and curricula have not been widely adopted in machine learning . This suggest ruling out curricula as a beneficial practice for learning . In certain contexts , however , for large-scale text models such as GPT-3 ( Brown et al. , 2020 ) and T5 ( Raffel et al. , 2019 ) , non-uniform mixing strategies are standard practice . These contradicting observations contribute to a confusing picture on the usefulness of curricula . This work is an attempt to improve our understanding of curricula systematically . We start by asking a very fundamental question about a phenomenon that we call implicit curricula . Are examples ∗Work performed while Xiaoxia Wu was student at UT Austin and interning at Blueshift . 1Code at https : //github.com/google-research/understanding-curricula learned in a consistent order across different runs , architectures , and tasks ? If such a robust notion exists , is it possible to change the order in which the examples are learned by presenting them in a different order ? The answer to this question determines if there exists a robust notion of example difficulty that could be used to influence training . We then look into different ways of associating difficulty to examples using scoring functions and a variety of schedules known as pacing functions for introducing examples to the training procedure . We investigate if any of these choices can improve over the standard full-data i.i.d . training procedure commonly used in machine learning . Inspired by the success of CL in large scale training scenarios , we train in settings intended to emulate these large scale settings . In particular , we study the effect of curricula when training with a training time budget and training in the presence of noise . Contributions . In this paper , we systematically design and run extensive experiments to gain a better understanding of curricula . 
We train over 25,000 models over four datasets , CIFAR10/100 , FOOD101 , and FOOD101N covering a wide range of choices in designing curricula and arrive at the following conclusions : • Implicit Curricula : Examples are learned in a consistent order ( Section 2 ) . We show that the order in which examples are learned is consistent across runs , similar training methods , and similar architectures . Furthermore , we show that it is possible to change this order by changing the order in which examples are presented during training . Finally , we establish that well-known notions of sample difficulty are highly correlated with each other . • Curricula achieve ( almost ) no improvement in the standard setting ( Section 4 and 6 ) . We show curriculum learning , random , and anti-curriculum learning perform almost equally well in the standard setting.2 • Curriculum learning improves over standard training when training time is limited ( Section 5 and 6 ) . Imitating the large data regime , where training for multiple epochs is not feasible , we limit the number of iterations in the training algorithm and compare curriculum , random and anti-curriculum ordering against standard training . Our experiments reveal a clear advantage of curriculum learning over other methods . • Curriculum learning improves over standard training in noisy regime ( Section 5 and 6 ) . Finally , we mimic noisy data by adding label noise to CIFAR100 and also use a natural noisy dataset – FOOD101N . Similar to Jiang et al . ( 2018 ) ; Saxena et al . ( 2019 ) ; Guo et al . ( 2018 ) , our experiments indicate that curriculum learning has a clear advantage over other curricula and standard training . Related Work . Bengio et al . ( 2009 ) is perhaps the most prominent work on curriculum learning where the “ difficulty ” of examples is determined by the loss value of a pre-trained model . Toneva 2See the first paragraph of Section B for details of the standard-time experimental setup . et al . ( 2019 ) instead suggested using the first iteration in which an example is learned and remains learned after that . Finally , Jiang et al . ( 2020b ) has proposed using a consistency score ( c-score ) calculated based on the consistency of a model in correctly predicting a particular example ’ s label trained on i.i.d . draws of the training set . When studying curriculum learning , we look into all of the above-suggested measures of sample difficulty . We further follow Hacohen & Weinshall ( 2019 ) in defining the notion of pacing function and use it to schedule how examples are introduced to the training procedure . However , we look into a much more comprehensive set of pacing functions and different tasks in this work . Please see Section A for a comprehensive review of the literature on curricula . 2 IMPLICIT CURRICULA . Curriculum learning is predicated on the expectation that we can adjust the course of learning by controlling the order of training examples . Despite the intuitive appeal , the connection between the order in which examples are shown to a network during training and the order in which a network learns to classify these examples correctly is not apriori obvious . To better understand this connection , we first study the order in which a network learns examples under traditional stochastic gradient descent with i.i.d . data sampling . We refer to this ordering – which results from the choice of architecture and optimization procedure – as an implicit curriculum . 
To quantify this ordering we define the learned iteration of a sample for a given model as the epoch for which the model correctly predicts the sample for that and all subsequent epochs . Explicitly , mint∗ { t∗|ŷw ( t ) i = yi , ∀t∗ ≤ t ≤ T } where yi and ŷw ( t ) i are the correct label and the predictions of the model for i-th data point ( see the detailed mathematical description in Section 3.1 ) . We study a wide range of model families including fully-connected networks , VGG ( Simonyan & Zisserman , 2014 ) , ResNet ( He et al. , 2016 ) , Wide-ResNet ( Zagoruyko & Komodakis , 2016 ) , DenseNet ( Huang et al. , 2017 ) and EfficientNet ( Tan & Le , 2019 ) models with different optimization algorithms such as Adam ( Kingma & Ba , 2014 ) and SGD with momentum ( see Section B for details ) . The results in Figure 2 for CIFAR10 ( Krizhevsky & Hinton , 2009 ) show that the implicit curricula are broadly consistent within model families . In particular , the ordering in which images are learned within convolutional networks is much more consistent than between convolutional networks and fully connected networks,3 and the learned ordering within each sub-type of CNN is even more uniform . The robustness of this ordering , at least within model types , allows us to talk with less ambiguity about the difficulty of a given image without worrying that the notion of difficulty is highly model-dependent . 3The difference between shallow fully connected nets and deep convolutional nets , to some extent , matches what Mangalam & Prabhu ( 2019 ) has found when comparing shallow and deep networks . We will see in the next section ( and Figure 3 ) that , as expected , the choice of explicit curriculum can alter the order in which a network learns examples . The most dramatic manifestation of this is anti-curriculum learning where showing the network images in the reverse order indeed causes the network to learn more difficult images first . Next , we introduce the class of curricula we will consider for the remainder of the paper . 3 PUTTING CURRICULA THROUGH THEIR PACES . Many different approaches have been taken to implement curricula in machine learning . Here we focus on a particular widely used paradigm introduced in Bengio et al . ( 2009 ) and used in Hacohen & Weinshall ( 2019 ) . In this setup , a curriculum is defined by specifying three ingredients , • The scoring function : The scoring function is a map from an input example , x , to a numerical score , s ( x ) ∈ R. This score is typically intended to correspond to a notion of difficulty , where a higher score corresponds to a more difficult example . • The pacing function : The pacing function g ( t ) specifies the size of the training data-set used at each step , t. The training set at step t consists of the g ( t ) lowest scored examples . Training batches are then sampled uniformly from this set . • The order : Additionally we specify an order of either curriculum – ordering examples from lowest score to highest score , anti-curriculum – ordering examples from highest score to lowest , or random . Though technically redundant with redefining the scoring function , we maintain the convention that the score is ordered from easiest to hardest . This procedure is summarized in Algorithm 1 . It is worth emphasizing that due to the pacing function , using a random ordering in Algorithm 1 is not the same as traditional i.i.d . training on the full training dataset , but rather corresponds to i.i.d . 
training on a training dataset with dynamic size . [ Algorithm 1 , ( Random-/Anti- ) Curriculum learning with pacing and scoring functions : 1 : Input : Initial weights w0 , training set { x1 , . . . , xN } , pacing function g : [ T ] → [ N ] , scoring function s : [ N ] → R , order o ∈ { “ ascending ” , “ descending ” , “ random ” } . 2 : ( x1 , . . . , xN ) ← sort ( { x1 , . . . , xN } , s , o ) . 3 : for t = 1 , . . . , T do 4 : w ( t ) ← train-one-epoch ( w ( t−1 ) , { x1 , . . . , xg ( t ) } ) 5 : end for . ] We stress that the scoring and pacing function paradigm for curriculum learning is inherently limited . In this setup , the scoring function is computed before training over all of the data and thus the algorithm can not implement a self-paced and training-dependent curriculum as has been considered in Kumar et al . ( 2010 ) ; Jiang et al . ( 2015 ) ; Platanios et al . ( 2019 ) . Additionally , the dynamic training dataset is built by including all examples within a fixed score window ( from lowest score up in curricula and highest score down in anti-curricula ) and does not accommodate more flexible subsets . Furthermore , the form of curriculum discussed here only involves ordering examples from a fixed training dataset , rather than more drastic modifications of the training procedure , such as gradually increasing image resolution ( Vogelsang et al. , 2018 ) or the classes ( Weinshall et al. , 2018 ) . Nonetheless , it is commonly studied and serves as a useful framework and control study to empirically investigate the relative benefits of training orderings . Next , we describe scoring and pacing functions that will be used in our empirical investigation . | The paper conducts a large-scale evaluation of the impact of curriculum learning (CL) in image classification. The paper progresses nicely through a sequence of well-thought research questions and experiments, with the key findings stated up front. In particular, the notion of "implicit curriculum" is shown to exist. Prior findings around when CL is helpful (limited training, label noise) are confirmed, which is nice. Overall, this methodical empirical evaluation comes away with a clear set of takeaways, empirically "summarizing" a lot of prior work on CL and the training of deep models. Some discussion about why CL helps when training is limited or data has label noise (or next steps) would strengthen the paper a bit more. | SP:86b2d288cccd05f632414e500f86103956f62ab9
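A minimal sketch of the control flow of Algorithm 1 above is given below, assuming the examples live in arrays and that a `train_one_epoch` routine, a placeholder not specified in the paper excerpt, performs i.i.d. minibatch training on the exposed subset. The concrete linear pacing function at the end is only one plausible choice and is our assumption.

```python
import numpy as np

def run_curriculum(w0, xs, ys, score_fn, pacing_fn, order, epochs, train_one_epoch):
    """Algorithm 1: (random-/anti-)curriculum learning with a pacing function.

    score_fn  : maps an example index to a difficulty score s(x).
    pacing_fn : g(t), the number of examples exposed at epoch t (1-indexed).
    order     : "ascending" (curriculum), "descending" (anti-curriculum),
                or "random".
    train_one_epoch : placeholder for one epoch of i.i.d. minibatch training
                on the given subset; returns updated weights.
    xs, ys    : NumPy arrays of examples and labels (fancy indexing is used).
    """
    n = len(xs)
    scores = np.array([score_fn(i) for i in range(n)])
    if order == "random":
        perm = np.random.permutation(n)
    else:
        perm = np.argsort(scores)            # easiest first
        if order == "descending":
            perm = perm[::-1]                # hardest first
    w = w0
    for t in range(1, epochs + 1):
        k = int(np.clip(pacing_fn(t), 1, n))  # size of the exposed subset at epoch t
        subset = perm[:k]                     # the k lowest-ranked examples
        w = train_one_epoch(w, xs[subset], ys[subset])
    return w

def linear_pacing(t, total_epochs, n, start_frac=0.2):
    """One plausible pacing function (our assumption): expose a start_frac
    fraction of the data at epoch 1 and grow linearly to the full dataset
    by the midpoint of training."""
    grow_epochs = max(1, total_epochs // 2)
    frac = min(1.0, start_frac + (1.0 - start_frac) * (t - 1) / grow_epochs)
    return int(round(frac * n))

# Usage sketch: pacing_fn = lambda t: linear_pacing(t, total_epochs=100, n=50_000)
```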
When Do Curricula Work? | 1 INTRODUCTION . Inspired by the importance of properly ordering information when teaching humans ( Avrahami et al. , 1997 ) , curriculum learning ( CL ) proposes training models by presenting easier examples earlier during training ( Elman , 1993 ; Sanger , 1994 ; Bengio et al. , 2009 ) . Previous empirical studies have shown instances where curriculum learning can improve convergence speed and/or generalization in domains such as natural language processing ( Cirik et al. , 2016 ; Platanios et al. , 2019 ) , computer vision ( Pentina et al. , 2015 ; Sarafianos et al. , 2017 ; Guo et al. , 2018 ; Wang et al. , 2019 ) , and neural evolutionary computing ( Zaremba & Sutskever , 2014 ) . In contrast to curriculum learning , anticurriculum learning selects the most difficult examples first and gradually exposes the model to easier ones . Though counter-intuitive , empirical experiments have suggested that anti-curriculum learning can be as good as or better than curriculum learning in certain scenarios ( Kocmi & Bojar , 2017 ; Zhang et al. , 2018 ; 2019b ) . This is in tension with experiments in other contexts , however , which demonstrate that anti-curricula under perform standard or curriculum training ( Bengio et al. , 2009 ; Hacohen & Weinshall , 2019 ) . As explained above , empirical observations on curricula appear to be in conflict . Moreover , despite a rich literature ( see Section A ) , no ordered learning method is known to improve consistently across contexts , and curricula have not been widely adopted in machine learning . This suggest ruling out curricula as a beneficial practice for learning . In certain contexts , however , for large-scale text models such as GPT-3 ( Brown et al. , 2020 ) and T5 ( Raffel et al. , 2019 ) , non-uniform mixing strategies are standard practice . These contradicting observations contribute to a confusing picture on the usefulness of curricula . This work is an attempt to improve our understanding of curricula systematically . We start by asking a very fundamental question about a phenomenon that we call implicit curricula . Are examples ∗Work performed while Xiaoxia Wu was student at UT Austin and interning at Blueshift . 1Code at https : //github.com/google-research/understanding-curricula learned in a consistent order across different runs , architectures , and tasks ? If such a robust notion exists , is it possible to change the order in which the examples are learned by presenting them in a different order ? The answer to this question determines if there exists a robust notion of example difficulty that could be used to influence training . We then look into different ways of associating difficulty to examples using scoring functions and a variety of schedules known as pacing functions for introducing examples to the training procedure . We investigate if any of these choices can improve over the standard full-data i.i.d . training procedure commonly used in machine learning . Inspired by the success of CL in large scale training scenarios , we train in settings intended to emulate these large scale settings . In particular , we study the effect of curricula when training with a training time budget and training in the presence of noise . Contributions . In this paper , we systematically design and run extensive experiments to gain a better understanding of curricula . 
We train over 25,000 models over four datasets , CIFAR10/100 , FOOD101 , and FOOD101N covering a wide range of choices in designing curricula and arrive at the following conclusions : • Implicit Curricula : Examples are learned in a consistent order ( Section 2 ) . We show that the order in which examples are learned is consistent across runs , similar training methods , and similar architectures . Furthermore , we show that it is possible to change this order by changing the order in which examples are presented during training . Finally , we establish that well-known notions of sample difficulty are highly correlated with each other . • Curricula achieve ( almost ) no improvement in the standard setting ( Section 4 and 6 ) . We show curriculum learning , random , and anti-curriculum learning perform almost equally well in the standard setting.2 • Curriculum learning improves over standard training when training time is limited ( Section 5 and 6 ) . Imitating the large data regime , where training for multiple epochs is not feasible , we limit the number of iterations in the training algorithm and compare curriculum , random and anti-curriculum ordering against standard training . Our experiments reveal a clear advantage of curriculum learning over other methods . • Curriculum learning improves over standard training in noisy regime ( Section 5 and 6 ) . Finally , we mimic noisy data by adding label noise to CIFAR100 and also use a natural noisy dataset – FOOD101N . Similar to Jiang et al . ( 2018 ) ; Saxena et al . ( 2019 ) ; Guo et al . ( 2018 ) , our experiments indicate that curriculum learning has a clear advantage over other curricula and standard training . Related Work . Bengio et al . ( 2009 ) is perhaps the most prominent work on curriculum learning where the “ difficulty ” of examples is determined by the loss value of a pre-trained model . Toneva 2See the first paragraph of Section B for details of the standard-time experimental setup . et al . ( 2019 ) instead suggested using the first iteration in which an example is learned and remains learned after that . Finally , Jiang et al . ( 2020b ) has proposed using a consistency score ( c-score ) calculated based on the consistency of a model in correctly predicting a particular example ’ s label trained on i.i.d . draws of the training set . When studying curriculum learning , we look into all of the above-suggested measures of sample difficulty . We further follow Hacohen & Weinshall ( 2019 ) in defining the notion of pacing function and use it to schedule how examples are introduced to the training procedure . However , we look into a much more comprehensive set of pacing functions and different tasks in this work . Please see Section A for a comprehensive review of the literature on curricula . 2 IMPLICIT CURRICULA . Curriculum learning is predicated on the expectation that we can adjust the course of learning by controlling the order of training examples . Despite the intuitive appeal , the connection between the order in which examples are shown to a network during training and the order in which a network learns to classify these examples correctly is not apriori obvious . To better understand this connection , we first study the order in which a network learns examples under traditional stochastic gradient descent with i.i.d . data sampling . We refer to this ordering – which results from the choice of architecture and optimization procedure – as an implicit curriculum . 
To quantify this ordering we define the learned iteration of a sample for a given model as the epoch for which the model correctly predicts the sample for that and all subsequent epochs . Explicitly , mint∗ { t∗|ŷw ( t ) i = yi , ∀t∗ ≤ t ≤ T } where yi and ŷw ( t ) i are the correct label and the predictions of the model for i-th data point ( see the detailed mathematical description in Section 3.1 ) . We study a wide range of model families including fully-connected networks , VGG ( Simonyan & Zisserman , 2014 ) , ResNet ( He et al. , 2016 ) , Wide-ResNet ( Zagoruyko & Komodakis , 2016 ) , DenseNet ( Huang et al. , 2017 ) and EfficientNet ( Tan & Le , 2019 ) models with different optimization algorithms such as Adam ( Kingma & Ba , 2014 ) and SGD with momentum ( see Section B for details ) . The results in Figure 2 for CIFAR10 ( Krizhevsky & Hinton , 2009 ) show that the implicit curricula are broadly consistent within model families . In particular , the ordering in which images are learned within convolutional networks is much more consistent than between convolutional networks and fully connected networks,3 and the learned ordering within each sub-type of CNN is even more uniform . The robustness of this ordering , at least within model types , allows us to talk with less ambiguity about the difficulty of a given image without worrying that the notion of difficulty is highly model-dependent . 3The difference between shallow fully connected nets and deep convolutional nets , to some extent , matches what Mangalam & Prabhu ( 2019 ) has found when comparing shallow and deep networks . We will see in the next section ( and Figure 3 ) that , as expected , the choice of explicit curriculum can alter the order in which a network learns examples . The most dramatic manifestation of this is anti-curriculum learning where showing the network images in the reverse order indeed causes the network to learn more difficult images first . Next , we introduce the class of curricula we will consider for the remainder of the paper . 3 PUTTING CURRICULA THROUGH THEIR PACES . Many different approaches have been taken to implement curricula in machine learning . Here we focus on a particular widely used paradigm introduced in Bengio et al . ( 2009 ) and used in Hacohen & Weinshall ( 2019 ) . In this setup , a curriculum is defined by specifying three ingredients , • The scoring function : The scoring function is a map from an input example , x , to a numerical score , s ( x ) ∈ R. This score is typically intended to correspond to a notion of difficulty , where a higher score corresponds to a more difficult example . • The pacing function : The pacing function g ( t ) specifies the size of the training data-set used at each step , t. The training set at step t consists of the g ( t ) lowest scored examples . Training batches are then sampled uniformly from this set . • The order : Additionally we specify an order of either curriculum – ordering examples from lowest score to highest score , anti-curriculum – ordering examples from highest score to lowest , or random . Though technically redundant with redefining the scoring function , we maintain the convention that the score is ordered from easiest to hardest . This procedure is summarized in Algorithm 1 . It is worth emphasizing that due to the pacing function , using a random ordering in Algorithm 1 is not the same as traditional i.i.d . training on the full training dataset , but rather corresponds to i.i.d . 
training on a training dataset with dynamic size . [ Algorithm 1 , ( Random-/Anti- ) Curriculum learning with pacing and scoring functions : 1 : Input : Initial weights w0 , training set { x1 , . . . , xN } , pacing function g : [ T ] → [ N ] , scoring function s : [ N ] → R , order o ∈ { “ ascending ” , “ descending ” , “ random ” } . 2 : ( x1 , . . . , xN ) ← sort ( { x1 , . . . , xN } , s , o ) . 3 : for t = 1 , . . . , T do 4 : w ( t ) ← train-one-epoch ( w ( t−1 ) , { x1 , . . . , xg ( t ) } ) 5 : end for . ] We stress that the scoring and pacing function paradigm for curriculum learning is inherently limited . In this setup , the scoring function is computed before training over all of the data and thus the algorithm can not implement a self-paced and training-dependent curriculum as has been considered in Kumar et al . ( 2010 ) ; Jiang et al . ( 2015 ) ; Platanios et al . ( 2019 ) . Additionally , the dynamic training dataset is built by including all examples within a fixed score window ( from lowest score up in curricula and highest score down in anti-curricula ) and does not accommodate more flexible subsets . Furthermore , the form of curriculum discussed here only involves ordering examples from a fixed training dataset , rather than more drastic modifications of the training procedure , such as gradually increasing image resolution ( Vogelsang et al. , 2018 ) or the classes ( Weinshall et al. , 2018 ) . Nonetheless , it is commonly studied and serves as a useful framework and control study to empirically investigate the relative benefits of training orderings . Next , we describe scoring and pacing functions that will be used in our empirical investigation . | The paper provides a comprehensive analysis of the benefits of curriculum learning in different application scenarios. This includes investigating the phenomenon of implicit curricula, showing if the examples are learned in a consistent order across different architectures, and exploring the influences of explicit curricula in the standard and emulation settings. The paper empirically shows that curriculum learning has marginal benefits for standard training, but is helpful when the training time is limited or the training data is noisy. | SP:86b2d288cccd05f632414e500f86103956f62ab9
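The implicit-curriculum statistic used in Section 2 of this paper, the learned iteration min_{t*} { t* : y_hat_{w(t)}^i = y_i for all t* <= t <= T }, can be computed from a recorded per-epoch correctness history as in the sketch below. The array layout, the function name, and the convention of returning T+1 for never-learned examples are our assumptions.

```python
import numpy as np

def learned_iterations(correct_history):
    """Compute the learned iteration of every example.

    correct_history : bool array of shape (T, N); entry (t, i) is True iff the
        model predicted example i correctly at the end of epoch t+1.
    Returns an int array of length N: the 1-indexed epoch from which the
    prediction stays correct through epoch T, or T+1 if that never happens.
    """
    T, N = correct_history.shape
    learned = np.full(N, T + 1, dtype=int)
    # Scan backwards: still_correct[i] is True while every epoch from the
    # current one through T was correct for example i.
    still_correct = np.ones(N, dtype=bool)
    for t in range(T - 1, -1, -1):
        still_correct &= correct_history[t]
        learned[still_correct] = t + 1
    return learned

# Example: an example correct only in epochs 3..5 (of 5) has learned iteration 3.
hist = np.array([[False], [False], [True], [True], [True]])
print(learned_iterations(hist))  # -> [3]
```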
Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning | 1 INTRODUCTION . A principal challenge in modeling human behavior is in obtaining a transparent understanding of decision-making . In medical diagnosis , for instance , there is often significant regional and institutional variation in clinical practice [ 1 ] , much of it the leading cause of rising healthcare costs [ 2 ] . The ability to quantify different decision processes is the first step towards a more systematic understanding of medical practice . Purely by observing demonstrated behavior , our principal objective is to answer the question : Under any given state of affairs , what actions are ( more/less ) likely to be taken , and why ? We address this challenge by setting our sights on three key criteria . First , we desire a method that is transparent by design . Specifically , a transparent description of behavior should locate the factors that contribute to individual decisions , in a language readily understood by domain experts [ 3 , 4 ] . This will be clearer per our subsequent formalism , but we can already note some contrasts : Classical imitation learning—popularly by reduction to supervised classification—does not fit the bill , since black-box hidden states of RNNs are rarely amenable to meaningful interpretation . Similarly , apprenticeship learning algorithms—popularly through inverse reinforcement learning—do not satisfy either , since the high-level nature of reward mappings is not informative as to individual actions observed in the data . Rather than focusing purely on replicating actions ( imitation learning ) or on matching expert performance ( apprenticeship learning ) , our chief pursuit lies in understanding demonstrated behavior . Second , real-world environments such as healthcare are often partially observable in nature . This requires modeling the accumulation of information from entire sequences of past observations—an endeavor that is prima facie at odds with the goal of transparency . For instance , in a fully-observable setting , ( model-free ) behavioral cloning is arguably ‘ transparent ’ in providing simple mappings of states to actions ; however , coping with partial observability using any form of recurrent function ∗Authors contributed equally approximation immediately lands in black-box territory . Likewise , while ( model-based ) methods have been developed for robotic control , their transparency crucially hinges on fully-observable kinematics . Finally , in realistic settings it is often impossible to experiment online—especially in high-stakes environments with real products and patients . The vast majority of recent work in ( inverse ) reinforcement learning has focused on games , simulations , and gym environments where access to live interaction is unrestricted . By contrast , in healthcare settings the environment dynamics are neither known a priori , nor estimable by repeated exploration . We want a data-driven representation of behavior that is learnable in a completely offline fashion , yet does not rely on knowing/modeling any true dynamics . Contributions Our contributions are three-fold . First , we propose a model for interpretable policy learning ( “ INTERPOLE ” ) —where sequential observations are aggregated through a decision agent ’ s decision dynamics ( viz . subjective belief-update process ) , and sequential actions are determined by the agent ’ s decision boundaries ( viz . probabilistic belief-action mapping ) . 
Second , we suggest a Bayesian learning algorithm for estimating the model , simultaneously satisfying the key criteria of transparency , partial observability , and offline learning . Third , through experiments on both simulated and real-world data for Alzheimer ’ s disease diagnosis , we illustrate the potential of our method as an investigative device for auditing , quantifying , and understanding human decision-making behavior . 2 RELATED WORK . We seek to learn an interpretable parameterization of observed behavior to understand an agent ’ s actions . Fundamentally , this contrasts with imitation learning ( which seeks to best replicate demonstrated policies ) and apprenticeship learning ( which seeks to match some notion of performance ) . Imitation Learning In fully-observable settings , behavior cloning ( BC ) readily reduces the imitation problem to one of supervised classification [ 5 , 11–13 ] ; i.e . actions are simply regressed on observations . While this can be extended to account for partial observability by parameterizing policies via recurrent function approximation [ 14 ] , it immediately gives up on ease of interpretability per the black-box nature of RNN hidden states . A plethora of model-free techniques have recently been developed , which account for information in the rollout dynamics of the environment during policy learning ( see e.g . [ 15–20 ] ) —most famously , generative adversarial imitation learning ( GAIL ) based on statedistribution matching [ 6 , 21 ] . However , such methods require repeated online rollouts of intermediate policies during training , and also face the same black-box problem as BC in partially observable settings . Clearly in model-free imitation , it is difficult to admit both transparency and partial observability . Specifically with an eye on explainability , Info-GAIL [ 22 , 23 ] proposes an orthogonal notion of “ interpretability ” that hinges on clustering similar demonstrations to explain variations in behavior . However , as with GAIL it suffers from the need for live interaction for learning . Finally , several model-based techniques for imitation learning ( MB-IL ) have been studied in the domain of robotics . [ 24 ] consider kinematic models designed for robot dynamics , while [ 25 ] and [ 7 ] consider ( non- ) linear autoregressive exogenous models . However , such approaches invariably operate in fully-observable settings , and are restricted models hand-crafted for specific robotic applications under consideration . Apprenticeship Learning In subtle distinction to imitation learning , methods in apprenticeship learning assume the observed behavior is optimal with respect to some underlying reward function . Apprenticeship thus proceeds indirectly—often through inverse reinforcement learning ( IRL ) in order to infer a reward function , which ( with appropriate optimization ) generates learned behavior that matches the performance of the original—as measured by the rewards ( see e.g . [ 8 , 26–29 ] ) . These approaches have been variously extended to cope with partial observability ( PO-IRL ) [ 9 , 30 ] , to offline settings through off-policy evaluation [ 31–33 ] , as well as to learned environment models [ 10 ] . However , a shortcoming of such methods is the requirement that the demonstrated policy in fact be optimal with respect to a true reward function that lies within an ( often limited ) hypothesis class under consideration—or is otherwise black-box in nature . 
Further , learning the true environment dynamics [ 10 ] corresponds to the requirement that policies be restricted to the class of functions that map from unbiased beliefs ( cf . exact inference ) into actions . Notably though , [ 34 ] considers both a form of suboptimality caused by time-inconsistent agents as well as biased beliefs . However , perhaps most importantly , due to the indirect , task-level nature of reward functions , inverse reinforcement learning is essentially opposed to our central goal of transparency—that is , in providing direct , action-level descriptions of behavior . In Section 5 , we provide empirical evidence of this notion of interpretability . Towards INTERPOLE In contrast , we avoid making any assumptions as to either unbiasedness of beliefs or optimality of policies . After all , the former requires estimating ( externally ) “ true ” environment dynamics , and the latter requires specifying ( objectively ) “ true ” classes of reward functions— neither of which are necessary per our goal of transparently describing individual actions . Instead , INTERPOLE simply seeks the most plausible explanation in terms of ( internal ) decision dynamics and ( subjective ) decision boundaries . To the best of our knowledge , our work is the first to tackle all three key criteria—while making no assumptions on the generative process behind behaviors . Table 1 contextualizes our work , showing typical incarnations of related approaches and their graphical models . Before continuing , we note that the separation between the internal dynamics of an agent and the external dynamics of the environment has been considered in several other works , though often for entirely different problem formulations . Most notably , [ 35 ] tackles the same policy learning problem as we do in online , fully-observable environments but for agent ’ s with internal states that can not be observed . They propose agent Markov models ( AMMs ) to model such environment-agent interactions . For problems other than policy learning , [ 36–38 ] also consider the subproblem of inferring an agent ’ s internal dynamics ; however , none of these works satisfy all three key criteria simultaneously as we do . 3 INTERPRETABLE POLICY LEARNING . We first introduce INTERPOLE ’ s model of behavior , formalizing notions of decision dynamics and decision boundaries . In the next section , we suggest a Bayesian algorithm for model-learning from data . Problem Setup Consider a partially-observable decision-making environment in discrete time . At each step t , the agent takes action at ∈ A and observes outcome zt ∈ Z.1 We have at our disposal an observed dataset of demonstrations D= { ( ai1 , zi1 , . . . , aiτi , z i τi ) } n i=1 by an agent , τi being the length of the i-th trajectory ( we shall omit indices i unless required ) . Denote by ht . = ( a1 , z1 , . . . , at−1 , zt−1 ) the observed history at the beginning of step t , where h1 . =∅ . Analogously , let Ht . = ( A × Z ) t−1 indicate the set of all possible histories at the start of step t , where H1 . = { ∅ } , and let H .= ∪∞t=1Ht . A proper policy π is a mapping π ∈ ∆ ( A ) H from observed histories to action distributions , where π ( a|h ) is the probability of taking action a given h. We assume that D is generated by an agent acting according to some behavioral policy πb . The problem we wish to tackle , then , is precisely how to obtain an interpretable parameterization of πb . 
We proceed in two steps : First , we describe a parsimonious belief-update process for accumulating histories—which we term decision dynamics . Then , we take beliefs to actions via a probabilistic mapping—which gives rise to decision boundaries . Decision Dynamics We model belief-updates by way of an input-output hidden Markov model ( IOHMM ) identified by the tuple ( S , A , Z , T , O , b1 ) , with S being the finite set of underlying states . T ∈ ∆ ( S ) S×A denotes the transition function such that T ( st+1|st , at ) gives the probability of transitioning into state st+1 upon action at in state st , and O ∈ ∆ ( Z ) A×S denotes the observation function such that O ( zt|at , st+1 ) gives the probability of observing zt after taking action at and transitioning into state st+1 . Finally , let beliefs bt ∈ ∆ ( S ) indicate the probability bt ( s ) that the 1While we take it here that Z is finite , our method can easily be generalized to allow continuous observations . environment exists in any state s ∈ S at time t , and let b1 give the initial state distribution . Note that—unlike in existing uses of the IOHMM formalism—these “ probabilities ” are for representing the thought process of the human , and may freely diverge from the actual mechanics of the world . To aggregate observed histories as beliefs , we identify bt ( s ) with P ( st = s|ht ) —an interpretation that leads to the recursive belief-update process ( where in our problem , quantities T , O , b1 are unknown ) : bt+1 ( s ′ ) ∝ ∑ s∈S bt ( s ) T ( s ′|s , at ) O ( zt|at , s′ ) ( 1 ) A key distinction bears emphasis : We do not require that this latter set of quantities correspond to ( external ) environment dynamics—and we do not obligate ourselves to recover any such notion of “ true ” parameters . To do so would imply the assumption that the agent in fact performs exactly unbiased inference on a perfectly known model of the environment , which is restrictive . It is also unnecessary , since our mandate is simply to model the ( internal ) mechanics of decision-making— which could well be generated from possibly biased beliefs or imperfectly known models of the world . In other words , our objective ( see Equation 3 ) of simultaneously determining the most likely beliefs ( cf . decision dynamics ) and policies ( cf . decision boundaries ) is fundamentally more parsimonious . Decision Boundaries Given decision dynamics , a policy is then equivalently a map π ∈ ∆ ( A ) ∆ ( S ) . Now , what is an interpretable parameterization ? Consider the three-state example in Figure 1 . We argue that a probabilistic parameterization that directly induces “ decision regions ” ( cf . panel 1b ) over the belief simplex is uniquely interpretable . For instance , strong beliefs that a patient has underlying mild cognitive impairment may map to the region where a specific follow-up test is promptly prescribed ; this parameterization allows clearly locating such regions—as well as their boundaries . Precisely , we parameterize policies in terms of |A|-many “ mean ” vectors that correspond to actions : π ( a|b ) = e−η‖b−µa‖ 2 / ∑ a′∈A e −η‖b−µa′‖ 2 , ∑ s∈S µa ( s ) = 1 ( 2 ) where η ≥ 0 is the inverse temperature , ‖ · ‖ the ` 2-norm , and µa ∈ R|S| the mean vector corresponding to action a ∈ A . Intuitively , mean vectors induce decision boundaries ( and decision regions ) over the belief space ∆ ( S ) : At any time , the action whose corresponding mean is closest to the current belief is most likely to be chosen . 
In particular , lines that are equidistant to the means of any pair of actions form decision boundaries between them . The inverse temperature controls the transitions between such boundaries : A larger η captures more deterministic behavior ( i.e . more “ abrupt ” transitions ) , whereas a smaller η captures more stochastic behavior ( i.e . “ smoother ” transitions ) . Note that the case of η = 0 recovers policies that are uniformly random , and η →∞ recovers argmax policies . A second distinction is due : The exponentiated form of Equation 2 should not be confused with typical Boltzmann [ 27 ] or MaxEnt [ 39 ] policies common in RL : These are indirect parameterizations via optimal/soft q-values , which themselves require approximate solutions to optimization problems ; as we shall see in our experiments , the quality of learned policies suffers as a result . Further , using q-values would imply the assumption that the agent in fact behaves optimally w.r.t . an ( objectively ) “ true ” class of reward functions—e.g . linear—which is restrictive . It is also unnecessary , as our mandate is simply to capture their ( subjective ) tendencies toward different actions—which are generated from possibly suboptimal policies . In contrast , by directly partitioning the belief simplex into probabilistic “ decision regions ” , INTERPOLE ’ s mean-vector representation can be immediately explained and understood . Learning Objective In a nutshell , our objective is to identify the most likely parameterizations T , O , b1 for decision dynamics as well as η , { µa } a∈A for decision boundaries , given the observed data : Given : D , S , A , Z Determine : T , O , b1 , η , { µa } a∈A ( 3 ) Next , we propose a Bayesian algorithm that finds the maximum a posteriori ( MAP ) estimate of these quantities . Figure 2 illustrates the problem setup . | This work proposes an approach for understanding and explaining decision-making behavior. The authors aim to make the method 1) transparent, 2) able to handle partial observability, and 3) work with offline data. To do this, they develop INTERPOLE, which uses Bayesian techniques to estimate decision dynamics as well as decision boundaries. Results on simulated and real-world domains show that their method explains the decisions in behavior data while still maintaining accuracy and focuses on explaining decision dynamics rather than the “true” dynamics of the world. | SP:af913437115d717862f353ae238f3fb1fc9d72f4 |
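A sketch of INTERPOLE's recursive belief update (Eq. 1 above), b_{t+1}(s') proportional to sum_s b_t(s) T(s'|s, a_t) O(z_t|a_t, s'), is given below. The tensor layout of T and O and the fallback for histories the agent's model deems impossible are our conventions; T, O, and b_1 here stand for the agent's subjective decision dynamics, not the true environment.

```python
import numpy as np

def belief_update(b, a, z, T, O):
    """One step of the recursive belief update (Eq. 1).

    b : current belief over states, shape (S,)
    a : index of the action just taken
    z : index of the observation just received
    T : subjective transition tensor, T[a, s, s'] = T(s' | s, a)
    O : subjective observation tensor, O[a, s', z] = O(z | a, s')
    These are the agent's decision dynamics; they need not match the true
    environment dynamics.
    """
    unnorm = (b @ T[a]) * O[a, :, z]     # sum_s b(s) T(s'|s,a), times O(z|a,s')
    total = unnorm.sum()
    if total <= 0:
        # Degenerate case (our assumption): the model assigns zero probability
        # to (a, z); fall back to the predictive belief so the recursion stays defined.
        unnorm = b @ T[a]
        total = unnorm.sum()
    return unnorm / total

def beliefs_from_history(b1, history, T, O):
    """Roll an observed history (a_1, z_1, ..., a_t, z_t) into a belief sequence."""
    beliefs = [np.asarray(b1, dtype=float)]
    for a, z in history:
        beliefs.append(belief_update(beliefs[-1], a, z, T, O))
    return np.stack(beliefs)
```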
Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning | 1 INTRODUCTION . A principal challenge in modeling human behavior is in obtaining a transparent understanding of decision-making . In medical diagnosis , for instance , there is often significant regional and institutional variation in clinical practice [ 1 ] , much of it the leading cause of rising healthcare costs [ 2 ] . The ability to quantify different decision processes is the first step towards a more systematic understanding of medical practice . Purely by observing demonstrated behavior , our principal objective is to answer the question : Under any given state of affairs , what actions are ( more/less ) likely to be taken , and why ? We address this challenge by setting our sights on three key criteria . First , we desire a method that is transparent by design . Specifically , a transparent description of behavior should locate the factors that contribute to individual decisions , in a language readily understood by domain experts [ 3 , 4 ] . This will be clearer per our subsequent formalism , but we can already note some contrasts : Classical imitation learning—popularly by reduction to supervised classification—does not fit the bill , since black-box hidden states of RNNs are rarely amenable to meaningful interpretation . Similarly , apprenticeship learning algorithms—popularly through inverse reinforcement learning—do not satisfy either , since the high-level nature of reward mappings is not informative as to individual actions observed in the data . Rather than focusing purely on replicating actions ( imitation learning ) or on matching expert performance ( apprenticeship learning ) , our chief pursuit lies in understanding demonstrated behavior . Second , real-world environments such as healthcare are often partially observable in nature . This requires modeling the accumulation of information from entire sequences of past observations—an endeavor that is prima facie at odds with the goal of transparency . For instance , in a fully-observable setting , ( model-free ) behavioral cloning is arguably ‘ transparent ’ in providing simple mappings of states to actions ; however , coping with partial observability using any form of recurrent function ∗Authors contributed equally approximation immediately lands in black-box territory . Likewise , while ( model-based ) methods have been developed for robotic control , their transparency crucially hinges on fully-observable kinematics . Finally , in realistic settings it is often impossible to experiment online—especially in high-stakes environments with real products and patients . The vast majority of recent work in ( inverse ) reinforcement learning has focused on games , simulations , and gym environments where access to live interaction is unrestricted . By contrast , in healthcare settings the environment dynamics are neither known a priori , nor estimable by repeated exploration . We want a data-driven representation of behavior that is learnable in a completely offline fashion , yet does not rely on knowing/modeling any true dynamics . Contributions Our contributions are three-fold . First , we propose a model for interpretable policy learning ( “ INTERPOLE ” ) —where sequential observations are aggregated through a decision agent ’ s decision dynamics ( viz . subjective belief-update process ) , and sequential actions are determined by the agent ’ s decision boundaries ( viz . probabilistic belief-action mapping ) . 
Second , we suggest a Bayesian learning algorithm for estimating the model , simultaneously satisfying the key criteria of transparency , partial observability , and offline learning . Third , through experiments on both simulated and real-world data for Alzheimer ’ s disease diagnosis , we illustrate the potential of our method as an investigative device for auditing , quantifying , and understanding human decision-making behavior . 2 RELATED WORK . We seek to learn an interpretable parameterization of observed behavior to understand an agent ’ s actions . Fundamentally , this contrasts with imitation learning ( which seeks to best replicate demonstrated policies ) and apprenticeship learning ( which seeks to match some notion of performance ) . Imitation Learning In fully-observable settings , behavior cloning ( BC ) readily reduces the imitation problem to one of supervised classification [ 5 , 11–13 ] ; i.e . actions are simply regressed on observations . While this can be extended to account for partial observability by parameterizing policies via recurrent function approximation [ 14 ] , it immediately gives up on ease of interpretability per the black-box nature of RNN hidden states . A plethora of model-free techniques have recently been developed , which account for information in the rollout dynamics of the environment during policy learning ( see e.g . [ 15–20 ] ) —most famously , generative adversarial imitation learning ( GAIL ) based on statedistribution matching [ 6 , 21 ] . However , such methods require repeated online rollouts of intermediate policies during training , and also face the same black-box problem as BC in partially observable settings . Clearly in model-free imitation , it is difficult to admit both transparency and partial observability . Specifically with an eye on explainability , Info-GAIL [ 22 , 23 ] proposes an orthogonal notion of “ interpretability ” that hinges on clustering similar demonstrations to explain variations in behavior . However , as with GAIL it suffers from the need for live interaction for learning . Finally , several model-based techniques for imitation learning ( MB-IL ) have been studied in the domain of robotics . [ 24 ] consider kinematic models designed for robot dynamics , while [ 25 ] and [ 7 ] consider ( non- ) linear autoregressive exogenous models . However , such approaches invariably operate in fully-observable settings , and are restricted models hand-crafted for specific robotic applications under consideration . Apprenticeship Learning In subtle distinction to imitation learning , methods in apprenticeship learning assume the observed behavior is optimal with respect to some underlying reward function . Apprenticeship thus proceeds indirectly—often through inverse reinforcement learning ( IRL ) in order to infer a reward function , which ( with appropriate optimization ) generates learned behavior that matches the performance of the original—as measured by the rewards ( see e.g . [ 8 , 26–29 ] ) . These approaches have been variously extended to cope with partial observability ( PO-IRL ) [ 9 , 30 ] , to offline settings through off-policy evaluation [ 31–33 ] , as well as to learned environment models [ 10 ] . However , a shortcoming of such methods is the requirement that the demonstrated policy in fact be optimal with respect to a true reward function that lies within an ( often limited ) hypothesis class under consideration—or is otherwise black-box in nature . 
Further , learning the true environment dynamics [ 10 ] corresponds to the requirement that policies be restricted to the class of functions that map from unbiased beliefs ( cf . exact inference ) into actions . Notably though , [ 34 ] considers both a form of suboptimality caused by time-inconsistent agents as well as biased beliefs . However , perhaps most importantly , due to the indirect , task-level nature of reward functions , inverse reinforcement learning is essentially opposed to our central goal of transparency—that is , in providing direct , action-level descriptions of behavior . In Section 5 , we provide empirical evidence of this notion of interpretability . Towards INTERPOLE In contrast , we avoid making any assumptions as to either unbiasedness of beliefs or optimality of policies . After all , the former requires estimating ( externally ) “ true ” environment dynamics , and the latter requires specifying ( objectively ) “ true ” classes of reward functions—neither of which are necessary per our goal of transparently describing individual actions . Instead , INTERPOLE simply seeks the most plausible explanation in terms of ( internal ) decision dynamics and ( subjective ) decision boundaries . To the best of our knowledge , our work is the first to tackle all three key criteria—while making no assumptions on the generative process behind behaviors . Table 1 contextualizes our work , showing typical incarnations of related approaches and their graphical models . Before continuing , we note that the separation between the internal dynamics of an agent and the external dynamics of the environment has been considered in several other works , though often for entirely different problem formulations . Most notably , [ 35 ] tackles the same policy learning problem as we do in online , fully-observable environments but for agents with internal states that can not be observed . They propose agent Markov models ( AMMs ) to model such environment-agent interactions . For problems other than policy learning , [ 36–38 ] also consider the subproblem of inferring an agent ’ s internal dynamics ; however , none of these works satisfy all three key criteria simultaneously as we do . 3 INTERPRETABLE POLICY LEARNING . We first introduce INTERPOLE ’ s model of behavior , formalizing notions of decision dynamics and decision boundaries . In the next section , we suggest a Bayesian algorithm for model-learning from data . Problem Setup Consider a partially-observable decision-making environment in discrete time . At each step t , the agent takes action a_t ∈ A and observes outcome z_t ∈ Z . We have at our disposal an observed dataset of demonstrations D = { ( a^i_1 , z^i_1 , . . . , a^i_{τ_i} , z^i_{τ_i} ) }^n_{i=1} by an agent , τ_i being the length of the i-th trajectory ( we shall omit indices i unless required ) . Denote by h_t := ( a_1 , z_1 , . . . , a_{t−1} , z_{t−1} ) the observed history at the beginning of step t , where h_1 := ∅ . Analogously , let H_t := ( A × Z )^{t−1} indicate the set of all possible histories at the start of step t , where H_1 := { ∅ } , and let H := ∪^∞_{t=1} H_t . A proper policy π is a mapping π ∈ ∆ ( A )^H from observed histories to action distributions , where π ( a|h ) is the probability of taking action a given h . We assume that D is generated by an agent acting according to some behavioral policy π_b . The problem we wish to tackle , then , is precisely how to obtain an interpretable parameterization of π_b . 
We proceed in two steps : First , we describe a parsimonious belief-update process for accumulating histories—which we term decision dynamics . Then , we take beliefs to actions via a probabilistic mapping—which gives rise to decision boundaries . Decision Dynamics We model belief-updates by way of an input-output hidden Markov model ( IOHMM ) identified by the tuple ( S , A , Z , T , O , b_1 ) , with S being the finite set of underlying states . T ∈ ∆ ( S )^{S×A} denotes the transition function such that T ( s_{t+1} | s_t , a_t ) gives the probability of transitioning into state s_{t+1} upon action a_t in state s_t , and O ∈ ∆ ( Z )^{A×S} denotes the observation function such that O ( z_t | a_t , s_{t+1} ) gives the probability of observing z_t after taking action a_t and transitioning into state s_{t+1} . ( While we take it here that Z is finite , our method can easily be generalized to allow continuous observations . ) Finally , let beliefs b_t ∈ ∆ ( S ) indicate the probability b_t ( s ) that the environment exists in any state s ∈ S at time t , and let b_1 give the initial state distribution . Note that—unlike in existing uses of the IOHMM formalism—these “ probabilities ” are for representing the thought process of the human , and may freely diverge from the actual mechanics of the world . To aggregate observed histories as beliefs , we identify b_t ( s ) with P ( s_t = s | h_t ) —an interpretation that leads to the recursive belief-update process ( where in our problem , quantities T , O , b_1 are unknown ) : b_{t+1} ( s′ ) ∝ ∑_{s∈S} b_t ( s ) T ( s′ | s , a_t ) O ( z_t | a_t , s′ ) ( 1 ) A key distinction bears emphasis : We do not require that this latter set of quantities correspond to ( external ) environment dynamics—and we do not obligate ourselves to recover any such notion of “ true ” parameters . To do so would imply the assumption that the agent in fact performs exactly unbiased inference on a perfectly known model of the environment , which is restrictive . It is also unnecessary , since our mandate is simply to model the ( internal ) mechanics of decision-making—which could well be generated from possibly biased beliefs or imperfectly known models of the world . In other words , our objective ( see Equation 3 ) of simultaneously determining the most likely beliefs ( cf . decision dynamics ) and policies ( cf . decision boundaries ) is fundamentally more parsimonious . Decision Boundaries Given decision dynamics , a policy is then equivalently a map π ∈ ∆ ( A )^{∆ ( S )} . Now , what is an interpretable parameterization ? Consider the three-state example in Figure 1 . We argue that a probabilistic parameterization that directly induces “ decision regions ” ( cf . panel 1b ) over the belief simplex is uniquely interpretable . For instance , strong beliefs that a patient has underlying mild cognitive impairment may map to the region where a specific follow-up test is promptly prescribed ; this parameterization allows clearly locating such regions—as well as their boundaries . Precisely , we parameterize policies in terms of |A|-many “ mean ” vectors that correspond to actions : π ( a | b ) = e^{−η ‖ b − µ_a ‖²} / ∑_{a′∈A} e^{−η ‖ b − µ_{a′} ‖²} , with ∑_{s∈S} µ_a ( s ) = 1 ( 2 ) where η ≥ 0 is the inverse temperature , ‖ · ‖ the ℓ2-norm , and µ_a ∈ R^{|S|} the mean vector corresponding to action a ∈ A . Intuitively , mean vectors induce decision boundaries ( and decision regions ) over the belief space ∆ ( S ) : At any time , the action whose corresponding mean is closest to the current belief is most likely to be chosen . 
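To make Equations 1 and 2 concrete, the short NumPy sketch below implements the recursive belief update and the mean-vector policy. It is an illustrative toy rather than the authors' code: the transition matrix T, observation matrix O, initial belief, mean vectors and inverse temperature are all placeholder values invented for this example.

```python
import numpy as np

# Toy IOHMM with |S| = 3 states, |A| = 2 actions, |Z| = 2 observations.
# Every quantity below is an illustrative placeholder, not a learned value.
rng = np.random.default_rng(0)
S, A, Z = 3, 2, 2
T = rng.dirichlet(np.ones(S), size=(S, A))   # T[s, a, s'] = P(s' | s, a)
O = rng.dirichlet(np.ones(Z), size=(A, S))   # O[a, s', z] = P(z | a, s')
b1 = np.full(S, 1.0 / S)                     # uniform initial belief

def belief_update(b, a, z):
    """Equation 1: b_{t+1}(s') is proportional to sum_s b_t(s) T(s'|s,a) O(z|a,s')."""
    b_next = (b[:, None] * T[:, a, :]).sum(axis=0) * O[a, :, z]
    return b_next / b_next.sum()

# Decision boundaries (Equation 2): one mean vector per action on the belief simplex.
mu = np.array([[0.8, 0.1, 0.1],              # action 0 "prefers" state 0
               [0.1, 0.1, 0.8]])             # action 1 "prefers" state 2
eta = 5.0                                    # inverse temperature

def policy(b):
    """Equation 2: pi(a|b) is proportional to exp(-eta * ||b - mu_a||^2)."""
    logits = -eta * ((b[None, :] - mu) ** 2).sum(axis=1)
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Roll a short action/observation history through the model.
b = b1
for (a, z) in [(0, 1), (1, 0), (1, 1)]:
    print("belief", np.round(b, 3), "action probs", np.round(policy(b), 3))
    b = belief_update(b, a, z)
```

Sweeping eta in this sketch also makes the role of the inverse temperature, discussed next, easy to see: larger values concentrate the action probabilities on the action whose mean vector is nearest to the current belief.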
In particular , lines that are equidistant to the means of any pair of actions form decision boundaries between them . The inverse temperature controls the transitions between such boundaries : A larger η captures more deterministic behavior ( i.e . more “ abrupt ” transitions ) , whereas a smaller η captures more stochastic behavior ( i.e . “ smoother ” transitions ) . Note that the case of η = 0 recovers policies that are uniformly random , and η →∞ recovers argmax policies . A second distinction is due : The exponentiated form of Equation 2 should not be confused with typical Boltzmann [ 27 ] or MaxEnt [ 39 ] policies common in RL : These are indirect parameterizations via optimal/soft q-values , which themselves require approximate solutions to optimization problems ; as we shall see in our experiments , the quality of learned policies suffers as a result . Further , using q-values would imply the assumption that the agent in fact behaves optimally w.r.t . an ( objectively ) “ true ” class of reward functions—e.g . linear—which is restrictive . It is also unnecessary , as our mandate is simply to capture their ( subjective ) tendencies toward different actions—which are generated from possibly suboptimal policies . In contrast , by directly partitioning the belief simplex into probabilistic “ decision regions ” , INTERPOLE ’ s mean-vector representation can be immediately explained and understood . Learning Objective In a nutshell , our objective is to identify the most likely parameterizations T , O , b1 for decision dynamics as well as η , { µa } a∈A for decision boundaries , given the observed data : Given : D , S , A , Z Determine : T , O , b1 , η , { µa } a∈A ( 3 ) Next , we propose a Bayesian algorithm that finds the maximum a posteriori ( MAP ) estimate of these quantities . Figure 2 illustrates the problem setup . | The paper proposes an algorithm for learning policies and internal models ("decision dynamics") from demonstrations. The key idea is to fit a distribution over policies, observation models, and transition models using an EM-like method. Offline experiments on a healthcare dataset show that the method learns interpretable decision dynamics, recovers biased internal models, and accurately predicts actions relative to prior methods. | SP:af913437115d717862f353ae238f3fb1fc9d72f4 |
Boundary Effects in CNNs: Feature or Bug? | 1 INTRODUCTION . One of the main intuitions behind the success of CNNs for visual tasks such as image classification ( Krizhevsky et al. , 2012 ; Simonyan & Zisserman , 2015 ; Szegedy et al. , 2015 ; Huang et al. , 2017 ) , video classification ( Karpathy et al. , 2014 ; Yue-Hei Ng et al. , 2015 ; Carreira & Zisserman , 2017 ) , object detection ( Ren et al. , 2015 ; Redmon et al. , 2016 ; He et al. , 2017 ) , generative image models ( Brock et al. , 2018 ) , and semantic segmentation ( Long et al. , 2015 ; Noh et al. , 2015 ; Chen et al. , 2017 ; 2018 ) , is that convolutions add a visual inductive bias to neural networks that objects can appear anywhere in the image . To accommodate the finite domain of images , manual heuristics ( e.g. , padding ) have been applied to allow the convolutional kernel ’ s support to extend beyond the border of an image and reduce the impact of the boundary effects ( Wohlberg & Rodriguez , 2017 ; Tang et al. , 2018 ; Liu et al. , 2018a ; Innamorati et al. , 2019 ; Liu et al. , 2018b ) . Recent studies ( Pérez et al. , 2019 ; Islam et al. , 2020 ; Kayhan & Gemert , 2020 ) have shown that zero padding allows CNNs to encode absolute position information despite the presence of pooling layers in their architecture ( e.g. , global average pooling ) . In our work , we argue that the relationship between boundary effects and absolute position information extends beyond zero padding and has major implications in a CNN ’ s ability to encode confident and accurate semantic representations ( see Fig . 1 ) . An unexplored area related to boundary effects is the use of canvases ( i.e. , backgrounds ) with image patches ( see Fig . 1 , top row ) . When using image patches in a deep learning pipeline involving CNNs , the user is required to paste the patch onto a background due to the constraint that the image must be rectangular . Canvases have been used in a wide variety of domains , such as image generation ( Gregor et al. , 2015 ; Huang et al. , 2019 ) , data augmentation ( DeVries & Taylor , 2017 ) , image inpainting ( Demir & Unal , 2018 ; Yu et al. , 2018 ) , and interpretable AI ( Geirhos et al. , 2018 ; Esser et al. , 2020 ) . To the best of our knowledge , this paper contains the first analysis done on canvas value selection . In other works , the canvas value is simply chosen based on the authors intuition . Given the pervasiveness of CNNs in a multitude of applications , it is of paramount importance to fully understand what the internal representations are encoding in these networks , as well as isolating the precise reasons that these representations are learned . This comprehension can also allow for the effective design of architectures that overcome recognized shortcomings ( e.g. , residual connections ( He et al. , 2016 ) for the vanishing gradient problem ) . As boundary effects and position information in CNNs are still largely not fully understood , we aim to provide answers to the following hypotheses which reveal fundamental properties of these phenomenon : Hypothesis I : Zero Padding Encodes Maximal Absolute Position Information : Does zero padding encode maximal position information compared to other padding types ? We evaluate the amount of position information in networks trained with different padding types and show zero padding injects more position information than common padding types , e.g. , reflection , replicate , and circular . 
Hypothesis II : Different Canvas Colors Affect Performance : Do different background values have an effect on performance ? If the padding value at the boundary has a substantial effect on a CNNs performance and position information contained in the network , one should expect that canvas values may also have a similar effect . Hypothesis III : Position information is Correlated with Semantic Information : Does a network ’ s ability to encode absolute position information affect its ability to encode semantic information ? If zero padding and certain canvas colors can affect performance on classification tasks due to the increased position information , we expect that the position information is correlated with a networks ability to encode semantic information . We demonstrate that encoding position information improves the robustness and separability of semantic features . Hypothesis IV : Boundary Effects Occur at All Image Locations : Does a CNN trained without padding suffer in performance solely at the border , or at all image regions ? How does the performance change across image locations ? Our analysis reveals strong evidence that the border effect impacts a CNN ’ s performance at all regions in the input , contrasting previous assumptions ( Tsotsos et al. , 1995 ; Innamorati et al. , 2019 ) that border effects exist solely at the image border . Hypothesis V : Position Encoding Can Act as a Feature or a Bug : Does absolute position information always correlate with improved performance ? A CNN ’ s ability to leverage position information from boundary information could hurt performance when a task requires translation-invariance , e.g. , texture recognition ; however , it can also be useful if the task relies on position information , e.g. , semantic segmentation . To give answers to these hypotheses ( hereon referred to as H-X ) , we design a series of novel tasks as well as use existing techniques to quantify the location information contained in different CNNs with various settings . In particular , we introduce location dependant experiments ( see Fig . 2 ) which use a grid-based strategy to allow for a per-location analysis of absolute position encoding and performance on semantic tasks . The per-location analysis plays a critical role in representing the boundary effects as a function of the distance to the image border . We also estimate the number of dimensions which encode position information in the latent representations of CNNs . Through these experiments we show both quantitative and qualitative evidence that boundary effects have a substantial effect on CNNs in surprising ways and then demonstrate the practical implications of these findings on multiple real-world applications . Code will be made available for all experiments . 2 ABSOLUTE POSITION INFORMATION IN CNNS . What Type of Padding Injects Optimal Location Information ? With the ultimate goal of revealing characteristics that determine the impact that boundary effects plays in CNNs with respect to absolute position information , we first determine which commonly used padding type encodes the maximum amount of absolute position information . We evaluate the ability of different padding types ( i.e. , zero , circular , reflection , and replicate ) to encode absolute position information by extending the experiments from ( Islam et al. , 2020 ) , which only considered zero padding . 
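As a concrete reference for the four padding types compared here, the snippet below builds otherwise-identical PyTorch convolutions that differ only in their padding_mode and contrasts border and interior responses. It is a hedged illustration of the experimental variable under study, not the paper's training setup; the layer sizes and the random input are arbitrary.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)  # arbitrary input batch

# Identical 3x3 convolutions that differ only in how the border is filled.
# 'zeros' is zero padding; the alternatives are those evaluated in Table 1.
convs = {
    mode: nn.Conv2d(3, 16, kernel_size=3, padding=1, padding_mode=mode)
    for mode in ["zeros", "replicate", "reflect", "circular"]
}

for mode, conv in convs.items():
    y = conv(x)
    # Comparing border rows/columns with the interior gives a rough feel for how
    # the border treatment can make boundary responses differ from interior ones,
    # which is one route by which absolute position can leak into the features.
    border = torch.cat([y[..., 0, :], y[..., -1, :], y[..., :, 0], y[..., :, -1]], dim=-1)
    interior = y[..., 1:-1, 1:-1]
    print(f"{mode:10s} border mean {border.mean().item():+.4f}   "
          f"interior mean {interior.mean().item():+.4f}")
```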
We first train a simplified VGG network ( Simonyan & Zisserman , 2015 ) with five layers ( VGG-5 , see Appendix A.2 for implementation details ) on Tiny ImageNet ( Le & Yang , 2015 ) for each padding type . We follow the settings in ( Islam et al. , 2020 ) : a read-out module , trained using DUT-S ( Wang et al. , 2017 ) images , takes the features from a frozen VGG-5 model ’ s last layer , pre-trained on Tiny ImageNet , and predicts a gradient-like position map ( see top row in Table . 1 ) . We experiment with two position maps , which are the same for every image : ( i ) ‘ horizontal ’ and ( ii ) ‘ Gaussian ’ . These gradient-like position maps change smoothly from 0 to 1 , from the dark-blue to yellow , respectively . For a fair comparison with ( Islam et al. , 2020 ) , we report results using Spearman Correlation ( SPC ) and Mean Absolute Error ( MAE ) with input images from PASCAL-S ( Li et al. , 2014 ) . From Table 1 , it is clear that zero padding delivers the strongest position information , compared with replicate , boundary reflection , and circular padding , supporting H-I . Note that partial convolution ( Liu et al. , 2018a ) still pads with zeros , but re-weights the output of the convolution based on how many zeros are padded . Thus , position information is still encoded when partial convolutions are used . Interestingly , circular padding is often the second most capable padding type . We conjecture this is because circular padding takes values from the opposite side of the image where the pixel values are typically less correlated than the directly neighbouring pixels . Thus , circular padding often has a value transition at the border , contrasting reflection and replicate which offer little or no signal to the CNN regarding the whereabouts of the image border . 3 LOCATION DEPENDANT TASKS FOR POSITIONAL ANALYSIS . We begin by describing our experimental settings and the implementation details for the proposed location dependant experiments with grid-based inputs . These experiments are used to analyze the border effects with respect to position information encoded in CNNs . These consist of location dependant image classification ( Fig . 2 ( a ) and Sec . 3.1 ) , and segmentation ( Fig . 2 ( b ) and Sec . 3.2 ) , under different canvas color settings . Our experiments are designed with the goal of determining , for different canvas colors ( H-II ) , where in the input CNNs suffer from the border effect ( H-IV ) , and how the encoding of position information affects the learning of semantic features ( H-III ) . Experimental Settings and Implementation Details . Our image classification and segmentation experiments use ‘ location dependant ’ inputs ( see Fig . 2 above and Fig . 9 in the appendix for more detailed examples ) . The input is a colored canvas ( the colors used are Black [ 0 , 0 , 0 ] , White [ 1 , 1 , 1 ] , and the CIFAR-10 dataset ( Krizhevsky et al. , 2014 ) Mean [ 0.491 , 0.482 , 0.446 ] ) with an image patch randomly placed on a k × k grid . Unless mentioned otherwise , we use CIFAR-10 for all experiments . Given a 32× 32 CIFAR-10 training image as the image patch , we randomly choose a grid location , L , and place the CIFAR-10 training sample in that location . For example , in the case of a k × k grid , the size of the grid canvas is 32k × 32k , where each grid location has a size of 32× 32 and k2 total locations ( see Fig . 9 in the appendix ) . All experiments are run for k ∈ { 3 , 5 , 7 , 9 , 11 , 13 , 15 } . 
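The 'location dependant' grid inputs described above are straightforward to reconstruct: a 32×32 patch is pasted onto a k×k canvas filled with one of the three canvas colors. The helper below is an assumed re-implementation of that input pipeline; the function name and the exact value ranges are ours, not taken from the paper's code.

```python
import numpy as np

CANVAS_COLORS = {
    "black": (0.0, 0.0, 0.0),
    "white": (1.0, 1.0, 1.0),
    "cifar_mean": (0.491, 0.482, 0.446),
}

def place_on_canvas(patch, k, location, canvas_color="black", patch_size=32):
    """Return a (3, 32k, 32k) canvas with `patch` pasted at grid cell `location`.

    patch:    (3, 32, 32) array, e.g. a CIFAR-10 image scaled to [0, 1].
    location: (row, col) index on the k x k grid.
    """
    color = np.asarray(CANVAS_COLORS[canvas_color], dtype=np.float32)[:, None, None]
    canvas = np.ones((3, k * patch_size, k * patch_size), dtype=np.float32) * color
    r, c = location
    canvas[:, r * patch_size:(r + 1) * patch_size,
              c * patch_size:(c + 1) * patch_size] = patch
    return canvas

# Example: a random "image" placed at a random cell of a 5x5 grid.
rng = np.random.default_rng(0)
patch = rng.random((3, 32, 32), dtype=np.float32)
loc = tuple(rng.integers(0, 5, size=2))
x = place_on_canvas(patch, k=5, location=loc, canvas_color="cifar_mean")
print(x.shape, "patch placed at grid cell", loc)
```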
To ensure a fair comparison between grid locations , the evaluation protocol consists of running the entire validation set of CIFAR-10 on each individual grid location ( i.e. , we run the validation set k2 times for a single validation epoch ) . We then average the performance over all grid locations to obtain the overall accuracy . We report classification and segmentation accuracy in terms of precision and mean intersection over union ( mIoU ) , respectively . We use a ResNet-18 network trained from scratch , unless stated otherwise . ResNets with no padding are achieved by setting the padding size to zero in the convolution operation . For fair comparison between the padding and no padding baseline , we use bilinear interpolation ( see Appendix A.1 for discussion ) to match spatial resolutions between the residual output and the feature map for the no padding case , which was not accounted for in previous work ( Kayhan & Gemert , 2020 ) . | This paper studies the effect of padding on the Convolutional Neural Network. The authors try to answer the following questions: 1) what type of padding provides the most position information, 2) does the background value affects model accuracy when processing a patch on a canvas, 3) which part of the image suffers the most from the boundary effect, and 4) whether the position information provided by padding improves or degrades model performance. In order to answer these questions, the authors design multiple tasks and perform extensive experiments. The empirical results show that: 1) zero-padding provides the most location information compared with other common padding methods, 2) the background value of the canvas do affects the accuracy when processing a patch, 3) the boundary effect is not specific to the image boundary---the model is affected by the boundary over the entire image, and 4) the effect of padding on model accuracy depends on the task. | SP:a7cbe71d5767df1afbc7795ff5ee10c6550dddca |
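The per-location evaluation protocol described at the start of this passage can be sketched as follows: the whole validation set is scored once per grid cell and the k² accuracies are averaged. The snippet assumes a generic predict callable and, for brevity, a scalar canvas color applied to all channels; it shows the protocol's structure, not the paper's ResNet-18 evaluation code.

```python
import numpy as np

def paste(patch, k, r, c, color=0.0, ps=32):
    """Minimal canvas helper: put a (3, ps, ps) patch at cell (r, c) of a k x k grid."""
    canvas = np.full((3, k * ps, k * ps), color, dtype=np.float32)
    canvas[:, r * ps:(r + 1) * ps, c * ps:(c + 1) * ps] = patch
    return canvas

def evaluate_per_location(predict, val_images, val_labels, k):
    """Run the whole validation set once per grid cell, then average the accuracies."""
    acc = np.zeros((k, k))
    for r in range(k):
        for c in range(k):
            preds = np.array([predict(paste(img, k, r, c)) for img in val_images])
            acc[r, c] = np.mean(preds == val_labels)
    return acc, acc.mean()

# Smoke test with a trivial constant "classifier" and fake validation data.
rng = np.random.default_rng(1)
images = rng.random((8, 3, 32, 32), dtype=np.float32)
labels = rng.integers(0, 10, size=8)
per_cell, overall = evaluate_per_location(lambda canvas: 0, images, labels, k=3)
print(per_cell)
print("accuracy averaged over the k*k cells:", overall)
```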
Boundary Effects in CNNs: Feature or Bug? | 1 INTRODUCTION . One of the main intuitions behind the success of CNNs for visual tasks such as image classification ( Krizhevsky et al. , 2012 ; Simonyan & Zisserman , 2015 ; Szegedy et al. , 2015 ; Huang et al. , 2017 ) , video classification ( Karpathy et al. , 2014 ; Yue-Hei Ng et al. , 2015 ; Carreira & Zisserman , 2017 ) , object detection ( Ren et al. , 2015 ; Redmon et al. , 2016 ; He et al. , 2017 ) , generative image models ( Brock et al. , 2018 ) , and semantic segmentation ( Long et al. , 2015 ; Noh et al. , 2015 ; Chen et al. , 2017 ; 2018 ) , is that convolutions add a visual inductive bias to neural networks that objects can appear anywhere in the image . To accommodate the finite domain of images , manual heuristics ( e.g. , padding ) have been applied to allow the convolutional kernel ’ s support to extend beyond the border of an image and reduce the impact of the boundary effects ( Wohlberg & Rodriguez , 2017 ; Tang et al. , 2018 ; Liu et al. , 2018a ; Innamorati et al. , 2019 ; Liu et al. , 2018b ) . Recent studies ( Pérez et al. , 2019 ; Islam et al. , 2020 ; Kayhan & Gemert , 2020 ) have shown that zero padding allows CNNs to encode absolute position information despite the presence of pooling layers in their architecture ( e.g. , global average pooling ) . In our work , we argue that the relationship between boundary effects and absolute position information extends beyond zero padding and has major implications in a CNN ’ s ability to encode confident and accurate semantic representations ( see Fig . 1 ) . An unexplored area related to boundary effects is the use of canvases ( i.e. , backgrounds ) with image patches ( see Fig . 1 , top row ) . When using image patches in a deep learning pipeline involving CNNs , the user is required to paste the patch onto a background due to the constraint that the image must be rectangular . Canvases have been used in a wide variety of domains , such as image generation ( Gregor et al. , 2015 ; Huang et al. , 2019 ) , data augmentation ( DeVries & Taylor , 2017 ) , image inpainting ( Demir & Unal , 2018 ; Yu et al. , 2018 ) , and interpretable AI ( Geirhos et al. , 2018 ; Esser et al. , 2020 ) . To the best of our knowledge , this paper contains the first analysis done on canvas value selection . In other works , the canvas value is simply chosen based on the authors intuition . Given the pervasiveness of CNNs in a multitude of applications , it is of paramount importance to fully understand what the internal representations are encoding in these networks , as well as isolating the precise reasons that these representations are learned . This comprehension can also allow for the effective design of architectures that overcome recognized shortcomings ( e.g. , residual connections ( He et al. , 2016 ) for the vanishing gradient problem ) . As boundary effects and position information in CNNs are still largely not fully understood , we aim to provide answers to the following hypotheses which reveal fundamental properties of these phenomenon : Hypothesis I : Zero Padding Encodes Maximal Absolute Position Information : Does zero padding encode maximal position information compared to other padding types ? We evaluate the amount of position information in networks trained with different padding types and show zero padding injects more position information than common padding types , e.g. , reflection , replicate , and circular . 
Hypothesis II : Different Canvas Colors Affect Performance : Do different background values have an effect on performance ? If the padding value at the boundary has a substantial effect on a CNNs performance and position information contained in the network , one should expect that canvas values may also have a similar effect . Hypothesis III : Position information is Correlated with Semantic Information : Does a network ’ s ability to encode absolute position information affect its ability to encode semantic information ? If zero padding and certain canvas colors can affect performance on classification tasks due to the increased position information , we expect that the position information is correlated with a networks ability to encode semantic information . We demonstrate that encoding position information improves the robustness and separability of semantic features . Hypothesis IV : Boundary Effects Occur at All Image Locations : Does a CNN trained without padding suffer in performance solely at the border , or at all image regions ? How does the performance change across image locations ? Our analysis reveals strong evidence that the border effect impacts a CNN ’ s performance at all regions in the input , contrasting previous assumptions ( Tsotsos et al. , 1995 ; Innamorati et al. , 2019 ) that border effects exist solely at the image border . Hypothesis V : Position Encoding Can Act as a Feature or a Bug : Does absolute position information always correlate with improved performance ? A CNN ’ s ability to leverage position information from boundary information could hurt performance when a task requires translation-invariance , e.g. , texture recognition ; however , it can also be useful if the task relies on position information , e.g. , semantic segmentation . To give answers to these hypotheses ( hereon referred to as H-X ) , we design a series of novel tasks as well as use existing techniques to quantify the location information contained in different CNNs with various settings . In particular , we introduce location dependant experiments ( see Fig . 2 ) which use a grid-based strategy to allow for a per-location analysis of absolute position encoding and performance on semantic tasks . The per-location analysis plays a critical role in representing the boundary effects as a function of the distance to the image border . We also estimate the number of dimensions which encode position information in the latent representations of CNNs . Through these experiments we show both quantitative and qualitative evidence that boundary effects have a substantial effect on CNNs in surprising ways and then demonstrate the practical implications of these findings on multiple real-world applications . Code will be made available for all experiments . 2 ABSOLUTE POSITION INFORMATION IN CNNS . What Type of Padding Injects Optimal Location Information ? With the ultimate goal of revealing characteristics that determine the impact that boundary effects plays in CNNs with respect to absolute position information , we first determine which commonly used padding type encodes the maximum amount of absolute position information . We evaluate the ability of different padding types ( i.e. , zero , circular , reflection , and replicate ) to encode absolute position information by extending the experiments from ( Islam et al. , 2020 ) , which only considered zero padding . 
We first train a simplified VGG network ( Simonyan & Zisserman , 2015 ) with five layers ( VGG-5 , see Appendix A.2 for implementation details ) on Tiny ImageNet ( Le & Yang , 2015 ) for each padding type . We follow the settings in ( Islam et al. , 2020 ) : a read-out module , trained using DUT-S ( Wang et al. , 2017 ) images , takes the features from a frozen VGG-5 model ’ s last layer , pre-trained on Tiny ImageNet , and predicts a gradient-like position map ( see top row in Table . 1 ) . We experiment with two position maps , which are the same for every image : ( i ) ‘ horizontal ’ and ( ii ) ‘ Gaussian ’ . These gradient-like position maps change smoothly from 0 to 1 , from the dark-blue to yellow , respectively . For a fair comparison with ( Islam et al. , 2020 ) , we report results using Spearman Correlation ( SPC ) and Mean Absolute Error ( MAE ) with input images from PASCAL-S ( Li et al. , 2014 ) . From Table 1 , it is clear that zero padding delivers the strongest position information , compared with replicate , boundary reflection , and circular padding , supporting H-I . Note that partial convolution ( Liu et al. , 2018a ) still pads with zeros , but re-weights the output of the convolution based on how many zeros are padded . Thus , position information is still encoded when partial convolutions are used . Interestingly , circular padding is often the second most capable padding type . We conjecture this is because circular padding takes values from the opposite side of the image where the pixel values are typically less correlated than the directly neighbouring pixels . Thus , circular padding often has a value transition at the border , contrasting reflection and replicate which offer little or no signal to the CNN regarding the whereabouts of the image border . 3 LOCATION DEPENDANT TASKS FOR POSITIONAL ANALYSIS . We begin by describing our experimental settings and the implementation details for the proposed location dependant experiments with grid-based inputs . These experiments are used to analyze the border effects with respect to position information encoded in CNNs . These consist of location dependant image classification ( Fig . 2 ( a ) and Sec . 3.1 ) , and segmentation ( Fig . 2 ( b ) and Sec . 3.2 ) , under different canvas color settings . Our experiments are designed with the goal of determining , for different canvas colors ( H-II ) , where in the input CNNs suffer from the border effect ( H-IV ) , and how the encoding of position information affects the learning of semantic features ( H-III ) . Experimental Settings and Implementation Details . Our image classification and segmentation experiments use ‘ location dependant ’ inputs ( see Fig . 2 above and Fig . 9 in the appendix for more detailed examples ) . The input is a colored canvas ( the colors used are Black [ 0 , 0 , 0 ] , White [ 1 , 1 , 1 ] , and the CIFAR-10 dataset ( Krizhevsky et al. , 2014 ) Mean [ 0.491 , 0.482 , 0.446 ] ) with an image patch randomly placed on a k × k grid . Unless mentioned otherwise , we use CIFAR-10 for all experiments . Given a 32× 32 CIFAR-10 training image as the image patch , we randomly choose a grid location , L , and place the CIFAR-10 training sample in that location . For example , in the case of a k × k grid , the size of the grid canvas is 32k × 32k , where each grid location has a size of 32× 32 and k2 total locations ( see Fig . 9 in the appendix ) . All experiments are run for k ∈ { 3 , 5 , 7 , 9 , 11 , 13 , 15 } . 
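The gradient-like read-out targets ('horizontal' and 'Gaussian', identical for every image) and the two reported metrics, Spearman correlation (SPC) and mean absolute error (MAE), can be sketched as below. The exact map construction and the Gaussian width are assumptions made for illustration; the read-out network itself is omitted.

```python
import numpy as np
from scipy.stats import spearmanr

def horizontal_map(h, w):
    """Values increase smoothly from 0 (left) to 1 (right), identically in every row."""
    return np.tile(np.linspace(0.0, 1.0, w), (h, 1))

def gaussian_map(h, w, sigma_frac=0.25):
    """A 2-D Gaussian bump centred on the image, rescaled to [0, 1]."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sigma = sigma_frac * min(h, w)
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
    return (g - g.min()) / (g.max() - g.min())

def position_metrics(pred, target):
    """Spearman correlation (SPC) and mean absolute error (MAE) between position maps."""
    spc = spearmanr(pred.ravel(), target.ravel()).correlation
    mae = np.abs(pred - target).mean()
    return spc, mae

target = horizontal_map(28, 28)
noise = 0.1 * np.random.default_rng(0).normal(size=target.shape)
noisy_pred = np.clip(target + noise, 0.0, 1.0)   # stand-in for a read-out prediction
print("SPC %.3f   MAE %.3f" % position_metrics(noisy_pred, target))
```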
To ensure a fair comparison between grid locations , the evaluation protocol consists of running the entire validation set of CIFAR-10 on each individual grid location ( i.e. , we run the validation set k2 times for a single validation epoch ) . We then average the performance over all grid locations to obtain the overall accuracy . We report classification and segmentation accuracy in terms of precision and mean intersection over union ( mIoU ) , respectively . We use a ResNet-18 network trained from scratch , unless stated otherwise . ResNets with no padding are achieved by setting the padding size to zero in the convolution operation . For fair comparison between the padding and no padding baseline , we use bilinear interpolation ( see Appendix A.1 for discussion ) to match spatial resolutions between the residual output and the feature map for the no padding case , which was not accounted for in previous work ( Kayhan & Gemert , 2020 ) . | The paper seeks to understand how different padding modes and canvas colors affect the performance of a convolutional neural network in classification and semantic segmentation tasks. The question seems somewhat strange - surely a network should be able to counteract a consistent change in padding or background color. If there was a strong effect it would be an interesting finding indeed. Unfortunately, the paper fails to convince that any but the most obvious effects exist. | SP:a7cbe71d5767df1afbc7795ff5ee10c6550dddca |
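The resolution mismatch that arises in padding-free residual blocks, and the bilinear-interpolation fix mentioned above, can be illustrated with the schematic block below. This is one plausible reading of the description (the skip path is resized to the smaller post-convolution resolution); the channel counts and layer structure are placeholders rather than the paper's ResNet-18.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoPadResidualBlock(nn.Module):
    """Residual block whose 3x3 convolutions use no padding, so each conv shrinks
    the spatial size by 2; the skip connection is bilinearly resized to match."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=0)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=0)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        # The residual path kept the original resolution; resize it to the
        # (smaller) output resolution before adding, as described in the text.
        skip = F.interpolate(x, size=out.shape[-2:], mode="bilinear", align_corners=False)
        return F.relu(out + skip)

x = torch.randn(1, 16, 32, 32)
y = NoPadResidualBlock(16)(x)
print(x.shape, "->", y.shape)  # spatial size shrinks from 32x32 to 28x28
```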
Continual learning in recurrent neural networks | 1 INTRODUCTION . The ability to continually learn from a non-stationary data distribution while transferring and protecting past knowledge is known as continual learning ( CL ) . This ability requires neural networks to be stable to prevent forgetting , but also plastic to learn novel information , which is referred to as the stability-plasticity dilemma ( Grossberg , 2007 ; Mermillod et al. , 2013 ) . To address this dilemma , a variety of methods which tackle CL for static data with feedforward networks have been proposed ( for reviews refer to Parisi et al . ( 2019 ) and van de Ven and Tolias ( 2019 ) ) . However , CL for sequential data has only received little attention , despite recent work confirming that recurrent neural networks ( RNNs ) also suffer from catastrophic forgetting ( Schak and Gepperth , 2019 ) . A set of methods that holds great promise to address this problem are regularization methods , which work by constraining the update of certain parameters . These methods can be considered more versatile than competing approaches , since they do not require rehearsal of past data , nor an increase in model capacity , but can benefit from either of the two ( e.g. , Nguyen et al. , 2018 ; Yoon et al. , 2018 ) . This makes regularization methods applicable to a broader variety of situations , e.g . when issues related to data privacy , storage , or limited computational resources during inference might arise . The most well-known regularization methods are weight-importance methods , such as elastic weight consolidation ( EWC , Kirkpatrick et al . ( 2017a ) ) and synaptic intelligence ( SI , Zenke et al . ( 2017 ) ) , which are based on assigning importance values to weights . Some of these have a direct probabilistic interpretation as prior-focused CL methods ( Farquhar and Gal , 2018 ) , for which solutions of upcoming tasks must lie in the posterior parameter distribution of the current task ( cf . Fig . 1b ) , highlighting the stability-plasticity dilemma . Whether this dilemma differently affects feedforward networks and RNNs , and whether weight-importance based methods can be used off the shelf for sequential data has remained unclear . Here , we contribute to the development of CL approaches for sequential data in several ways . • We provide a first comprehensive comparison of CL methods applied to sequential data . For this , we port a set of established CL methods for feedforward networks to RNNs and assess their performance thoroughly and fairly in a variety of settings . • We identify elements that critically affect the stability-plasticity dilemma of weightimportance methods in RNNs . We empirically show that high requirements for working memory , i.e . the need to store and manipulate information when processing individual samples , lead to a saturation of weight importance values , making the RNN rigid and hindering its potential to learn new tasks . In contrast , this trade-off is not directly affected by the sheer recurrent reuse of the weights , related to the length of processed sequences . We complement these observations with a theoretical analysis of linear RNNs . • We show that existing CL approaches can constitute strong baselines when compared in a standardized setting and if equivalent hyperparameter-optimization resources are granted . Moreover , we show that a CL regularization approach based on hypernetworks ( von Oswald et al. , 2020 ) mitigates the limitations of weight-importance methods in RNNs . 
• We provide a code base1 comprising all assessed methods as well as variants of four well known sequential datasets adapted to CL : the Copy Task ( Graves et al. , 2014 ) , Sequential Stroke MNIST ( Gulcehre et al. , 2017 ) , AudioSet ( Gemmeke et al. , 2017 ) and multilingual Part-of-Speech tagging ( Nivre et al. , 2016 ) . Taken together , our experimental and theoretical results facilitate the development of CL methods that are suited for sequential data . 2 RELATED WORK . Continual learning with sequential data . As in Parisi et al . ( 2019 ) , we categorize CL methods for RNNs into regularization approaches , dynamic architectures and complementary memory systems . Regularization approaches set optimization constraints on the update of certain network parameters without requiring a model of past input data . EWC , for example , uses weight importance values to limit further updates of weights that are considered essential for solving previous tasks ( Kirkpatrick et al. , 2017b ) . Throughout this work , we utilize a more mathematically sound and less memoryintensive version of this algorithm , called Online EWC ( Huszár , 2018 ; Schwarz et al. , 2018 ) . Although a highly popular approach in feedforward networks , it has remained unclear how suitable EWC is in the context of sequential processing . Indeed , some studies report promising results in the context of natural language processing ( NLP ) ( Madasu and Rao , 2020 ; Thompson et al. , 2019 ) , while others find that it performs poorly ( Asghar et al. , 2020 ; Cossu et al. , 2020a ; Li et al. , 2020 ) . Here , we conduct the first thorough investigation of EWC ’ s performance on RNNs , and find that it can often be a suitable choice . A related CL approach that also relies on weight importance values is SI ( Zenke et al. , 2017 ) . Variants of SI have been used for different sequential datasets , but have not been systematically compared against other established methods ( Yang et al. , 2019 ; Masse et al. , 2018 ; Lee , 2017 ) . Fixed expansion layers ( Coop and Arel , 2012 ) are another method to limit the plasticity of weights and prevent forgetting , and in RNNs take the form of a sparsely activated layer between consecutive hidden states ( Coop and Arel , 2013 ) . Lastly , some regularization approaches rely on the use of non-overlapping and orthogonal representations to overcome catastrophic forgetting ( French , 1992 ; 1994 ; 1970 ) . Masse et al . ( 2018 ) , for example , proposed the use of context-dependent random subnetworks , where weight changes are regularized by limiting plasticity to task-specific subnetworks . This eliminates forgetting for disjoint networks but leads to a reduction of available capacity per task . In concurrent work , Duncker et al . ( 2020 ) introduced a learning rule which aims to optimize the use of the activity-defined subspace in RNNs learning multiple tasks . When tasks are different , 1Source code for all experiments ( including all baselines ) is available at https : //github.com/ mariacer/cl_in_rnns . catastrophic interference is avoided by forcing the use of task-specific orthogonal subspaces , whereas the reuse of dynamics is encouraged across tasks that are similar . Dynamic architecture approaches , which rely on the addition of neural resources to mitigate catastrophic forgetting , have also been applied to RNNs . Cossu et al . ( 2020a ) presented a combination of progressive networks ( Rusu et al. , 2016 ) and gating autoencoders ( Aljundi et al. 
, 2017 ) , where an RNN module is added for each new task and the reconstruction error of task-specific autoencoders is used to infer the RNN module to be used . Arguably , the main limitation of this type of approach is the increase in the number of parameters with the number of tasks , although methods have been presented that add resources for each new task only if needed ( Tsuda et al. , 2020 ) . Finally , complementary memory systems have also been applied to the retention of sequential information . In an early work , Ans et al . ( 2004 ) proposed a secondary network that generates patterns for rehearsing previously learned information . Asghar et al . ( 2020 ) suggested using an external memory that is progressively increased when new information is encountered . Sodhani et al . ( 2020 ) combined an external memory with Net2Net ( Chen et al. , 2016 ) , such that the network capacity can be extended while maintaining memories . The major drawback of complementary memory systems is that they either violate CL desiderata by storing past data , or rely on the ability to learn a generative model , a task that arguably scales poorly to complex data . We discuss related work in a broader context in supplementary materials ( SM D ) . Hypernetworks . Introduced by Ha et al . ( 2017 ) , the term hypernetwork refers to a neural network that generates the weights of another network . The idea can be traced back to Schmidhuber ( 1992 ) , who already suggested that a recurrent hypernetwork could be used for learning to learn ( Schmidhuber , 1993 ) . Importantly , hypernetworks can make use of the fact that parameters in a neural network possess compressible structure ( Denil et al. , 2013 ; Han et al. , 2015 ) . Indeed , Ha et al . ( 2017 ) showed that the number of trainable weights of feed-forward architectures can be reduced via hypernetworks . More recently , hypernetworks have been adapted for CL ( He et al. , 2019 ; von Oswald et al. , 2020 ) , but not for learning with sequential data . 3 METHODS . Recurrent Neural Networks . We consider discrete-time RNNs . At timestep t , the network ’ s output ŷt and hidden state ht are given by ( ŷt , ht ) = fstep ( xt , ht−1 , ψ ) , where xt denotes the input at time t and ψ the parameters of the network ( Cho et al. , 2014 ; Elman , 1990 ; Hochreiter and Schmidhuber , 1997a ) . In this work , we consider either vanilla RNNs ( based on Elman ( 1990 ) ) , LSTMs ( Hochreiter and Schmidhuber , 1997a ) or BiLSTMs ( Schuster and Paliwal , 1997 ) . Naive baselines . We consider the following naive baselines . Fine-tuning refers to training an RNN sequentially on all tasks without any CL protection . Each task has a different output head ( multi-head ) , and the heads of previously learned tasks are kept fixed . Multitask describes the parallel training on all tasks ( no CL ) . To keep approaches comparable , the multitask baseline uses a multi-head output . Because we focus on methods with a comparable number of parameters , we summarize approaches that allocate a different model per task in the From-scratch baseline , where a different model is trained separately for each task , noting that performance improvements are likely to arise in related methods ( such as Cossu et al . ( 2020a ) ) whenever knowledge transfer is possible . Continual learning baselines . We consider a diverse set of established CL methods and investigate their performance in RNNs . Online EWC ( Huszár , 2018 ; Kirkpatrick et al. , 2017a ; Schwarz et al. , 2018 ) and SI ( Zenke et al. 
, 2017 ) are different weight-importance CL methods . A simple weighted L2 regularization ensures that the neural network is more rigid in weight directions that are considered important for previous tasks , i.e. , the loss for the K-th task is given by L ( ψ , D_K ) = L_task ( ψ , D_K ) + λ ∑^{|ψ|}_{i=1} ω_i ( ψ_i − ψ̃^{(K−1)}_i )² ( 1 ) where λ is the regularization strength , ω_i is the importance associated with ψ_i ( cf . SM B.5 and B.6 ) and ψ̃^{(K−1)} denotes the main network weights ψ that were checkpointed after learning task K − 1 . We denote by HNET a different regularization approach based on hypernetworks that was recently proposed by von Oswald et al . ( 2020 ) . A hypernetwork ( Ha et al. , 2017 ) is a neural network ψ = h ( e , θ ) with parameters θ and input embeddings e that generates the weights of a main network . This method sidesteps the problem of finding a compromise between tasks with a shared model ψ , by generating a task-specific model ψ^{(k)} from a low-dimensional embedding space via a shared hypernetwork in which the weights θ and embeddings e are continually learned . In contrast to von Oswald et al . ( 2020 ) , we focus here on RNNs as main networks : f_step ( x_t , h_{t−1} , ψ ) = f_step ( x_t , h_{t−1} , h ( e , θ ) ) ( Fig . 1a ) . Crucially , this method has the advantage of not being noticeably affected by the recurrent nature of the main network , since CL is delegated to a feedforward metamodel , where forgetting is avoided based on a simple L2-regularization of its output . For a fair comparison , we ensure that the number of trainable parameters is comparable to other baselines by focusing on chunked hypernetworks ( von Oswald et al. , 2020 ) , and enforcing : | θ ∪ { e_k }^K_{k=1} | ≤ | ψ | . Further details can be found in SM B.4 . Masking ( or context-dependent gating , Masse et al . ( 2018 ) ) applies a binary random mask per task for all hidden units of a multi-head network , and can be seen as a simple method for selecting a different subnetwork per task . Since catastrophic interference can occur because of the overlap between subnetworks , this method can be combined with other CL methods such as SI ( Masking+SI ) . We also consider methods based on replaying input data from previous tasks , either via a sequentially trained generative model ( Shin et al. , 2017 ; van de Ven and Tolias , 2018 ) , denoted Generative Replay , or by maintaining a small subset of previous training data ( Rebuffi et al. , 2017 ; Nguyen et al. , 2018 ) , denoted Coresets-N , where N refers to the number of samples stored for each task . Target outputs for replayed data are obtained via a copy of the main network , stored before training on the current task ( detailed baseline descriptions in SM B ) . Task Identity . We assume that task identity is provided to the system during training and inference , either by selecting the correct output head or by feeding the correct task embedding e_k into the hypernetwork , and elaborate in SM G.12 on how to overcome this limitation . | The authors do an evaluation of the application of weight-importance continual learning methods to recurrent neural networks (RNNs). They draw out the tradeoff between complexity of processing and just remembering (working memory) in terms of the applicability of these weight-importance methods. They also provide some theoretical interpretation based on studying linear RNNs. | SP:fb77e61ebd1844212439bb59e6a07c998486f30a
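Equation 1 above is a plain quadratic penalty around the weights checkpointed after the previous task; a minimal PyTorch version is sketched below. The importance values ω are taken as given, since Online EWC and SI differ mainly in how they compute them, and the toy model, data and regularization strength are placeholders.

```python
import torch
import torch.nn as nn

def weight_importance_loss(model, task_loss, anchors, importances, lam):
    """task_loss + lam * sum_i omega_i * (psi_i - psi_tilde_i)^2, cf. Equation 1.

    anchors:     dict of parameters checkpointed after the previous task.
    importances: dict of per-parameter importance tensors (e.g. from Online EWC or SI).
    """
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in anchors:
            penalty = penalty + (importances[name] * (p - anchors[name]) ** 2).sum()
    return task_loss + lam * penalty

# Toy usage on a throwaway regression "task".
model = nn.Linear(4, 2)
anchors = {n: p.detach().clone() for n, p in model.named_parameters()}
importances = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # placeholder omegas
x, y = torch.randn(8, 4), torch.randn(8, 2)
task_loss = nn.functional.mse_loss(model(x), y)
loss = weight_importance_loss(model, task_loss, anchors, importances, lam=10.0)
loss.backward()
print(float(task_loss), float(loss))
```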
Continual learning in recurrent neural networks | 1 INTRODUCTION . The ability to continually learn from a non-stationary data distribution while transferring and protecting past knowledge is known as continual learning ( CL ) . This ability requires neural networks to be stable to prevent forgetting , but also plastic to learn novel information , which is referred to as the stability-plasticity dilemma ( Grossberg , 2007 ; Mermillod et al. , 2013 ) . To address this dilemma , a variety of methods which tackle CL for static data with feedforward networks have been proposed ( for reviews refer to Parisi et al . ( 2019 ) and van de Ven and Tolias ( 2019 ) ) . However , CL for sequential data has only received little attention , despite recent work confirming that recurrent neural networks ( RNNs ) also suffer from catastrophic forgetting ( Schak and Gepperth , 2019 ) . A set of methods that holds great promise to address this problem are regularization methods , which work by constraining the update of certain parameters . These methods can be considered more versatile than competing approaches , since they do not require rehearsal of past data , nor an increase in model capacity , but can benefit from either of the two ( e.g. , Nguyen et al. , 2018 ; Yoon et al. , 2018 ) . This makes regularization methods applicable to a broader variety of situations , e.g . when issues related to data privacy , storage , or limited computational resources during inference might arise . The most well-known regularization methods are weight-importance methods , such as elastic weight consolidation ( EWC , Kirkpatrick et al . ( 2017a ) ) and synaptic intelligence ( SI , Zenke et al . ( 2017 ) ) , which are based on assigning importance values to weights . Some of these have a direct probabilistic interpretation as prior-focused CL methods ( Farquhar and Gal , 2018 ) , for which solutions of upcoming tasks must lie in the posterior parameter distribution of the current task ( cf . Fig . 1b ) , highlighting the stability-plasticity dilemma . Whether this dilemma differently affects feedforward networks and RNNs , and whether weight-importance based methods can be used off the shelf for sequential data has remained unclear . Here , we contribute to the development of CL approaches for sequential data in several ways . • We provide a first comprehensive comparison of CL methods applied to sequential data . For this , we port a set of established CL methods for feedforward networks to RNNs and assess their performance thoroughly and fairly in a variety of settings . • We identify elements that critically affect the stability-plasticity dilemma of weightimportance methods in RNNs . We empirically show that high requirements for working memory , i.e . the need to store and manipulate information when processing individual samples , lead to a saturation of weight importance values , making the RNN rigid and hindering its potential to learn new tasks . In contrast , this trade-off is not directly affected by the sheer recurrent reuse of the weights , related to the length of processed sequences . We complement these observations with a theoretical analysis of linear RNNs . • We show that existing CL approaches can constitute strong baselines when compared in a standardized setting and if equivalent hyperparameter-optimization resources are granted . Moreover , we show that a CL regularization approach based on hypernetworks ( von Oswald et al. , 2020 ) mitigates the limitations of weight-importance methods in RNNs . 
• We provide a code base1 comprising all assessed methods as well as variants of four well known sequential datasets adapted to CL : the Copy Task ( Graves et al. , 2014 ) , Sequential Stroke MNIST ( Gulcehre et al. , 2017 ) , AudioSet ( Gemmeke et al. , 2017 ) and multilingual Part-of-Speech tagging ( Nivre et al. , 2016 ) . Taken together , our experimental and theoretical results facilitate the development of CL methods that are suited for sequential data . 2 RELATED WORK . Continual learning with sequential data . As in Parisi et al . ( 2019 ) , we categorize CL methods for RNNs into regularization approaches , dynamic architectures and complementary memory systems . Regularization approaches set optimization constraints on the update of certain network parameters without requiring a model of past input data . EWC , for example , uses weight importance values to limit further updates of weights that are considered essential for solving previous tasks ( Kirkpatrick et al. , 2017b ) . Throughout this work , we utilize a more mathematically sound and less memoryintensive version of this algorithm , called Online EWC ( Huszár , 2018 ; Schwarz et al. , 2018 ) . Although a highly popular approach in feedforward networks , it has remained unclear how suitable EWC is in the context of sequential processing . Indeed , some studies report promising results in the context of natural language processing ( NLP ) ( Madasu and Rao , 2020 ; Thompson et al. , 2019 ) , while others find that it performs poorly ( Asghar et al. , 2020 ; Cossu et al. , 2020a ; Li et al. , 2020 ) . Here , we conduct the first thorough investigation of EWC ’ s performance on RNNs , and find that it can often be a suitable choice . A related CL approach that also relies on weight importance values is SI ( Zenke et al. , 2017 ) . Variants of SI have been used for different sequential datasets , but have not been systematically compared against other established methods ( Yang et al. , 2019 ; Masse et al. , 2018 ; Lee , 2017 ) . Fixed expansion layers ( Coop and Arel , 2012 ) are another method to limit the plasticity of weights and prevent forgetting , and in RNNs take the form of a sparsely activated layer between consecutive hidden states ( Coop and Arel , 2013 ) . Lastly , some regularization approaches rely on the use of non-overlapping and orthogonal representations to overcome catastrophic forgetting ( French , 1992 ; 1994 ; 1970 ) . Masse et al . ( 2018 ) , for example , proposed the use of context-dependent random subnetworks , where weight changes are regularized by limiting plasticity to task-specific subnetworks . This eliminates forgetting for disjoint networks but leads to a reduction of available capacity per task . In concurrent work , Duncker et al . ( 2020 ) introduced a learning rule which aims to optimize the use of the activity-defined subspace in RNNs learning multiple tasks . When tasks are different , 1Source code for all experiments ( including all baselines ) is available at https : //github.com/ mariacer/cl_in_rnns . catastrophic interference is avoided by forcing the use of task-specific orthogonal subspaces , whereas the reuse of dynamics is encouraged across tasks that are similar . Dynamic architecture approaches , which rely on the addition of neural resources to mitigate catastrophic forgetting , have also been applied to RNNs . Cossu et al . ( 2020a ) presented a combination of progressive networks ( Rusu et al. , 2016 ) and gating autoencoders ( Aljundi et al. 
, 2017 ) , where an RNN module is added for each new task and the reconstruction error of task-specific autoencoders is used to infer the RNN module to be used . Arguably , the main limitation of this type of approach is the increase in the number of parameters with the number of tasks , although methods have been presented that add resources for each new task only if needed ( Tsuda et al. , 2020 ) . Finally , complementary memory systems have also been applied to the retention of sequential information . In an early work , Ans et al . ( 2004 ) proposed a secondary network that generates patterns for rehearsing previously learned information . Asghar et al . ( 2020 ) suggested using an external memory that is progressively increased when new information is encountered . Sodhani et al . ( 2020 ) combined an external memory with Net2Net ( Chen et al. , 2016 ) , such that the network capacity can be extended while maintaining memories . The major drawback of complementary memory systems is that they either violate CL desiderata by storing past data , or rely on the ability to learn a generative model , a task that arguably scales poorly to complex data . We discuss related work in a broader context in supplementary materials ( SM D ) . Hypernetworks . Introduced by Ha et al . ( 2017 ) , the term hypernetwork refers to a neural network that generates the weights of another network . The idea can be traced back to Schmidhuber ( 1992 ) , who already suggested that a recurrent hypernetwork could be used for learning to learn ( Schmidhuber , 1993 ) . Importantly , hypernetworks can make use of the fact that parameters in a neural network possess compressible structure ( Denil et al. , 2013 ; Han et al. , 2015 ) . Indeed , Ha et al . ( 2017 ) showed that the number of trainable weights of feed-forward architectures can be reduced via hypernetworks . More recently , hypernetworks have been adapted for CL ( He et al. , 2019 ; von Oswald et al. , 2020 ) , but not for learning with sequential data . 3 METHODS . Recurrent Neural Networks . We consider discrete-time RNNs . At timestep t , the network ’ s output ŷt and hidden state ht are given by ( ŷt , ht ) = fstep ( xt , ht−1 , ψ ) , where xt denotes the input at time t and ψ the parameters of the network ( Cho et al. , 2014 ; Elman , 1990 ; Hochreiter and Schmidhuber , 1997a ) . In this work , we consider either vanilla RNNs ( based on Elman ( 1990 ) ) , LSTMs ( Hochreiter and Schmidhuber , 1997a ) or BiLSTMs ( Schuster and Paliwal , 1997 ) . Naive baselines . We consider the following naive baselines . Fine-tuning refers to training an RNN sequentially on all tasks without any CL protection . Each task has a different output head ( multi-head ) , and the heads of previously learned tasks are kept fixed . Multitask describes the parallel training on all tasks ( no CL ) . To keep approaches comparable , the multitask baseline uses a multi-head output . Because we focus on methods with a comparable number of parameters , we summarize approaches that allocate a different model per task in the From-scratch baseline , where a different model is trained separately for each task , noting that performance improvements are likely to arise in related methods ( such as Cossu et al . ( 2020a ) ) whenever knowledge transfer is possible . Continual learning baselines . We consider a diverse set of established CL methods and investigate their performance in RNNs . Online EWC ( Huszár , 2018 ; Kirkpatrick et al. , 2017a ; Schwarz et al. , 2018 ) and SI ( Zenke et al. 
, 2017 ) are different weight-importance CL methods . A simple weighted L2 regularization ensures that the neural network is more rigid in weight directions that are considered important for previous tasks , i.e. , the loss for the K-th task is given by L ( ψ , DK ) = Ltask ( ψ , DK ) + λ |ψ|∑ i=1 ωi ( ψi − ψ̃ ( K−1 ) i ) 2 ( 1 ) where λ is the regularization strength , ωi is the importance associated with ψi ( cf . SM B.5 and B.6 ) and ψ̃ ( K−1 ) denotes the main network weights ψ that were checkpointed after learning task K − 1 . We denote by HNET a different regularization approach based on hypernetworks that was recently proposed by von Oswald et al . ( 2020 ) . A hypernetwork ( Ha et al. , 2017 ) is a neural network ψ = h ( e , θ ) with parameters θ and input embeddings e that generates the weights of a main network . This method sidesteps the problem of finding a compromise between tasks with a shared model ψ , by generating a task-specific model ψ ( k ) from a low-dimensional embedding space via a shared hypernetwork in which the weights θ and embeddings e are continually learned . In contrast to von Oswald et al . ( 2020 ) , we focus here on RNNs as main networks : fstep ( xt , ht−1 , ψ ) = fstep ( xt , ht−1 , h ( e , θ ) ) ( Fig . 1a ) . Crucially , this method has the advantage of not being noticeably affected by the recurrent nature of the main network , since CL is delegated to a feedforward metamodel , where forgetting is avoided based on a simple L2-regularization of its output . For a fair comparison , we ensure that the number of trainable parameters is comparable to other baselines by focusing on chunked hypernetworks ( von Oswald et al. , 2020 ) , and enforcing : ∣∣θ ∪ { ek } Kk=1∣∣ ≤ |ψ| . Further details can be found in SM B.4 . Masking ( or context-dependent gating , Masse et al . ( 2018 ) ) applies a binary random mask per task for all hidden units of a multi-head network , and can be seen as a simple method for selecting a different subnetwork per task . Since catastrophic interference can occur because of the overlap between subnetworks , this method can be combined with other CL methods such as SI ( Masking+SI ) . We also consider methods based on replaying input data from previous tasks , either via a sequentially trained generative model ( Shin et al. , 2017 ; van de Ven and Tolias , 2018 ) , denoted Generative Replay , or by maintaining a small subset of previous training data ( Rebuffi et al. , 2017 ; Nguyen et al. , 2018 ) , denoted Coresets-N , where N refers to the number of samples stored for each task . Target outputs for replayed data are obtained via a copy of the main network , stored before training on the current task ( detailed baseline descriptions in SM B ) . Task Identity . We assume that task identity is provided to the system during training and inference , either by selecting the correct output head or by feeding the correct task embedding ek into the hypernetwork , and elaborate in SM G.12 on how to overcome this limitation . | This paper provides a systematic evaluation of the performance of different CL methods on RNN. The study suggests that high working memory requirements increase difficulty of learning new tasks, while the average length of input sequence is not strictly related to the difficulty of learning new tasks. The author proposes to overcome this problem by using a hypernetwork-based CL approach, which shows promising results in the experiments. | SP:fb77e61ebd1844212439bb59e6a07c998486f30a |
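The weight-importance penalty in Eq. (1) above can be written down in a few lines. The sketch below assumes PyTorch; the `importances` and `checkpoint` dictionaries (one entry per parameter, filled after the previous task) are hypothetical names, and how the importances ωi are computed (EWC vs. SI) is not shown.

```python
# Minimal sketch (PyTorch assumed) of the weight-importance penalty in Eq. (1):
# L(psi, D_K) = L_task + lambda * sum_i omega_i * (psi_i - psi_checkpoint_i)^2.
# `importances` and `checkpoint` are hypothetical dicts built after task K-1.
import torch

def regularized_loss(model, task_loss, importances, checkpoint, lam):
    """task_loss: scalar loss tensor for the current task K.
    importances: dict name -> omega_i (importance weight of each parameter).
    checkpoint: dict name -> parameter values saved after task K-1.
    """
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in importances:  # only regularize parameters seen in previous tasks
            penalty = penalty + (importances[name] * (param - checkpoint[name]) ** 2).sum()
    return task_loss + lam * penalty
```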
Primal Wasserstein Imitation Learning | 1 INTRODUCTION . Reinforcement Learning ( RL ) has solved a number of difficult tasks whether in games ( Tesauro , 1995 ; Mnih et al. , 2015 ; Silver et al. , 2016 ) or robotics ( Abbeel & Ng , 2004 ; Andrychowicz et al. , 2020 ) . However , RL relies on the existence of a reward function , that can be either hard to specify or too sparse to be used in practice . Imitation Learning ( IL ) is a paradigm that applies to these environments with hard to specify rewards : we seek to solve a task by learning a policy from a fixed number of demonstrations generated by an expert . IL methods can typically be folded into two paradigms : Behavioral Cloning , or BC ( Pomerleau , 1991 ; Bagnell et al. , 2007 ; Ross & Bagnell , 2010 ) and Inverse Reinforcement Learning , or IRL ( Russell , 1998 ; Ng et al. , 2000 ) . In BC , we seek to recover the expert ’ s behavior by directly learning a policy that matches the expert behavior in some sense . In IRL , we assume that the demonstrations come from an agent that acts optimally with respect to an unknown reward function that we seek to recover , to subsequently train an agent on it . Although IRL methods introduce an intermediary problem ( i.e . recovering the environment ’ s reward ) they are less sensitive to distributional shift ( Pomerleau , 1991 ) , they generalize to environments with different dynamics ( Piot et al. , 2013 ) , and they can recover a near-optimal agent from suboptimal demonstrations ( Brown et al. , 2019 ; Jacq et al. , 2019 ) . However , IRL methods are usually based on an iterative process alternating between reward estimation and RL , which might result in poor sample-efficiency . Earlier IRL methods ( Ng et al. , 2000 ; Abbeel & Ng , 2004 ; Ziebart et al. , 2008 ) require multiple calls to a Markov decision process solver ( Puterman , 2014 ) , whereas recent adversarial IL approaches ( Finn et al. , 2016 ; Ho & Ermon , 2016 ; Fu et al. , 2018 ) interleave the learning of the reward function with the learning process of the agent . Adversarial IL methods are based on an adversarial training paradigm similar to Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ) , where the learned reward function can be thought of as the confusion of a discriminator that learns to differentiate expert transitions from non expert ones . These methods are well suited to the IL problem since they implicitly minimize an f -divergence between the state-action distribution of an expert and the state-action distribution of the learning agent ( Ghasemipour et al. , 2019 ; Ke et al. , 2019 ) . However the interaction between a generator ( the policy ) and the discriminator ( the reward function ) makes it a minmax optimization problem , and therefore comes with practical challenges that might include training instability , sensitivity to hyperparameters and poor sample efficiency . ∗Correspondence to Robert Dadashi : dadashi @ google.com . In this work , we use the Wasserstein distance as a measure between the state-action distributions of the expert and of the agent . Contrary to f -divergences , the Wasserstein distance is a true distance , it is smooth and it is based on the geometry of the metric space it operates on . The Wasserstein distance has gained popularity in GAN approaches ( Arjovsky et al. , 2017 ) through its dual formulation which comes with challenges ( see Section 5 ) . 
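The adversarial IL reward described above (the "confusion" of a discriminator trained to tell expert transitions from agent transitions) can be sketched as follows. This is a generic illustration, not the exact formulation of any one cited method; `D` is a hypothetical discriminator returning a probability, and the -log(1 - D) transformation is just one common choice.

```python
# Hedged sketch of the GAN-style reward used by the adversarial IL methods described above.
# The discriminator D(s, a) is trained to output ~1 on expert transitions and ~0 on agent
# transitions; the agent's reward is then derived from the discriminator's confusion.
# `D` is a hypothetical callable returning probabilities in (0, 1).
import numpy as np

def discriminator_reward(D, state, action, eps=1e-8):
    # One common choice of transformation; specific papers differ in the exact form.
    return -np.log(1.0 - D(state, action) + eps)
```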
Our approach is novel in that we consider the problem of minimizing the Wasserstein distance through its primal formulation . Crucially , the primal formulation avoids the minmax optimization problem , and requires little fine-tuning . We introduce a reward function computed offline based on an upper bound of the primal form of the Wasserstein distance . As the Wasserstein distance requires a distance between state-action pairs , we show that it can be hand-defined for locomotion tasks , and that it can be learned from pixels for a hand manipulation task . The inferred reward function is non-stationary , like adversarial IL methods , but it is not re-evaluated as the agent interacts with the environment , therefore the reward function we define is computed offline . We present a true distance to compare the behavior of the expert and the behavior of the agent , rather than using the common proxy of performance with respect to the true return of the task we consider ( as it is unknown in general ) . Our method recovers expert behaviour comparably to existing state-of-the-art methods while being based on significantly fewer hyperparameters ; it operates even in the extreme low-data regime of demonstrations , and is the first method that makes Humanoid run with a single ( subsampled ) demonstration . 2 BACKGROUND AND NOTATIONS . Markov decision processes . We describe environments as episodic Markov Decision Processes ( MDP ) with finite time horizon ( Sutton & Barto , 2018 ) ( S , A , P , r , γ , ρ0 , T ) , where S is the state space , A is the action space , P is the transition kernel , r is the reward function , γ is the discount factor , ρ0 is the initial state distribution and T is the time horizon . We will denote the dimensionality of S and A as |S| and |A| respectively . A policy π is a mapping from states to distributions over actions ; we denote the space of all policies by Π . In RL , the goal is to learn a policy π∗ that maximizes the expected sum of discounted rewards it encounters , that is , the expected return . Depending on the context , we might use the concept of a cost c rather than a reward r ( Puterman , 2014 ) , which essentially moves the goal of the policy from maximizing its return to minimizing its cumulative cost . State-action distributions . Suppose a policy π visits the successive states and actions s1 , a1 , . . . , sT , aT during an episode ; we define the empirical state-action distribution ρ̂π as $\hat{\rho}_\pi = \frac{1}{T} \sum_{t=1}^{T} \delta_{s_t , a_t}$ , where $\delta_{s_t , a_t}$ is a Dirac distribution centered on ( st , at ) . Similarly , suppose we have a set of expert demonstrations D = { se , ae } of size D ; then the associated empirical expert state-action distribution ρ̂e is defined as $\hat{\rho}_e = \frac{1}{D} \sum_{ ( s , a ) \in \mathcal{D} } \delta_{s , a}$ . Wasserstein distance . Suppose we have the metric space ( M , d ) where M is a set and d is a metric on M . Given two distributions µ and ν on M with finite moments , the p-th order Wasserstein distance ( Villani , 2008 ) is defined as $W_p^p ( \mu , \nu ) = \inf_{\theta \in \Theta ( \mu , \nu ) } \int_{M \times M} d ( x , y ) ^p \ \mathrm{d}\theta ( x , y )$ , where Θ ( µ , ν ) is the set of all couplings between µ and ν . In the remainder , we only consider distributions with finite support . A coupling between two distributions of support cardinal T and D is a doubly stochastic matrix of size T × D . We denote by Θ the set of all such matrices of size T × D : $\Theta = \{ \theta \in \mathbb{R}_+^{T \times D} \mid \forall j \in [ 1 : D ] , \ \sum_{i'=1}^{T} \theta [ i' , j ] = \tfrac{1}{D} , \ \forall i \in [ 1 : T ] , \ \sum_{j'=1}^{D} \theta [ i , j' ] = \tfrac{1}{T} \}$ .
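For distributions with finite support, the primal Wasserstein distance defined above is a linear program over the coupling matrix θ with uniform marginals 1/T and 1/D. A minimal sketch using scipy is shown below; it computes the exact 1-Wasserstein between two small point clouds and is meant only to make the definition concrete (it is not the method proposed in this paper, which avoids solving this program online).

```python
# Sketch: exact primal 1-Wasserstein between two empirical point clouds with uniform
# masses 1/T and 1/D, solved as a linear program over the coupling matrix theta.
# This is generic optimal transport, not the greedy coupling PWIL introduces later.
import numpy as np
from scipy.optimize import linprog

def wasserstein_1(agent_xs, expert_xs, metric):
    T, D = len(agent_xs), len(expert_xs)
    cost = np.array([[metric(x, y) for y in expert_xs] for x in agent_xs])  # T x D
    # Equality constraints: every row of theta sums to 1/T, every column to 1/D.
    A_eq, b_eq = [], []
    for i in range(T):
        row = np.zeros(T * D); row[i * D:(i + 1) * D] = 1.0
        A_eq.append(row); b_eq.append(1.0 / T)
    for j in range(D):
        col = np.zeros(T * D); col[j::D] = 1.0
        A_eq.append(col); b_eq.append(1.0 / D)
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun  # optimal transport cost = W_1
```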
The Wasserstein distance between distributions of state-action pairs requires the definition of a metric d in the space ( S , A ) . Defining a metric in an MDP is non-trivial ( Ferns et al. , 2004 ; Mahadevan & Maggioni , 2007 ) ; we show an example where the metric is learned from demonstrations in Section 4.4 . For now , we assume the existence of a metric d : ( S , A ) × ( S , A ) 7→ R+ . 3 METHOD . We present the theoretical motivation of our approach : the minimization of the Wasserstein distance between the state-action distributions of the agent and the expert . We introduce a reward based on an upper bound of the primal form of the Wasserstein distance inferred from a relaxation of the optimal coupling condition , and present the resulting algorithm : Primal Wasserstein Imitation Learning ( PWIL ) . 3.1 WASSERSTEIN DISTANCE MINIMIZATION . Central to our approach is the minimization of the Wasserstein distance between the state-action distribution of the policy we seek to train ρ̂π and the state-action distribution of the expert ρ̂e . In other words , we aim at optimizing the following problem : $\inf_{\pi \in \Pi} W_p^p ( \hat{\rho}_\pi , \hat{\rho}_e ) = \inf_{\pi \in \Pi} \inf_{\theta \in \Theta} \sum_{i=1}^{T} \sum_{j=1}^{D} d ( ( s_i^\pi , a_i^\pi ) , ( s_j^e , a_j^e ) ) ^p \ \theta [ i , j ]$ . ( 1 ) In the rest of the paper , we only consider the 1-Wasserstein ( p = 1 in Equation 1 ) and leave the extensive study of the influence of the order p for future work . We can interpret the Wasserstein distance using the earth mover 's analogy ( Villani , 2008 ) . Consider that the state-action pairs of the expert are D holes of mass D−1 and that the state-action pairs of the policy are piles of dirt of mass T−1 . A coupling θ is a transport strategy between the piles of dirt and the holes , where θ [ i , j ] stands for how much of the pile of dirt i should be moved towards the hole j . The optimal coupling is the one that minimizes the distance that the earth mover travels to move all piles of dirt into the holes . Note that to compute the optimal coupling , we need knowledge of the locations of all piles of dirt . In the context of RL , this means having access to the full trajectory generated by π . From now on , we write θ∗π for the optimal coupling for the policy π , which we inject into Equation ( 1 ) : $\theta_\pi^* = \arg\min_{\theta \in \Theta} \sum_{i=1}^{T} \sum_{j=1}^{D} d ( ( s_i^\pi , a_i^\pi ) , ( s_j^e , a_j^e ) ) \ \theta [ i , j ]$ , so that $\inf_{\pi \in \Pi} W_1 ( \hat{\rho}_\pi , \hat{\rho}_e ) = \inf_{\pi \in \Pi} \sum_{i=1}^{T} c_{i , \pi}^*$ with $c_{i , \pi}^* = \sum_{j=1}^{D} d ( ( s_i^\pi , a_i^\pi ) , ( s_j^e , a_j^e ) ) \ \theta_\pi^* [ i , j ]$ . ( 2 ) In Equation ( 2 ) , we have introduced c∗i , π , which we interpret as a cost to minimize using RL . As c∗i , π depends on the optimal coupling θ∗π , we can only define c∗i , π at the very end of an episode . This can be problematic if an agent learns in an online manner or in tasks with a long time horizon . Thus , we introduce an upper bound to the Wasserstein distance that yields a cost we can compute online , based on a suboptimal coupling strategy . | The authors proposed an imitation learning algorithm that utilizes the primal form of the Wasserstein distance to match the agent's and expert's state-action visitation distributions . They considered the upper bound of the primal form and devised an optimization method based on greedy coupling , which makes learning suitable for sequential problems . With a standardized Euclidean distance and exponential smoothing , the proposed method PWIL is shown to perform well for both MuJoCo and Door Opening tasks and substantially outperforms the baseline ( DAC ) for Humanoid . | SP:475019f17c9bf4c7e167222d56f920d12f8c8439 |
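The excerpt above ends by motivating a cost based on a suboptimal coupling that can be computed online. The sketch below illustrates one such greedy coupling: each agent step carries mass 1/T and is matched to the closest expert state-action pairs that still have unassigned mass. This is a hedged reconstruction of the idea; the exact PWIL reward shaping (e.g., how this cost is turned into a bounded reward) is not specified in this excerpt.

```python
# Hedged sketch of a greedy coupling that upper-bounds the primal 1-Wasserstein:
# each agent step carries mass 1/T and greedily fills the closest expert points that
# still have unassigned mass (each expert point starts with mass 1/D).
import numpy as np

def greedy_step_cost(step_xy, expert_xs, expert_mass, metric, step_mass):
    """Consumes `step_mass` from the closest expert points (expert_mass is updated
    in place) and returns the transport cost incurred by this agent step."""
    dists = np.array([metric(step_xy, e) for e in expert_xs])
    cost, remaining = 0.0, step_mass
    for j in np.argsort(dists):
        if remaining <= 0:
            break
        moved = min(remaining, expert_mass[j])
        expert_mass[j] -= moved
        remaining -= moved
        cost += moved * dists[j]
    return cost

# Usage over one episode (T agent steps, D expert state-action pairs):
# expert_mass = np.full(D, 1.0 / D)
# costs = [greedy_step_cost(x, expert_xs, expert_mass, metric, 1.0 / T) for x in trajectory]
```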
Primal Wasserstein Imitation Learning | 1 INTRODUCTION . Reinforcement Learning ( RL ) has solved a number of difficult tasks whether in games ( Tesauro , 1995 ; Mnih et al. , 2015 ; Silver et al. , 2016 ) or robotics ( Abbeel & Ng , 2004 ; Andrychowicz et al. , 2020 ) . However , RL relies on the existence of a reward function , that can be either hard to specify or too sparse to be used in practice . Imitation Learning ( IL ) is a paradigm that applies to these environments with hard to specify rewards : we seek to solve a task by learning a policy from a fixed number of demonstrations generated by an expert . IL methods can typically be folded into two paradigms : Behavioral Cloning , or BC ( Pomerleau , 1991 ; Bagnell et al. , 2007 ; Ross & Bagnell , 2010 ) and Inverse Reinforcement Learning , or IRL ( Russell , 1998 ; Ng et al. , 2000 ) . In BC , we seek to recover the expert ’ s behavior by directly learning a policy that matches the expert behavior in some sense . In IRL , we assume that the demonstrations come from an agent that acts optimally with respect to an unknown reward function that we seek to recover , to subsequently train an agent on it . Although IRL methods introduce an intermediary problem ( i.e . recovering the environment ’ s reward ) they are less sensitive to distributional shift ( Pomerleau , 1991 ) , they generalize to environments with different dynamics ( Piot et al. , 2013 ) , and they can recover a near-optimal agent from suboptimal demonstrations ( Brown et al. , 2019 ; Jacq et al. , 2019 ) . However , IRL methods are usually based on an iterative process alternating between reward estimation and RL , which might result in poor sample-efficiency . Earlier IRL methods ( Ng et al. , 2000 ; Abbeel & Ng , 2004 ; Ziebart et al. , 2008 ) require multiple calls to a Markov decision process solver ( Puterman , 2014 ) , whereas recent adversarial IL approaches ( Finn et al. , 2016 ; Ho & Ermon , 2016 ; Fu et al. , 2018 ) interleave the learning of the reward function with the learning process of the agent . Adversarial IL methods are based on an adversarial training paradigm similar to Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ) , where the learned reward function can be thought of as the confusion of a discriminator that learns to differentiate expert transitions from non expert ones . These methods are well suited to the IL problem since they implicitly minimize an f -divergence between the state-action distribution of an expert and the state-action distribution of the learning agent ( Ghasemipour et al. , 2019 ; Ke et al. , 2019 ) . However the interaction between a generator ( the policy ) and the discriminator ( the reward function ) makes it a minmax optimization problem , and therefore comes with practical challenges that might include training instability , sensitivity to hyperparameters and poor sample efficiency . ∗Correspondence to Robert Dadashi : dadashi @ google.com . In this work , we use the Wasserstein distance as a measure between the state-action distributions of the expert and of the agent . Contrary to f -divergences , the Wasserstein distance is a true distance , it is smooth and it is based on the geometry of the metric space it operates on . The Wasserstein distance has gained popularity in GAN approaches ( Arjovsky et al. , 2017 ) through its dual formulation which comes with challenges ( see Section 5 ) . 
Our approach is novel in the fact that we consider the problem of minimizing the Wasserstein distance through its primal formulation . Crucially , the primal formulation prevents the minmax optimization problem , and requires little fine tuning . We introduce a reward function computed offline based on an upper bound of the primal form of the Wasserstein distance . As the Wasserstein distance requires a distance between state-action pairs , we show that it can be hand-defined for locomotion tasks , and that it can be learned from pixels for a hand manipulation task . The inferred reward function is non-stationary , like adversarial IL methods , but it is not re-evaluated as the agent interacts with the environment , therefore the reward function we define is computed offline . We present a true distance to compare the behavior of the expert and the behavior of the agent , rather than using the common proxy of performance with respect to the true return of the task we consider ( as it is unknown in general ) . Our method recovers expert behaviour comparably to existing state-of-the-art methods while being based on significantly fewer hyperparameters ; it operates even in the extreme low data regime of demonstrations , and is the first method that makes Humanoid run with a single ( subsampled ) demonstration . 2 BACKGROUND AND NOTATIONS . Markov decision processes . We describe environments as episodic Markov Decision Processes ( MDP ) with finite time horizon ( Sutton & Barto , 2018 ) ( S , A , P , r , γ , ρ0 , T ) , where S is the state space , A is the action space , P is the transition kernel , r is the reward function , γ is the discount factor , ρ0 is the initial state distribution and T is the time horizon . We will denote the dimensionality of S and A as |S| and |A| respectively . A policy π is a mapping from states to distributions over actions ; we denote the space of all policies by Π . In RL , the goal is to learn a policy π∗ that maximizes the expected sum of discounted rewards it encounters , that is , the expected return . Depending on the context , we might use the concept of a cost c rather than a reward r ( Puterman , 2014 ) , which essentially moves the goal of the policy from maximizing its return to minimizing its cumlative cost . State action distributions . Suppose a policy π visits the successive states and actions s1 , a1 , . . . , sT , aT during an episode , we define the empirical state-action distribution ρ̂π as : ρ̂π = 1 T ∑T t=1 δst , at , where δst , at is a Dirac distribution centered on ( st , at ) . Similarly , suppose we have a set of expert demonstrations D = { se , ae } of size D , then the associated empirical expert state-action distribution ρ̂e is defined as : ρ̂e = 1D ∑ ( s , a ) ∈D δs , a . Wasserstein distance . Suppose we have the metric space ( M , d ) where M is a set and d is a metric on M . Suppose we have µ and ν two distributions on M with finite moments , the p-th order Wasserstein distance ( Villani , 2008 ) is defined asWpp ( µ , ν ) = infθ∈Θ ( µ , ν ) ∫ M×M d ( x , y ) pdθ ( x , y ) , where Θ ( µ , ν ) is the set of all couplings between µ and ν . In the remainder , we only consider distributions with finite support . A coupling between two distributions of support cardinal T and D is a doubly stochastic matrix of size T ×D . We note Θ the set of all doubly stochastic matrices of size T ×D : Θ = { θ ∈ RT×D+ | ∀j ∈ [ 1 : D ] , T∑ i′=1 θ [ i′ , j ] = 1 D , ∀i ∈ [ 1 : T ] , D∑ j′=1 θ [ i , j′ ] = 1 T } . 
The Wasserstein distance between distributions of state-action pairs requires the definition of a metric d in the space ( S , A ) . Defining a metric in an MDP is non trivial ( Ferns et al. , 2004 ; Mahadevan & Maggioni , 2007 ) ; we show an example where the metric is learned from demonstrations in Section 4.4 . For now , we assume the existence of a metric d : ( S , A ) × ( S , A ) 7→ R+ . 3 METHOD . We present the theoretical motivation of our approach : the minimization of the Wasserstein distance between the state-action distributions of the agent and the expert . We introduce a reward based on an upper-bound of the primal form of the Wasserstein distance inferred from a relaxation of the optimal coupling condition , and present the resulting algorithm : Primal Wasserstein Imitation Learning ( PWIL ) . 3.1 WASSERSTEIN DISTANCE MINIMIZATION . Central to our approach is the minimization of the Wasserstein distance between the state-action distribution of the policy we seek to train ρ̂π and the state-action distribution of the expert ρ̂e . In other words , we aim at optimizing the following problem : inf π∈Π Wpp ( ρ̂π , ρ̂e ) = inf π∈Π inf θ∈Θ T∑ i=1 D∑ j=1 d ( ( sπi , a π i ) , ( s e j , a e j ) ) pθ [ i , j ] . ( 1 ) In the rest of the paper , we only consider the 1-Wasserstein ( p = 1 in Equation 1 ) and leave the extensive study of the influence of the order p for future work . We can interpret the Wasserstein distance using the earth ’ s movers analogy ( Villani , 2008 ) . Consider that the state-action pairs of the expert are D holes of mass D−1 and that the state-action pairs of the policy are piles of dirt of mass T−1 . A coupling θ is a transport strategy between the piles of dirt and the holes , where θ [ i , j ] stands for how much of the pile of dirt i should be moved towards the hole j . The optimal coupling is the one that minimizes the distance that the earth mover travels to put all piles of dirt to holes . Note that to compute the optimal coupling , we need knowledge of the locations of all piles of dirt . In the context of RL , this means having access to the full trajectory generated by π . From now on , we write θ∗π as the optimal coupling for the policy π , that we inject in Equation ( 1 ) : θ∗π = arg min θ∈Θ T∑ i=1 D∑ j=1 d ( ( sπi , a π i ) , ( s e j , a e j ) ) θ [ i , j ] inf π∈Π W1 ( ρ̂π , ρ̂e ) = inf π∈Π T∑ i=1 c∗i , π with c ∗ i , π = D∑ j=1 d ( ( sπi , a π i ) , ( s e j , a e j ) ) θ ∗ π [ i , j ] . ( 2 ) In Equation ( 2 ) , we have introduced c∗i , π , which we interpret as a cost to minimize using RL . As c∗i , π depends on the optimal coupling θ ∗ π , we can only define c ∗ i , π at the very end of an episode . This can be problematic if an agent learns in an online manner or in large time-horizon tasks . Thus , we introduce an upper bound to the Wasserstein distance that yields a cost we can compute online , based on a suboptimal coupling strategy . | This paper proposes to use Wasserstein distance in the primal form for imitation learning. Compared with its dual form and f-divergence minimization variants, it avoids the unstable minimax optimization. In order to compute the Wasserstein distance in primal form, they also propose a greedy approximation. Their experiments demonstrate that this method has a better performance compared with baseline methods. | SP:475019f17c9bf4c7e167222d56f920d12f8c8439 |
Relational Learning with Variational Bayes | 1 INTRODUCTION . American Psychological Association defines relational learning as ( VandenBos & APA , 2007 ) : Definition 1.1 ( Relational learning ) . Learning to differentiate among stimuli on the basis of relational properties rather than absolute properties . In other words , relational learning refers to the ability to recognize and respond to relationship ( called relational property ) among objects irrespective of the nature of those objects ( called absolute property ) . For example ( attributed to Doumas & Hummel ( 2013 ) ) , how do we come to understand that two circles are the same-shape in the same way that two squares are ? In this example , “ same-shape ” is the relational property and object shape is the absolute property . Relational learning has long been recognized as a hallmark of human cognition with strong implications for both human-like learning capabilities and generalization capacity ( Biederman , 1987 ; Medin et al. , 1993 ; Gentner , 2003 ; Penn et al. , 2008 ; Holyoak , 2012 ; Gentner & Smith , 2012 ) . We refer the interested readers to the provided references for a comprehensive discussion on this subject . Contemporaneously , the research on learning data relationships—also commonly called “ relational learning ” —has flourished in the machine learning community where the overarching goal is learning in a context where there may be relationships between learning examples , or where these examples may have a complex internal structure ( i.e. , consist of multiple components and there may be relationships between these components ) ( Getoor & Taskar , 2007 ; De Raedt et al. , 2016 ) . We argue that the key difference between the two “ relational learning ” definitions and their learning objectives is that Definition 1.1 takes the relationship learning problem one step further by requiring the data relationships be learned only on the basis of relational properties rather than absolute properties . To the best of our knowledge , this important distinction—learning relationships irrespective of the absolute properties—has not been rigorously studied in the unsupervised learning community , where most existing methods either encourage or do not constrain the relationships learning through absolute properties . In this work , we propose an unsupervised learning method—variational relational learning ( VRL ) — for addressing the relational learning problem as defined by Definition 1.1 . At its core , VRL encapsulates the relational learning problem with a probabilistic graphical model ( PGM ) in which we perform inference to learn about relational property and other relational processing tasks . Our contribution in this paper is threefold : First , we propose a probabilistic formulation for the relational learning problem defined by Definition 1.1 . Second , we encapsulate the relational learning problem with a PGM in which we perform learning and inference . Third , we propose an efficient and effective learning algorithm that can be trained end-to-end and completely unsupervised . ∗The author completed this research while working at ExxonMobil . Corresponding author e-mail : liu.kuanghung @ gmail.com 2 PROBLEM DEFINITION . We focus on a canonical form of the relational learning problem where we observed a paired dataset X = { ( a ( i ) , b ( i ) ) | i∈ [ 1 .. N ] } consisting of N i.i.d . samples generated from a joint distribution p ( a∈A , b∈B ) . 
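As a concrete instance of such a paired dataset, one of the reviews included later in this dump mentions a rotated-digit setup: b is a rotated copy of a, the rotation angle plays the role of the relational property, and the digit identity is an absolute property. The snippet below builds pairs of that kind; the specific angles and the use of scipy's rotate are illustrative assumptions, not details taken from the paper.

```python
# Illustrative construction of a paired dataset {(a_i, b_i)}: b_i is a rotated copy of
# a_i, so the relational property z is the rotation angle and the digit identity is an
# absolute property. Note that z itself is never stored or observed.
import numpy as np
from scipy.ndimage import rotate

def make_pairs(images, angles, rng):
    pairs = []
    for a in images:
        z = rng.choice(angles)                 # latent relational property (unobserved)
        b = rotate(a, angle=z, reshape=False)  # same content, relationship applied
        pairs.append((a, b))
    return pairs

# pairs = make_pairs(digit_images, angles=[0, 45, 90, 135], rng=np.random.default_rng(0))
```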
We dissect the information in X into absolute property and relational property where absolute property represents specific features that describe individual a and b , and relational property represents the relationship between a and b irrespective of their absolute property . In this work , we interpret the absolute property of a and b as any information that characterizes ( even if only partially ) the marginal distribution p ( a ) and p ( b ) . We propose to represent the relational property as a latent random variable ( r.v . ) z that satisfies the following constraints : ( i ) p ( a , z ) = p ( a ) p ( z ) , ( ii ) p ( b , z ) = p ( b ) p ( z ) , ( iii ) p ( a , z | b ) 6= p ( a | b ) p ( z | b ) , ( iv ) p ( b , z | a ) 6= p ( b | a ) p ( z | a ) , ( 1 ) where in Eq . 1 ( i ) and 1 ( ii ) we interpret the specification of relational property in Definition 1.1— learning relationships irrespective of the absolute properties—as meaning statistical independence , while in Eq . 1 ( iii ) and 1 ( iv ) we ensure r.v . z contains relevant ( relationship ) information that further informs a and b about one another , i.e. , H ( a | b , z ) < H ( a | b ) , H ( b | a , z ) < H ( b | a ) where H ( · | · ) is the conditional entropy . It is easy to see that the following conditions are necessary for r.v . z to exist : ( 1 ) H ( b | a ) > 0 and H ( a | b ) > 0 , i.e. , a and b can not be fully determined by each other ; ( 2 ) r.v . a , b , z are not mutually independent , i.e. , p ( a , b , z ) 6= p ( a ) p ( b ) p ( z ) . Our goal for relational learning is to learn about relational property z that satisfies Eq . 1 in a completely unsupervised fashion . A motivating example for Eq . 1 is provided in Appendix A.1 . In addition , we are interested in two related relational processing tasks : relational discrimination and relational mapping defined as ( VandenBos & APA , 2007 ) : Definition 2.1 ( Relational discrimination in condition ) . A discrimination based on the relationship between or among stimuli rather than on absolute features of the stimuli . Definition 2.2 ( Relational mapping ) . The ability to apply what one knows about one set of elements to a different set of elements . Relational discrimination allows us to differentiate ( a ( i ) , b ( i ) ) from ( a ( j ) , b ( j ) ) based on their relational properties . And relational mapping allows us to apply the relational property of ( a ( i ) , b ( i ) ) to a different set of data , for example , deduce that b ( j ) is related to a ( j ) in the same way that b ( i ) is related to a ( i ) . 3 METHOD . Learning and inference relational property z that satisfies all four constraints in Eq . 1 is a challenging problem due to the hard independence constraints in Eq . 1 ( i ) and 1 ( ii ) . To overcome this challenge , we first introduce VRL as a tractable learning method that satisfies 3 ( out of 4 ) constraints in Eq . 1— Eq . 1 ( i ) , 1 ( iii ) , 1 ( iv ) . We then discuss VRL ’ s unique optimization challenges , which are partially attributable to its relaxation of the independence requirement in Eq . 1 ( ii ) . 3.1 VARIATIONAL RELATIONAL LEARNING . The proposed VRL method consists of two parts : first , we encapsulate the relational learning problem with a PGM , called VRL-PGM ; we then formulate various relational processing tasks as performing inference and learning in VRL-PGM . The VRL-PGM model , shown in Fig . 
1 , samples data a , z , and b from parametric families of distributions—pθ ( a ) , pθ ( z ) , pθ ( b |a , z ) —that are differentiable almost everywhere with respect to ( w.r.t . ) a , z , and θ . In practice , we observe only a set of independent realizations { ( a ( i ) , b ( i ) ) | i ∈ [ 1 .. N ] } while the true parameter θ∗ and the corresponding latent variables z ( i ) are unobserved . A well-known property of the PGM shown in Fig . 1 is that r.v . a and z are independent with no variables observed , but not conditionally independent when b is observed , i.e. , pθ ( a , z ) = pθ ( a ) pθ ( z ) , pθ ( a , z | b ) ≠ pθ ( a | b ) pθ ( z | b ) ( Bishop , 2006 ) . Consequently , VRL-PGM can be viewed as a parametric relational learning model that satisfies 3 ( out of 4 ) constraints in Eq . 1—Eq . 1 ( i ) , 1 ( iii ) , 1 ( iv ) ( note that Eq . 1 ( iv ) is trivially satisfied in VRL-PGM ) . Further discussion on the connection between VRL-PGM and the relational learning problem is provided in Appendix A.2 . Having established VRL-PGM , our primary learning objective is to approximate the unknown true likelihood function pθ ( b | a , z ) and posterior pθ ( z | a , b ) . Learning pθ ( z | a , b ) provides us with a way to infer ( a ( i ) , b ( i ) ) 's relational property z ( i ) ; moreover , it serves as a basis for performing relational discrimination , where we compare relational properties between different pairs of data . Learning pθ ( b | a , z ) allows us to perform relational mapping , where we use the relational property of ( a ( i ) , b ( i ) ) to map a ( j ) to b ( j ) , i.e. , b ( j ) ∼ pθ ( b | a ( j ) , z ( i ) ) where z ( i ) ∼ pθ ( z | a ( i ) , b ( i ) ) . We estimate the parameters of pθ ( b |a , z ) by following the maximum-likelihood ( ML ) principle , and approximate the true posterior pθ ( z | a , b ) with a variational Bayesian approach . More specifically , we use a variational distribution qφ ( z | a , b ) , parameterized by φ , to approximate the unknown ( and often intractable ) true posterior . Both θ and φ are learned through maximizing a variational lower bound , L ( θ , φ ; a ( i ) , b ( i ) ) ( abbreviated as L ( i ) ) , for the conditional log-likelihood log pθ ( b ( i ) | a ( i ) ) ( the derivation is provided in Appendix C ) : $\mathcal{L}^{(i)} = \mathbb{E}_{q_\phi ( z \mid a^{(i)} , b^{(i)} ) } \big [ \log p_\theta ( b^{(i)} \mid a^{(i)} , z ) + \log p_\theta ( z ) - \log q_\phi ( z \mid a^{(i)} , b^{(i)} ) \big ]$ . ( 2 ) Recall that learning z independent of a is central to our relational learning goal . While this independence assumption is built into VRL-PGM , the learning objective L ( i ) does not explicitly force z to be independent of a nor penalize learning a dependent z . In practice , there may be numerous reasons that could break this independence assumption , e.g. , insufficient training data , failure to reach the global optimum , non-identifiability of the model , etc. , and it may be desirable to explicitly enforce independence between z and a . One way to achieve this is to introduce a non-positive function that measures the dependency between a and z , with its maximum attained when they are independent . For example , we can append the negative mutual information between z and a , $-I ( z ; a ) = -D_{\mathrm{KL}} ( p_\theta ( z , a ) \,\|\, p_\theta ( z ) p_\theta ( a ) )$ , to L ( i ) : $\mathcal{L}^{(i)} = \mathbb{E}_{q_\phi ( z \mid a^{(i)} , b^{(i)} ) } \big [ \log p_\theta ( b^{(i)} \mid a^{(i)} , z ) + \log p_\theta ( z ) - \log q_\phi ( z \mid a^{(i)} , b^{(i)} ) \big ] - I ( z ; a )$ . ( 3 )
Since I ( z ; a ) ≥ 0 and I ( z ; a ) = 0 if and only if z and a are independent , the addition of −I ( z ; a ) to L ( i ) not only maintains the validity of the lower bound , but also retains its quality ( z and a are independent in VRL-PGM ) . | This paper proposes a model to infer the relationship between multiple instances in a dataset by inferring a latent variable . The authors accomplish this by defining an optimization problem that optimizes the ELBO of the proposed graphical model . The paper presents a nice solution to some of the identification issues that can arise when inferring the latent variable , in particular the so-called "information shortcut" when the model overfits to only learning the "absolute" property of the dataset , rather than inferring the shared latent traits . | SP:e3a0b2cb1a7e2ed24eb413cbd4545cfcddc30a69 |
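A minimal sketch of optimizing the lower bound in Eq. (2) in a VAE-like fashion is shown below, assuming PyTorch, a Gaussian q_phi(z|a,b) with the reparameterization trick, and a standard normal prior p(z). The encoder and decoder modules are hypothetical placeholders, and the mutual-information penalty -I(z;a) of Eq. (3) is omitted because the excerpt does not specify how it is estimated.

```python
# Minimal VAE-style sketch of the lower bound in Eq. (2): q_phi(z|a,b) is a diagonal
# Gaussian, the prior p(z) is standard normal, and p_theta(b|a,z) is a decoder with a
# log_prob method. Encoder/decoder modules are hypothetical; the -I(z;a) penalty of
# Eq. (3) would need a separate estimator and is not included here.
import torch

def vrl_elbo(encoder, decoder, a, b):
    mu, logvar = encoder(a, b)                              # parameters of q_phi(z | a, b)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()    # reparameterized sample of z
    log_lik = decoder.log_prob(b, a, z)                     # log p_theta(b | a, z)
    # E_q[log p(z) - log q(z|a,b)] = -KL(q || p) for a standard normal prior:
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=-1)
    return (log_lik - kl).mean()                            # maximize this lower bound
```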
Relational Learning with Variational Bayes | 1 INTRODUCTION . American Psychological Association defines relational learning as ( VandenBos & APA , 2007 ) : Definition 1.1 ( Relational learning ) . Learning to differentiate among stimuli on the basis of relational properties rather than absolute properties . In other words , relational learning refers to the ability to recognize and respond to relationship ( called relational property ) among objects irrespective of the nature of those objects ( called absolute property ) . For example ( attributed to Doumas & Hummel ( 2013 ) ) , how do we come to understand that two circles are the same-shape in the same way that two squares are ? In this example , “ same-shape ” is the relational property and object shape is the absolute property . Relational learning has long been recognized as a hallmark of human cognition with strong implications for both human-like learning capabilities and generalization capacity ( Biederman , 1987 ; Medin et al. , 1993 ; Gentner , 2003 ; Penn et al. , 2008 ; Holyoak , 2012 ; Gentner & Smith , 2012 ) . We refer the interested readers to the provided references for a comprehensive discussion on this subject . Contemporaneously , the research on learning data relationships—also commonly called “ relational learning ” —has flourished in the machine learning community where the overarching goal is learning in a context where there may be relationships between learning examples , or where these examples may have a complex internal structure ( i.e. , consist of multiple components and there may be relationships between these components ) ( Getoor & Taskar , 2007 ; De Raedt et al. , 2016 ) . We argue that the key difference between the two “ relational learning ” definitions and their learning objectives is that Definition 1.1 takes the relationship learning problem one step further by requiring the data relationships be learned only on the basis of relational properties rather than absolute properties . To the best of our knowledge , this important distinction—learning relationships irrespective of the absolute properties—has not been rigorously studied in the unsupervised learning community , where most existing methods either encourage or do not constrain the relationships learning through absolute properties . In this work , we propose an unsupervised learning method—variational relational learning ( VRL ) — for addressing the relational learning problem as defined by Definition 1.1 . At its core , VRL encapsulates the relational learning problem with a probabilistic graphical model ( PGM ) in which we perform inference to learn about relational property and other relational processing tasks . Our contribution in this paper is threefold : First , we propose a probabilistic formulation for the relational learning problem defined by Definition 1.1 . Second , we encapsulate the relational learning problem with a PGM in which we perform learning and inference . Third , we propose an efficient and effective learning algorithm that can be trained end-to-end and completely unsupervised . ∗The author completed this research while working at ExxonMobil . Corresponding author e-mail : liu.kuanghung @ gmail.com 2 PROBLEM DEFINITION . We focus on a canonical form of the relational learning problem where we observed a paired dataset X = { ( a ( i ) , b ( i ) ) | i∈ [ 1 .. N ] } consisting of N i.i.d . samples generated from a joint distribution p ( a∈A , b∈B ) . 
We dissect the information in X into absolute property and relational property where absolute property represents specific features that describe individual a and b , and relational property represents the relationship between a and b irrespective of their absolute property . In this work , we interpret the absolute property of a and b as any information that characterizes ( even if only partially ) the marginal distribution p ( a ) and p ( b ) . We propose to represent the relational property as a latent random variable ( r.v . ) z that satisfies the following constraints : ( i ) p ( a , z ) = p ( a ) p ( z ) , ( ii ) p ( b , z ) = p ( b ) p ( z ) , ( iii ) p ( a , z | b ) 6= p ( a | b ) p ( z | b ) , ( iv ) p ( b , z | a ) 6= p ( b | a ) p ( z | a ) , ( 1 ) where in Eq . 1 ( i ) and 1 ( ii ) we interpret the specification of relational property in Definition 1.1— learning relationships irrespective of the absolute properties—as meaning statistical independence , while in Eq . 1 ( iii ) and 1 ( iv ) we ensure r.v . z contains relevant ( relationship ) information that further informs a and b about one another , i.e. , H ( a | b , z ) < H ( a | b ) , H ( b | a , z ) < H ( b | a ) where H ( · | · ) is the conditional entropy . It is easy to see that the following conditions are necessary for r.v . z to exist : ( 1 ) H ( b | a ) > 0 and H ( a | b ) > 0 , i.e. , a and b can not be fully determined by each other ; ( 2 ) r.v . a , b , z are not mutually independent , i.e. , p ( a , b , z ) 6= p ( a ) p ( b ) p ( z ) . Our goal for relational learning is to learn about relational property z that satisfies Eq . 1 in a completely unsupervised fashion . A motivating example for Eq . 1 is provided in Appendix A.1 . In addition , we are interested in two related relational processing tasks : relational discrimination and relational mapping defined as ( VandenBos & APA , 2007 ) : Definition 2.1 ( Relational discrimination in condition ) . A discrimination based on the relationship between or among stimuli rather than on absolute features of the stimuli . Definition 2.2 ( Relational mapping ) . The ability to apply what one knows about one set of elements to a different set of elements . Relational discrimination allows us to differentiate ( a ( i ) , b ( i ) ) from ( a ( j ) , b ( j ) ) based on their relational properties . And relational mapping allows us to apply the relational property of ( a ( i ) , b ( i ) ) to a different set of data , for example , deduce that b ( j ) is related to a ( j ) in the same way that b ( i ) is related to a ( i ) . 3 METHOD . Learning and inference relational property z that satisfies all four constraints in Eq . 1 is a challenging problem due to the hard independence constraints in Eq . 1 ( i ) and 1 ( ii ) . To overcome this challenge , we first introduce VRL as a tractable learning method that satisfies 3 ( out of 4 ) constraints in Eq . 1— Eq . 1 ( i ) , 1 ( iii ) , 1 ( iv ) . We then discuss VRL ’ s unique optimization challenges , which are partially attributable to its relaxation of the independence requirement in Eq . 1 ( ii ) . 3.1 VARIATIONAL RELATIONAL LEARNING . The proposed VRL method consists of two parts : first , we encapsulate the relational learning problem with a PGM , called VRL-PGM ; we then formulate various relational processing tasks as performing inference and learning in VRL-PGM . The VRL-PGM model , shown in Fig . 
1 , samples data a , z , and b from parametric families of distributions—pθ ( a ) , pθ ( z ) , pθ ( b |a , z ) —that are differentiable almost everywhere with respect to ( w.r.t . ) a , z , and θ . In practice , we observe only a set of independent realizations { ( a ( i ) , b ( i ) ) | i ∈ [ 1 .. N ] } while the true parameter θ∗ and the corresponding latent variables z ( i ) are unobserved . A well-known property of the PGM shown in Fig . 1 is that r.v . a and z are independent with no variables observed , but not conditionally independent when b is observed , i.e. , pθ ( a , z ) = pθ ( a ) pθ ( z ) , pθ ( a , z | b ) 6= pθ ( a | b ) pθ ( z | b ) ( Bishop , 2006 ) . Consequently , VRL-PGM can be viewed as a parametric relational learning model that satisfies 3 ( out of 4 ) constraints in Eq . 1—Eq . 1 ( i ) , 1 ( iii ) , 1 ( iv ) ( note that Eq . 1 ( iv ) is trivially satisfied in VRL-PGM ) . Further discussions on the connection between VRL-PGM and the relational learning problem is provided in Appendix A.2 . Having established VRL-PGM , our primary learning objective is to approximate the unknown true likelihood function pθ ( b | a , z ) and posterior pθ ( z | a , b ) . Learning pθ ( z | a , b ) provides us a way to infer ( a ( i ) , b ( i ) ) ’ s relational property z ( i ) ; moreover , it serves as a basis for performing relational discrimination where we compare relational properties between different pairs of data . Learning pθ ( b | a , z ) allows us to perform relational mapping where we use the relational property of ( a ( i ) , b ( i ) ) to map a ( j ) to b ( j ) , i.e. , b ( j ) ∼ pθ ( b | a ( j ) , z ( i ) ) where z ( i ) ∼ pθ ( z | a ( i ) , b ( i ) ) . We estimate the parameter for pθ ( b |a , z ) by following the maximum-likelihood ( ML ) principle , and approximate the true posterior pθ ( z | a , b ) with variational Bayesian approach . More specifically , we use a variational distribution qφ ( z | a , b ) , parameterized by φ , to approximate the unknown ( and often intractable ) true posterior . Both θ and φ are learned through maximizing a variational lower bound , L ( θ , φ ; a ( i ) , b ( i ) ) ( abbreviated as L ( i ) ) , for the conditional log-likelihood log pθ ( b ( i ) | a ( i ) ) ( derivation is provided in Appendix C ) : L ( i ) = Eqφ ( z|a ( i ) , b ( i ) ) [ log pθ ( b ( i ) |a ( i ) , z ) + log pθ ( z ) − log qφ ( z|a ( i ) , b ( i ) ) ] . ( 2 ) Recall that learning z independent of a is central to our relational learning goal . While this independence assumption is built into VRL-PGM , the learning objective L ( i ) does not explicitly force z to be independent of a nor penalize learning a dependent z . In practice , there may be numerous reasons that could break this independence assumption , e.g. , insufficient training data , failure to reach the global optimum , non-identifiability of the model , etc. , and it may be desirable to explicitly enforce independence between z and a . One way to achieve this is to introduce a non-positive function that measures the dependency between a and z with maximum attained when they are independent . For example , we can append the negative mutual information between z and a , −I ( z ; a ) = −DKL ( pθ ( z , a ) ‖ pθ ( z ) pθ ( a ) ) , to L ( i ) : L ( i ) = Eqφ ( z|a ( i ) , b ( i ) ) [ log pθ ( b ( i ) |a ( i ) , z ) + log pθ ( z ) − log qφ ( z|a ( i ) , b ( i ) ) ] − I ( z ; a ) . 
( 3 ) Since I ( z ; a ) ≥ 0 and I ( z ; a ) = 0 if and only if z and a are independent , the addition of −I ( z ; a ) to L ( i ) not only maintains the validity of the lower bound , but also retains its quality ( z and a are independent in VRL-PGM ) . | The paper proposes variational relational learning by learning relations between two inputs via variational inference on a probabilistic graphical model ( PGM ) . The PGM that they use factors as p ( a ) p ( z ) p ( b|a , z ) where a , b are the two inputs and z is the supposed relationship between them . The example shown in the experiments is rotated MNIST , where b is a rotated version of a , and z should encode the degree of rotation . The paper learns both the forward network and an inference network in a VAE-like approach ; the ELBO derivations appear correct to me . | SP:e3a0b2cb1a7e2ed24eb413cbd4545cfcddc30a69 |
Generative Fairness Teaching | 1 INTRODUCTION . Automated learning systems are ubiquitous across a wide variety of sectors . Such systems can be used in many sensitive environments to make important and even life-changing decisions . Traditionally , decisions are made primary by human and the basis are usually highly regulated . For example in the Equal Credit Opportunity ACts ( ECOA ) , incorporating attributes such as race , color , or sex into credit lending decisions are illegal in United States ( Mehrabi et al. , 2019 ) . As more and more of this process nowadays is implemented by automated learning systems instead , algorithmic fairness becomes a topic of paramount importance . Lending ( Hardt et al. , 2016 ) , hiring ( Alder & Gilbert , 2006 ) , and educational rights ( Kusner et al. , 2017 ) are examples where gender or race biased decisions from automatic systems can have serious consequences . Even for more mechanical tasks such as image classification ( Buolamwini & Gebru , 2018 ) , image captioning ( Hendricks et al. , 2018 ) , word embedding learning ( Garg et al. , 2018 ; Bolukbasi et al. , 2016 ) , and named co-reference resolution ( Zhao et al. , 2018 ) , algorithmic discrimination can be a major concern . As the society relies more and more on such automated systems , algorithmic fairness becomes a pressing issue . Although much of the focus of developing automated learning systems has been on the performance , it is important to take fairness into consideration while designing and deploying the systems . Unfortunately , state-of-the-art automated systems are usually data driven , which makes it more likely to inherit or even amplify the biases rooted in a dataset . This is an especially serious issue for deep learning and gradient based models , which can easily fit itself into the biased patterns of the dataset . For example , in a dataset with very few female candidates being labeled as hired in a job candidate prediction task , models might choose to give unfavorable predictions to qualified female candidates due to their under-representations in the training data . If deployed , such a biased predictor will deprive minority groups from acquiring the same opportunities as the others . Much of the work in the domain of machine learning fairness has been focusing exclusively on leveraging knowledge from samples in a dataset . One straightforward way is to adjust the distributions of the training data through pre-processing . In the job candidate prediction example above , this means that we can either down-sample the majority class or up-sample the minority ones ( Kamiran & Calders , 2012 ) . Another family of fairness methods aims at matching the model performance on the majority class to that of the minority ones during training by using one of the fairness criteria ( Gajane & Pechenizkiy , 2017 ) . Some examples of such methods includes adding regularizations ( Kamishima et al. , 2012 ) or applying adversarial learning ( Madras et al. , 2018a ) . One issue with these approaches is that in many cases minority groups might be heavily under-represented in the dataset . Model training with fairness constraints will typically give up much of the performance ad- vantages ( e.g. , prediction accuracies ) in favor of the fairness metrics . Methods concentrate on solely on a dataset will often find themselves difficult to maintain a good performance - fairness trade-off . One way to make models learn beyond the dataset is to take advantage of causal reasoning ( Pearl et al. 
, 2009 ) , which borrows knowledge from external structures often formulated as a causal graph . Counterfactual Fairness ( Kusner et al. , 2017 ) and Causal Fairness ( Kilbertus et al. , 2017 ) are examples of such approaches . One unique characteristic of causal fairness methods is that they need to be built on a causal graph . And because those metrics are usually optimized and evaluated with their own objective , which involves a causal graph , it is not clear how that added knowledge can be used to benefit other more commonly used fairness criteria such as Demographic Parity and Equalized Odds . Although it is possible to create causal structures that subsume conditional independencies in order to benefit DP or EO , we would need that structural information to be known in advance and we would have to derive one such structure for each metric we consider . This is , we believe , a significant limitation of current causal methods , which we aim to improve . In this paper , we propose a generative approach for fairness training that is capable of leveraging both real data and " counterfactual data " generated from a causal graph . The counterfactual data is generated in a way that alters the sensitive attribute while keeping other latent factors unchanged . We formulate such a generative model using a novel combination of adversarial training with mutual information regularization . Next , the two types of data are organized by an architecture called the teacher , which dynamically determines the proportion of real and counterfactual samples to train a particular model . Our model , the Generative Fairness Teacher ( GFT ) , can be used to improve an arbitrary fairness criterion based on need . Our experimental results indicate that we are able to take advantage of the counterfactual generative model to achieve significantly better model fairness on a wide range of datasets and models ; we are able to improve upon models with different levels of bias . 2 BACKGROUND . We provide a basic overview of the foundations of our method . Here we assume X to be the input features and A to be the set of sensitive features . We define Y to be the favorable outcome and Ŷ to be the model 's prediction of the favorable outcome given the features . The core idea of fairness in machine learning is to distribute those favorable outcomes evenly across the sensitive groups A . 2.1 FORMAL FAIRNESS CRITERIA . There is much existing work on fairness that studies criteria for achieving algorithmic fairness . A straightforward way to define fairness is Demographic Parity Madras et al . ( 2018a ) . In Demographic Parity , the chance of allocating the favorable outcome Ŷ is the same across sensitive groups A . Under that definition , the predictive variable Ŷ is independent of A , making predictions free from discrimination against sensitive groups . Note that even though A takes the form of a binary variable , we can easily extend the definition to the case of multiple values . Definition 1 Demographic Parity $P ( \hat{Y} \mid X = x , A = a ) = P ( \hat{Y} \mid X = x , A = a' )$ ( 1 ) Other fairness criteria that are built on input features include Fairness Through Unawareness Gajane & Pechenizkiy ( 2017 ) and Individual Fairness Kusner et al . ( 2017 ) . More recently , Hardt et al .
argued that criteria that only takes into account sample features making it difficult for the algorithms to allocate favorable outcomes to the actual qualified samples in both the minority and the majority groups . Such an observation leading to a new fairness criteria called Equalized Odds ( and its special case Equal Opportunity ) Hardt et al . ( 2016 ) , where the fairness statement includes a condition on target variable Y . Definition 2 Equalized Odds P ( Ŷ = 1|X = x , A = a , Y = y ) = P ( Ŷ = 1|X = x , A = a′ , Y = y ) ( 2 ) 3 CAUSAL MODELS AND COUNTERFACTUAL EXAMPLES . A causal model Pearl et al . ( 2000 ) is defined over a triple ( U , V , F ) where V is a set of observed variables and U being a set of latent background variables . F is defined to be a set of equations for each variable in V , Vi = fi ( pai , Upai ) . Here pai refers to the parent of i in a causal graph . One importance concept in causal reasoning is intervention , in which case we substitute the variable of certain equation vi = v. We define a counterfactual example to be a synthesized sample generated from an existing data X by manipulating its sensitive feature from a to a′ . Here we assume that both the real sample X and the counterfactual sample X̂A←a′ are generated from a latent code U . Definition 3 Counterfactual Example X̂A←a′ ( U ) |X , a ( 3 ) 3.1 COMMON TECHNIQUES FOR FAIRNESS . Depending on when the fairness criteria are applied , methods for achieving fairness can be categorized as pre-processing , in-processing and post-processing Mehrabi et al . ( 2019 ) . In-processing Techniques . In-processing techniques apply fairness criteria during the training . Common techniques including fairness regularizer Kamishima et al . ( 2012 ) and adversarial training Madras et al . ( 2018a ) . Other methods fall into this category including the reduction based method Agarwal et al . ( 2018 ) and the more traditional discrimination approach in data miningHajian & Domingo-Ferrer ( 2012 ) . Our implementation applies in-processing techniques although our framework does not deal with in-processing methods directly . Pre-processing Techniques . Pre-processing methods applies to the models before the actual training happens . Methods fall into this category are almost exclusively data processing techniques that aims at making the dataset free from biases . Re-sampling and re-weighting are two common techniques of pre-processing techniques for fairness . Calmon et al . ( 2017 ) ; Kamiran & Calders ( 2012 ) ; Agarwal et al . ( 2018 ) . Other techniques include that repairs biases in a database Salimi et al . ( 2019 ) . Our method is closely related to the pre-processing techniques because from the perspective of the student model our teacher model can be viewed as a data pre-processor . Post-processing Techniques . When fairness adjustments are applied after the training is finished , techniques are called post-processing methods . Post-processing methods can be used to adapt models with all kinds of biases levels into a fair model Madras et al . ( 2018b ) . Other recently proposed including the method to model fairness as a score transformation problem Wei et al . ( 2019 ) and methods enforces Independence between sensitive features and model outcomes through Wasserstein-1 distances Jiang et al . ( 2020 ) Our approach is closely related to the post-processing technique as our teacher model can work with an arbitrarily biased student model . 4 GENERATIVE FAIRNESS TEACHING . 
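Before the teaching framework below, it may help to see how the two criteria above (Definitions 1 and 2) are typically checked empirically. The sketch below uses the standard marginal form of the criteria, comparing group-conditional prediction rates rather than conditioning on an individual x; it is a generic estimator, not the paper's evaluation code.

```python
# Generic empirical check of Definitions 1 and 2 above (not the paper's own code):
# demographic parity compares P(Yhat=1 | A=a) across groups; equalized odds compares
# P(Yhat=1 | A=a, Y=y) for each label y. `y_hat`, `y`, `a` are 0/1 numpy arrays.
import numpy as np

def demographic_parity_gap(y_hat, a):
    return abs(y_hat[a == 0].mean() - y_hat[a == 1].mean())

def equalized_odds_gaps(y_hat, y, a):
    gaps = {}
    for label in (0, 1):
        rate0 = y_hat[(a == 0) & (y == label)].mean()
        rate1 = y_hat[(a == 1) & (y == label)].mean()
        gaps[label] = abs(rate0 - rate1)
    return gaps  # gaps[1] is the true-positive-rate gap (equal opportunity)
```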
In this section , we propose a teaching framework for training a student model that is able to work with a wide range of fairness criteria . We first present the overview of our approach in section 4.1 . Then in section 4.2 , we elaborate a novel generative model that can create “ counterfactual examples ” . In section 4.4 we will show how to train such a teacher policy with the given student model and counterfactual generative model . 4.1 FAIRNESS TEACHING FRAMEWORK . Given a training dataset Dtrain = { ( X = xi , Y = yi , A = ai ) } |Dtrain|i=1 , where X and Y are observed features and label , respectively , and A is some sensitive attribute , we are interested in learning a predictive model pθ ( Y |X ) that is parameterized by θ , such that it maximizes the reward on the validation set : R ( θ ) = E ( x , y ) ∼Dvalid log pθ ( y|x ) − λfcFC ( pθ ( Y |X ) , Dvalid ) ( 4 ) Algorithm 1 Generative Teaching Procedure 1 : Input : initial student model pθ0 ( Y |X ) , counterfactual generator p̂ ( X̂ ) 2 : Input : dataset D , Teacher policy πψ 3 : for t← 1 to episode length T do 4 : Sample a minibatch D′ = { di = [ xi , yi , ai ] } Mi=1 ∼ D. 5 : Get counterfactual data D̂′ = { d̂i = [ x̂i ∼ p̂ ( ·|X = xi , A = ai ) , yi , a′i ] } for each di ∈ D′ . 6 : Get current student ’ s state s = S ( D′ , pθt−1 ( Y |X ) ) using Eq 11 . 7 : Obtain decision at ∼ πψ ( s ) , and get D̃ = { di ∈ D′|a ( i ) t = 0 } ⋃ { d̂i ∈ D̂′|a ( i ) t = 1 } . 8 : Update student : θt = θt−1 + η∇θ=θt−1 1|D̃| ∑ ( x , y ) ∈D̃ log pθ ( y|x ) 9 : end for 10 : Return : updated student model pθT ( Y |X ) where FC ( · , · ) stands for the evaluation metric under certain fairness requirement , such as equalized odds . This objective tries to balance between the generalization error and the fairness constraint , which is controlled by the hyperparameter λfc . In our teaching framework , we will teach a student predictive model pθ ( Y |X ) to minimize Eq 4 . Our teacher model πψ is responsible for providing proper data samples for the student at each step of optimization to achieve this goal . In most teaching frameworks , the teacher is only responsible for selecting proper samples from existing dataset . However , due to the potential bias in the dataset , such assumption is too limited to achieve the fairness requirement . Recall the definition of counterfactual example in Eq 3 . Given a tuple of ( U , X = x , A = a ) , changingA byA← a′ while keeping U fixed will also changeX . Thus the change to the predictive distribution p ( ŶA←a′ ( U ) |x , a ) depends on two aspects : 1 ) the predictive model pθ ( Y |X ) , and 2 ) a counterfactual generative model p̂ ( X̂ ) : = p ( X̂A←a′ ( U ) |X = x , A = a ) . Suppose we have the model p̂ ( X̂ ) ready , then it would be possible to regulate pθ ( Y |X ) by generate counterfactual samples during training . Given the teacher model πψ and the counterfactual generative model p̂ ( X̂ ) , we are ready to present our iterative teaching approach for learning pθ ( Y |X ) in algorithm 1 . At each teaching stage , the teacher will make binary decisions on using 1 ) the sample selection from the given dataset Dtrain , or 2 ) the counterfactual data ( X̂ , Y , A = a′ ) coming from data sample ( X , Y , A = a ) ∈ Dtrain and altered by p̂ ( X̂ ) . The student will then use the selected samples to perform gradient update of θ . In the following sections , we will present how such counterfactual generative model p̂ ( X̂ ) are learned through teaching policy πψ . 
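To make the teaching loop of Algorithm 1 above more concrete, the following is a minimal Python sketch of one episode. The student is a toy logistic model, and the counterfactual generator and the teacher policy are hypothetical placeholders (a random perturbation and a coin flip) standing in for the learned components p̂(X̂) and πψ described in Sections 4.2 and 4.4; the sketch only illustrates how real and counterfactual samples are mixed at each step.

import torch
import torch.nn as nn

torch.manual_seed(0)
d, M, T = 5, 32, 10                              # feature dim, minibatch size, episode length
student = nn.Sequential(nn.Linear(d, 1))         # p_theta(Y|X) as a logistic model
opt = torch.optim.SGD(student.parameters(), lr=0.1)
bce = nn.BCEWithLogitsLoss()

def counterfactual(x, a):
    # Placeholder for p̂(X̂): flip the sensitive attribute and slightly perturb
    # the features; the real generator is learned as described in Section 4.2.
    return x + 0.1 * torch.randn_like(x), 1 - a

def teacher_policy(state):
    # Placeholder for πψ: a coin flip per example; the paper instead trains
    # this policy against the reward in Eq. 4.
    return torch.bernoulli(torch.full((state.shape[0],), 0.5))

# Synthetic stand-in for D_train: (x, y, a) triples.
X = torch.randn(512, d)
Y = (X[:, 0] > 0).float()
A = torch.randint(0, 2, (512,)).float()

for t in range(T):
    idx = torch.randint(0, 512, (M,))
    x, y, a = X[idx], Y[idx], A[idx]
    x_cf, a_cf = counterfactual(x, a)                  # counterfactual minibatch, label y kept
    state = student(x).squeeze(-1).detach()            # a crude summary of the student's state
    act = teacher_policy(state)                        # action 1 -> use the counterfactual sample
    x_mix = torch.where(act.unsqueeze(-1).bool(), x_cf, x)
    loss = bce(student(x_mix).squeeze(-1), y)          # student gradient step on the mixed batch
    opt.zero_grad(); loss.backward(); opt.step()

The point the sketch tries to convey is that the teacher's decision is made per example and per step, so the proportion of counterfactual data can change as the student evolves.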
4.2 LEARNING COUNTERFACTUAL GENERATOR p̂(X̂). To learn the counterfactual data distribution, we first need an understanding of the empirical data distribution. In the next subsection, we present our latent variable modeling of the data distribution: | This paper describes a pre-processing method to reduce certain statistical disparities in the classifier obtained from the training data. The proposed approach involves learning a latent probability model that simulates the training data. The authors then manipulate the learned model to generate "counterfactual" samples that belong to the underrepresented demographic group. A more "fair" classifier is trained on the data mixed with the "counterfactual" samples. | SP:7ff567aac68a3492029136828f1b45dd7c358e8a |
Generative Fairness Teaching | 1 INTRODUCTION . Automated learning systems are ubiquitous across a wide variety of sectors . Such systems can be used in many sensitive environments to make important and even life-changing decisions . Traditionally , decisions are made primary by human and the basis are usually highly regulated . For example in the Equal Credit Opportunity ACts ( ECOA ) , incorporating attributes such as race , color , or sex into credit lending decisions are illegal in United States ( Mehrabi et al. , 2019 ) . As more and more of this process nowadays is implemented by automated learning systems instead , algorithmic fairness becomes a topic of paramount importance . Lending ( Hardt et al. , 2016 ) , hiring ( Alder & Gilbert , 2006 ) , and educational rights ( Kusner et al. , 2017 ) are examples where gender or race biased decisions from automatic systems can have serious consequences . Even for more mechanical tasks such as image classification ( Buolamwini & Gebru , 2018 ) , image captioning ( Hendricks et al. , 2018 ) , word embedding learning ( Garg et al. , 2018 ; Bolukbasi et al. , 2016 ) , and named co-reference resolution ( Zhao et al. , 2018 ) , algorithmic discrimination can be a major concern . As the society relies more and more on such automated systems , algorithmic fairness becomes a pressing issue . Although much of the focus of developing automated learning systems has been on the performance , it is important to take fairness into consideration while designing and deploying the systems . Unfortunately , state-of-the-art automated systems are usually data driven , which makes it more likely to inherit or even amplify the biases rooted in a dataset . This is an especially serious issue for deep learning and gradient based models , which can easily fit itself into the biased patterns of the dataset . For example , in a dataset with very few female candidates being labeled as hired in a job candidate prediction task , models might choose to give unfavorable predictions to qualified female candidates due to their under-representations in the training data . If deployed , such a biased predictor will deprive minority groups from acquiring the same opportunities as the others . Much of the work in the domain of machine learning fairness has been focusing exclusively on leveraging knowledge from samples in a dataset . One straightforward way is to adjust the distributions of the training data through pre-processing . In the job candidate prediction example above , this means that we can either down-sample the majority class or up-sample the minority ones ( Kamiran & Calders , 2012 ) . Another family of fairness methods aims at matching the model performance on the majority class to that of the minority ones during training by using one of the fairness criteria ( Gajane & Pechenizkiy , 2017 ) . Some examples of such methods includes adding regularizations ( Kamishima et al. , 2012 ) or applying adversarial learning ( Madras et al. , 2018a ) . One issue with these approaches is that in many cases minority groups might be heavily under-represented in the dataset . Model training with fairness constraints will typically give up much of the performance ad- vantages ( e.g. , prediction accuracies ) in favor of the fairness metrics . Methods concentrate on solely on a dataset will often find themselves difficult to maintain a good performance - fairness trade-off . One way to make models learn beyond the dataset is to take advantage of causal reasoning ( Pearl et al. 
, 2009 ) , which borrows knowledge from external structures often formulated as a causal graph . Counterfactual Fairness ( Kusner et al. , 2017 ) and Causal Fairness ( Kilbertus et al. , 2017 ) are examples of such approaches . One unique characteristic of causal fairness is the fact that they need to be built based on a causal graph . And because those metrics are usually optimized and evaluated their own objective , which involves a causal graph , it ’ s not clear how that added knowledge can be used to benefit other more commonly used fairness criteria such as Demographic Parity and Equalized Odds . Although it is possible to create causal structures that subsume conditional independencies in order to benefit DP or EO , we will need those structure information to be known in advance and we will have to derive one such structure for each metric we find . This is , what we believed , a significant limitation of the current causal methods which we aim to improve . In this paper , we propose a generative approach for fairness training that is capable of leveraging both real data and “ counterfactual data ” generated from a causal graph . The counterfactual data is generated in a way that alters the sensitive attribute while keeping other latent factors unchanged . We formulate such generative model using a novel combination of adversarial training with mutual information regularization . Next , the two types of data are organized by an architecture called the teacher , which dynamically determines the proportion of real and counterfactual samples to train a particular model . Our model - Generative Fairness Teacher ( GFT ) can be used to improve an arbitrary fairness criteria based on need . Our experimental results indicate that we are able to take advantage of the counterfactual generative model and make it able to achieve a significantly better model fairness on a wide range of datasets across models . we are able to improve upon models with different levels of biases . 2 BACKGROUND . We provides a basic overview for the foundations of our method . Here we assume X to be the input features , while A being the set of sensitive features . We define Y the be favorable outcome and Ŷ to be the models ’ prediction of the favorable outcome given the features . The core idea of Fairness in machine learning is to distribute those favorable outcomes evenly across each of the sensitive group A . 2.1 FORMAL FAIRNESS CRITERIA . There has been many existing work on fairness focusing on studying criteria to achieve algorithmic fairness . A straightforward way to define fairness is Demographic Parity Madras et al . ( 2018a ) . In Demographic Parity , the chances of allocating the favorable outcomes Ŷ is the same across sensitive groupsA . Under that definition , the predictive variable Ŷ is independent withA , making predictions free from discrimination against sensitive groups . Note that even thoughA takes the form of a binary variable , we can easily extend the definition into the case of multiple values . Definition 1 Demographic Parity P ( Ŷ |X = x , A = a ) = P ( Ŷ |X = x , A = a′ ) ( 1 ) Other fairness criteria that are built based on input features includes include Fairness Through Unawareness Gajane & Pechenizkiy ( 2017 ) , and Individual Fairness Kusner et al . ( 2017 ) . More recently , Hardt et al . 
argued that criteria that only takes into account sample features making it difficult for the algorithms to allocate favorable outcomes to the actual qualified samples in both the minority and the majority groups . Such an observation leading to a new fairness criteria called Equalized Odds ( and its special case Equal Opportunity ) Hardt et al . ( 2016 ) , where the fairness statement includes a condition on target variable Y . Definition 2 Equalized Odds P ( Ŷ = 1|X = x , A = a , Y = y ) = P ( Ŷ = 1|X = x , A = a′ , Y = y ) ( 2 ) 3 CAUSAL MODELS AND COUNTERFACTUAL EXAMPLES . A causal model Pearl et al . ( 2000 ) is defined over a triple ( U , V , F ) where V is a set of observed variables and U being a set of latent background variables . F is defined to be a set of equations for each variable in V , Vi = fi ( pai , Upai ) . Here pai refers to the parent of i in a causal graph . One importance concept in causal reasoning is intervention , in which case we substitute the variable of certain equation vi = v. We define a counterfactual example to be a synthesized sample generated from an existing data X by manipulating its sensitive feature from a to a′ . Here we assume that both the real sample X and the counterfactual sample X̂A←a′ are generated from a latent code U . Definition 3 Counterfactual Example X̂A←a′ ( U ) |X , a ( 3 ) 3.1 COMMON TECHNIQUES FOR FAIRNESS . Depending on when the fairness criteria are applied , methods for achieving fairness can be categorized as pre-processing , in-processing and post-processing Mehrabi et al . ( 2019 ) . In-processing Techniques . In-processing techniques apply fairness criteria during the training . Common techniques including fairness regularizer Kamishima et al . ( 2012 ) and adversarial training Madras et al . ( 2018a ) . Other methods fall into this category including the reduction based method Agarwal et al . ( 2018 ) and the more traditional discrimination approach in data miningHajian & Domingo-Ferrer ( 2012 ) . Our implementation applies in-processing techniques although our framework does not deal with in-processing methods directly . Pre-processing Techniques . Pre-processing methods applies to the models before the actual training happens . Methods fall into this category are almost exclusively data processing techniques that aims at making the dataset free from biases . Re-sampling and re-weighting are two common techniques of pre-processing techniques for fairness . Calmon et al . ( 2017 ) ; Kamiran & Calders ( 2012 ) ; Agarwal et al . ( 2018 ) . Other techniques include that repairs biases in a database Salimi et al . ( 2019 ) . Our method is closely related to the pre-processing techniques because from the perspective of the student model our teacher model can be viewed as a data pre-processor . Post-processing Techniques . When fairness adjustments are applied after the training is finished , techniques are called post-processing methods . Post-processing methods can be used to adapt models with all kinds of biases levels into a fair model Madras et al . ( 2018b ) . Other recently proposed including the method to model fairness as a score transformation problem Wei et al . ( 2019 ) and methods enforces Independence between sensitive features and model outcomes through Wasserstein-1 distances Jiang et al . ( 2020 ) Our approach is closely related to the post-processing technique as our teacher model can work with an arbitrarily biased student model . 4 GENERATIVE FAIRNESS TEACHING . 
In this section , we propose a teaching framework for training a student model that is able to work with a wide range of fairness criteria . We first present the overview of our approach in section 4.1 . Then in section 4.2 , we elaborate a novel generative model that can create “ counterfactual examples ” . In section 4.4 we will show how to train such a teacher policy with the given student model and counterfactual generative model . 4.1 FAIRNESS TEACHING FRAMEWORK . Given a training dataset Dtrain = { ( X = xi , Y = yi , A = ai ) } |Dtrain|i=1 , where X and Y are observed features and label , respectively , and A is some sensitive attribute , we are interested in learning a predictive model pθ ( Y |X ) that is parameterized by θ , such that it maximizes the reward on the validation set : R ( θ ) = E ( x , y ) ∼Dvalid log pθ ( y|x ) − λfcFC ( pθ ( Y |X ) , Dvalid ) ( 4 ) Algorithm 1 Generative Teaching Procedure 1 : Input : initial student model pθ0 ( Y |X ) , counterfactual generator p̂ ( X̂ ) 2 : Input : dataset D , Teacher policy πψ 3 : for t← 1 to episode length T do 4 : Sample a minibatch D′ = { di = [ xi , yi , ai ] } Mi=1 ∼ D. 5 : Get counterfactual data D̂′ = { d̂i = [ x̂i ∼ p̂ ( ·|X = xi , A = ai ) , yi , a′i ] } for each di ∈ D′ . 6 : Get current student ’ s state s = S ( D′ , pθt−1 ( Y |X ) ) using Eq 11 . 7 : Obtain decision at ∼ πψ ( s ) , and get D̃ = { di ∈ D′|a ( i ) t = 0 } ⋃ { d̂i ∈ D̂′|a ( i ) t = 1 } . 8 : Update student : θt = θt−1 + η∇θ=θt−1 1|D̃| ∑ ( x , y ) ∈D̃ log pθ ( y|x ) 9 : end for 10 : Return : updated student model pθT ( Y |X ) where FC ( · , · ) stands for the evaluation metric under certain fairness requirement , such as equalized odds . This objective tries to balance between the generalization error and the fairness constraint , which is controlled by the hyperparameter λfc . In our teaching framework , we will teach a student predictive model pθ ( Y |X ) to minimize Eq 4 . Our teacher model πψ is responsible for providing proper data samples for the student at each step of optimization to achieve this goal . In most teaching frameworks , the teacher is only responsible for selecting proper samples from existing dataset . However , due to the potential bias in the dataset , such assumption is too limited to achieve the fairness requirement . Recall the definition of counterfactual example in Eq 3 . Given a tuple of ( U , X = x , A = a ) , changingA byA← a′ while keeping U fixed will also changeX . Thus the change to the predictive distribution p ( ŶA←a′ ( U ) |x , a ) depends on two aspects : 1 ) the predictive model pθ ( Y |X ) , and 2 ) a counterfactual generative model p̂ ( X̂ ) : = p ( X̂A←a′ ( U ) |X = x , A = a ) . Suppose we have the model p̂ ( X̂ ) ready , then it would be possible to regulate pθ ( Y |X ) by generate counterfactual samples during training . Given the teacher model πψ and the counterfactual generative model p̂ ( X̂ ) , we are ready to present our iterative teaching approach for learning pθ ( Y |X ) in algorithm 1 . At each teaching stage , the teacher will make binary decisions on using 1 ) the sample selection from the given dataset Dtrain , or 2 ) the counterfactual data ( X̂ , Y , A = a′ ) coming from data sample ( X , Y , A = a ) ∈ Dtrain and altered by p̂ ( X̂ ) . The student will then use the selected samples to perform gradient update of θ . In the following sections , we will present how such counterfactual generative model p̂ ( X̂ ) are learned through teaching policy πψ . 
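As a small, self-contained illustration of the validation reward in Eq. 4, the sketch below scores a student's predictions as the average log-likelihood minus λfc times a fairness term, using an equalized-odds gap (in the spirit of Definition 2, with the conditioning on X dropped for simplicity) as the FC metric. The threshold, λfc, and the synthetic data are placeholder choices, not values from the paper.

import numpy as np

def equalized_odds_gap(y_hat, y, a):
    # max over y in {0,1} of |P(Ŷ=1 | A=0, Y=y) - P(Ŷ=1 | A=1, Y=y)|
    gaps = []
    for yv in (0, 1):
        rates = [y_hat[(a == av) & (y == yv)].mean() for av in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

def validation_reward(p_y, y, a, lam_fc=1.0, thresh=0.5):
    # R(theta) = E[log p_theta(y|x)] - lambda_fc * FC(p_theta, D_valid)   (Eq. 4)
    eps = 1e-8
    log_lik = np.mean(y * np.log(p_y + eps) + (1 - y) * np.log(1 - p_y + eps))
    fc = equalized_odds_gap((p_y > thresh).astype(int), y, a)
    return log_lik - lam_fc * fc

# Toy usage on synthetic predictions.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
a = rng.integers(0, 2, 200)
p_y = np.clip(0.7 * y + 0.1 * a + 0.1 * rng.random(200), 0.01, 0.99)
print(validation_reward(p_y, y, a, lam_fc=2.0))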
4.2 LEARNING COUNTERFACTUAL GENERATOR p̂(X̂). To learn the counterfactual data distribution, we first need an understanding of the empirical data distribution. In the next subsection, we present our latent variable modeling of the data distribution: | This paper combines counterfactual modeling with adversarial training for fair machine learning tasks. For a given fairness metric chosen from a variety of canonical examples, the method ensures fairness by augmenting the data with counterfactual examples during training. The approach has potential, which is best demonstrated on examples where the counterfactual data generation is interesting, like the CelebA data. | SP:7ff567aac68a3492029136828f1b45dd7c358e8a |
Faster Training of Word Embeddings | 1 INTRODUCTION . Word embeddings have a long history ( Rumelhart et al. , 1986 ; Bengio et al. , 2003 ; Collobert & Weston , 2008 ) , but have received much attention in recent years due to word2vec ( Mikolov et al. , 2013 ) and its computationally efficient implementation via skip-gram with negative sampling . Word embeddings capture contextual relationships between the words , and have become a standard input representation for the majority of NLP tasks , benefitting , e.g. , classification ( Joulin et al. , 2016 ; Deriu et al. , 2017 ) or machine translation ( Jansen , 2017 ; Conneau et al. , 2017 ) . More recently , state-of-the-art results on many language understanding tasks were achieved by deep transformer architectures such as BERT ( Devlin et al. , 2019 ) , which however are very compute intensive both at training and inference time , even with pre-trained models and reduced parameter space . Thus , simpler and more lightweight static word embeddings such as fastText ( Bojanowski et al. , 2017 ) are still widely used , due to their fast execution , comparable results for particular tasks ( Tseng et al. , 2019 ) , and ability to produce a single vector per word , which helps in information retrieval with interpretability and search index construction . Contributions . In this paper , we present algorithmic and code optimization techniques to improve the training time of word2vec and fastText embeddings on modern general-purpose multicore and manycore computers . We present an optimized open-source implementation of word2vec and fastText that encapsulates a number of algorithmic variants including negative sample sharing , batched updates , and subword units based on byte-pair encoding approach . Our extensive evaluation on three languages shows that the best combinations of optimizations speed up training time by 2.7– 20.6 times while maintaining accuracy of selected NLP tasks . 2 WORD EMBEDDINGS . Word2vec . Word2vec is built upon a simple bilinear regression model trained on word cooccurrence , resulting in numerical feature representations as floating point vectors of dimensionality d. Given a word in a sentence , the goal of the algorithm is to maximize the likelihood of predicting surrounding ( context ) words . To achieve this , the model is trained to increase the probability of predicting particular words if they appear close to a given current word in the training corpus . A popular variant also decreases the probability of predicting words that do not appear close to the current word ( negative sampling ( Mikolov et al. , 2013 ; Goldberg & Levy , 2014 ) ) . During training , the algorithm processes the corpus in a streaming fashion . Each word wi ( called current word ) is processed together with its surrounding context words { wi−C , ... , wi−1 } , { wi+1 , ... , wi+C } , where C is the range of the context window . There are two modes of operation training the model for the following prediction tasks : • Skip-gram ( SG ) : predict target context words using the current word wi as the source . • CBOW : predict the target current word wi using context words as the source . Each word w in the vocabulary of size V is represented as a source ws by one row in the V × d input matrix Min containing word embeddings , and each word is represented as a target wt by one row in the V × d output matrix Mout that is used to calculate the training objective function . 
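As a small illustration of the two training modes just described, the sketch below enumerates the (source, target) pairs produced from a tokenized sentence for a context window of size C. It is deliberately simplified (no subsampling of frequent words and no dynamic window shrinking), and the function name is ours rather than part of either implementation.

def training_pairs(sentence, C=2, mode="skipgram"):
    pairs = []
    for i, current in enumerate(sentence):
        lo, hi = max(0, i - C), min(len(sentence), i + C + 1)
        context = [sentence[j] for j in range(lo, hi) if j != i]
        if mode == "skipgram":                    # current word predicts each context word
            pairs += [(current, ctx) for ctx in context]
        else:                                     # CBOW: all context words predict the current word
            pairs.append((tuple(context), current))
    return pairs

sent = "the cat sat on the mat".split()
print(training_pairs(sent, C=2, mode="skipgram")[:4])
print(training_pairs(sent, C=2, mode="cbow")[:2])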
The goal of the optimization is to maximize the inner products ( minimize difference ) of real pairs of source current words with the target context words or vice versa , using the binary logistic loss . This approach can be improved by the use of negative sampling , where the algorithm additionally maximizes the difference between the source current words and the words picked randomly out of the source ’ s context . Training is performed using stochastic gradient descent ( SGD ) . SGD is performed in parallel with p threads by splitting the training corpus into p parts and processing them asynchronously ( “ Hogwild ” approach ( Recht et al. , 2011 ) ) . The final embedding of each word is its corresponding row in Min . Mout is discarded at the end of training . FastText . FastText ( Bojanowski et al. , 2017 ) improves word2vec by utilizing subwords of target words during the training . A typical run of fastText uses subwords of lengths k = 3 . . . 6 , containing delimiters 〈 , 〉 at the boundaries . For example , for the word paris and k = 3 the subwords are : 〈pa , par , ari , ris , is〉 . In fastText , the embeddings Min are extended to contain rows representing both entire words as well as hashes of all their subwords . Additionally , the representation of the entire word is added to the set of its subwords . During the execution of the algorithm , the hidden layer h is built by averaging vectors in Min representing the source word ’ s subwords . The final vector embedding for each word is obtained in the same way . Mout remains unchanged . Fig . 1 shows an example of how the word vectors are stored and accessed . A single update is described in Alg . 1 . Related work . FastText has been implemented as a part of the popular Gensim library ( Rehurek & Sojka , 2011 ) using Cython and a default machine ’ s BLAS library ( e.g. , Intel MKL ) for algebraic operations . In our experiments we found the code memory-expensive and slow : training 5 epochs on a 1 GB English Wikipedia dump with 24 threads took approximately 11 hours on a Knights Landing CPU , about 10 times slower than the original fastText . Therefore , we use the original code provided by Facebook Research ( 2016a ) as the baseline in all our experiments . For skip-gram with negative sampling , pWord2Vec ( Ji et al. , 2016 ) transforms the “ Hogwild ” approach into “ Hogbatch ” , by performing updates on multiple context words at once ( effectively turning a series of dot products into a matrix-matrix operation ) and sharing negative samples for the entire batch . We employ similar techniques in our implementation . The work by Rengasamy et al . ( 2017 ) extends this approach by context combining , where multiple contexts can share a set of negative samples and be updated all at once . We do not adapt this approach as it requires careful preprocessing rewarded by a relatively small speedup over pWord2Vec . Algorithm 1 : A single iteration of the original fastText algorithm . In skip-gram , it is performed on each current-context word pair ( as source-target ) . In CBOW , all context words are used as source words at the same time . Data : source word ( s ) ws , target word wt , learning rate l , number of negative samples n 1 if skip-gram then // Initialize . 2 h = Min ( ws ) + ∑ z∈subwords ( ws ) Min ( z ) count ( subwords ( ws ) ) +1 // Average vectors of the source word ( s ) and their 3 else if CBOW then // subwords to obtain the hidden layer . 
4 h = ∑ s ( Min ( ws ) + ∑ z∈subwords ( ws ) Min ( z ) ) ∑ s ( count ( subwords ( ws ) ) +1 ) 5 g = 0 // Reset the gradient . // Update the target word . 6 α = l ( 1− σ ( h ·Mout ( wt ) ) ) // Compute positive score reflecting similarity between // h and the row Mout ( wt ) representing wt . 7 g = g + α ·Mout ( wt ) // Build the gradient . 8 Mout ( wt ) =Mout ( wt ) + α · h // Update the target word . 9 for t′ ← 1 to n do // Update negative samples : negative score . 10 pick a random negative sample wt′ 6= wt 11 α = l ( 0− σ ( h ·Mout ( wt′ ) ) ) ) // Compute negative score . 12 g = g + α ·Mout ( wt′ ) ) // Build the gradient . 13 Mout ( wt′ ) ) =Mout ( wt′ ) ) + α · h // Update the target word . 14 end 15 if skip-gram then // Update the source rows ( s ) . 16 Min ( ws ) =Min ( ws ) + g 17 foreach z ∈ subwords ( ws ) do 18 Min ( z ) =Min ( z ) + g 19 end 20 else if CBOW then 21 foreach ws do // As a result , the difference between rows in Min ( ws ) 22 Min ( ws ) =Min ( ws ) + g // and Mout which corresponds to the positive samples 23 foreach z ∈ subwords ( ws ) do // drops , and the difference between rows in Mout 24 Min ( z ) =Min ( z ) + g // which correspond to the negative samples increases . 25 end 26 end Word2vec and fastText have been also implemented for GPU clusters . BlazingText ( Gupta & Khare , 2017 ) tackles the problem of efficient batch size and synchronization for multiple GPUs . While this issue is of no concern on CPU , they report the execution time on a single GPU comparable to a 16- threaded CPU fastText baseline . We further speed up the CPU implementation . The work by Bae & Yi ( 2016 ) reports up to 11× speedup of word2vec with negative sampling run on a K20 GPU over the single-threaded CPU word2vec . However , they report only up to 1.6× speedup over a 12-threaded CPU run . Our no subword version of the code are roughly 5 ( skip-gram ) and 6 ( CBOW ) faster than the 20-threaded runs of the original word2vec . Word2vec and fastText are memory-intensive algorithms . Additionally , fine-grained parallelism is limited by relatively small vectors typically used in the computations . These characteristics severely limit the potential advantages of GPU over CPU . Li et al . ( 2019 ) discuss a distributed version for many GPUs aiming at the reduction of write conflicts in updates . Similarly ( and independently ) , we made attempts at pre-scheduling a list of currentcontext word updates , but we found the overhead of this preprocessing prohibitive . Nonetheless , the algorithmic variants presented in our paper can be applied in distributed setting , as long as the data used for a single update fits inside a batch used in the distributed computation . This will be the case for our variants , since they either execute separate updates on each current-context word pair , or update current words with their entire ( typically small ) contexts . This is also the case in the original fastText and therefore , the communication cost should not increase . Another popular word embedding model is GloVe ( Pennington et al. , 2014 ) . While the Authors claim superiority over word2vec , a more thorough evaluation ( e. g. , by Wang et al . ( 2019 ) , or Kumar et al . ( 2020 ) ) shows that there is no clear winner , as the results may vary depending on the training corpus , evaluation task and hyperparameters used . Additionally , GloVe lacks the information on word morphology provided by fastText , and does not scale well for large vocabularies . 
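Before moving on, the following rough NumPy sketch revisits the skip-gram branch of Algorithm 1 above for a single (current word, context word) pair with n negative samples. Vocabulary handling, the subword hash table, the learning-rate schedule, and the check that a negative sample differs from the positive target are all omitted, and the shapes and initialization are illustrative assumptions rather than the settings used in the actual implementations.

import numpy as np

rng = np.random.default_rng(0)
V, d, n, lr = 1000, 50, 5, 0.05
M_in = rng.uniform(-0.5 / d, 0.5 / d, (V, d))    # rows for words and subword hashes
M_out = np.zeros((V, d))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sg_update(src_rows, tgt):
    # src_rows: indices in M_in of the current word and its subword hashes.
    h = M_in[src_rows].mean(axis=0)              # hidden layer: average of word + subword vectors
    g = np.zeros(d)                              # gradient accumulated for the source rows
    targets = [(tgt, 1.0)] + [(int(rng.integers(V)), 0.0) for _ in range(n)]
    for t, label in targets:                     # one positive target, n negative samples
        alpha = lr * (label - sigmoid(h @ M_out[t]))
        g += alpha * M_out[t]
        M_out[t] += alpha * h                    # update target / negative rows
    M_in[src_rows] += g                          # update the word row and every subword row

sg_update(src_rows=np.array([3, 17, 42]), tgt=7)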
Returning to GloVe: since it is based on a completely different algorithmic structure, that is, the creation and reduction of a global word co-occurrence matrix, there is no straightforward way to apply our code optimizations and variants to it. | This paper applies several algorithmic and code optimization techniques to reduce the training time for word2vec and fastText. The final implementation runs 3-20 times faster than the baselines on manycore CPUs, with almost no quality loss. The improved implementation is the result of a combination of many techniques, including no_subword, minibatching, and negative sharing. | SP:c00be5119de90c410bab1b23fc36e559aca76041 |
Faster Training of Word Embeddings | 1 INTRODUCTION . Word embeddings have a long history ( Rumelhart et al. , 1986 ; Bengio et al. , 2003 ; Collobert & Weston , 2008 ) , but have received much attention in recent years due to word2vec ( Mikolov et al. , 2013 ) and its computationally efficient implementation via skip-gram with negative sampling . Word embeddings capture contextual relationships between the words , and have become a standard input representation for the majority of NLP tasks , benefitting , e.g. , classification ( Joulin et al. , 2016 ; Deriu et al. , 2017 ) or machine translation ( Jansen , 2017 ; Conneau et al. , 2017 ) . More recently , state-of-the-art results on many language understanding tasks were achieved by deep transformer architectures such as BERT ( Devlin et al. , 2019 ) , which however are very compute intensive both at training and inference time , even with pre-trained models and reduced parameter space . Thus , simpler and more lightweight static word embeddings such as fastText ( Bojanowski et al. , 2017 ) are still widely used , due to their fast execution , comparable results for particular tasks ( Tseng et al. , 2019 ) , and ability to produce a single vector per word , which helps in information retrieval with interpretability and search index construction . Contributions . In this paper , we present algorithmic and code optimization techniques to improve the training time of word2vec and fastText embeddings on modern general-purpose multicore and manycore computers . We present an optimized open-source implementation of word2vec and fastText that encapsulates a number of algorithmic variants including negative sample sharing , batched updates , and subword units based on byte-pair encoding approach . Our extensive evaluation on three languages shows that the best combinations of optimizations speed up training time by 2.7– 20.6 times while maintaining accuracy of selected NLP tasks . 2 WORD EMBEDDINGS . Word2vec . Word2vec is built upon a simple bilinear regression model trained on word cooccurrence , resulting in numerical feature representations as floating point vectors of dimensionality d. Given a word in a sentence , the goal of the algorithm is to maximize the likelihood of predicting surrounding ( context ) words . To achieve this , the model is trained to increase the probability of predicting particular words if they appear close to a given current word in the training corpus . A popular variant also decreases the probability of predicting words that do not appear close to the current word ( negative sampling ( Mikolov et al. , 2013 ; Goldberg & Levy , 2014 ) ) . During training , the algorithm processes the corpus in a streaming fashion . Each word wi ( called current word ) is processed together with its surrounding context words { wi−C , ... , wi−1 } , { wi+1 , ... , wi+C } , where C is the range of the context window . There are two modes of operation training the model for the following prediction tasks : • Skip-gram ( SG ) : predict target context words using the current word wi as the source . • CBOW : predict the target current word wi using context words as the source . Each word w in the vocabulary of size V is represented as a source ws by one row in the V × d input matrix Min containing word embeddings , and each word is represented as a target wt by one row in the V × d output matrix Mout that is used to calculate the training objective function . 
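As a rough illustration of the batched updates and negative-sample sharing mentioned in the contributions, the sketch below scores a whole minibatch of (source, context) pairs against one shared set of negative samples using a single matrix-matrix product instead of many separate dot products. The shapes and the function itself are illustrative assumptions, not the actual optimized kernels.

import numpy as np

rng = np.random.default_rng(1)
V, d, B, n = 1000, 50, 8, 5
M_in = rng.standard_normal((V, d)) * 0.01
M_out = rng.standard_normal((V, d)) * 0.01

def batched_scores(src_ids, ctx_ids, neg_ids):
    H = M_in[src_ids]                                 # (B, d) hidden vectors, one per pair
    pos = np.einsum("bd,bd->b", H, M_out[ctx_ids])    # positive dot products, one per pair
    neg = H @ M_out[neg_ids].T                        # (B, n) scores against the shared negatives
    return 1 / (1 + np.exp(-pos)), 1 / (1 + np.exp(-neg))

src = rng.integers(V, size=B)
ctx = rng.integers(V, size=B)
neg = rng.integers(V, size=n)                         # negatives shared by the whole batch
p_pos, p_neg = batched_scores(src, ctx, neg)
print(p_pos.shape, p_neg.shape)                       # (8,) (8, 5)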
The goal of the optimization is to maximize the inner products ( minimize difference ) of real pairs of source current words with the target context words or vice versa , using the binary logistic loss . This approach can be improved by the use of negative sampling , where the algorithm additionally maximizes the difference between the source current words and the words picked randomly out of the source ’ s context . Training is performed using stochastic gradient descent ( SGD ) . SGD is performed in parallel with p threads by splitting the training corpus into p parts and processing them asynchronously ( “ Hogwild ” approach ( Recht et al. , 2011 ) ) . The final embedding of each word is its corresponding row in Min . Mout is discarded at the end of training . FastText . FastText ( Bojanowski et al. , 2017 ) improves word2vec by utilizing subwords of target words during the training . A typical run of fastText uses subwords of lengths k = 3 . . . 6 , containing delimiters 〈 , 〉 at the boundaries . For example , for the word paris and k = 3 the subwords are : 〈pa , par , ari , ris , is〉 . In fastText , the embeddings Min are extended to contain rows representing both entire words as well as hashes of all their subwords . Additionally , the representation of the entire word is added to the set of its subwords . During the execution of the algorithm , the hidden layer h is built by averaging vectors in Min representing the source word ’ s subwords . The final vector embedding for each word is obtained in the same way . Mout remains unchanged . Fig . 1 shows an example of how the word vectors are stored and accessed . A single update is described in Alg . 1 . Related work . FastText has been implemented as a part of the popular Gensim library ( Rehurek & Sojka , 2011 ) using Cython and a default machine ’ s BLAS library ( e.g. , Intel MKL ) for algebraic operations . In our experiments we found the code memory-expensive and slow : training 5 epochs on a 1 GB English Wikipedia dump with 24 threads took approximately 11 hours on a Knights Landing CPU , about 10 times slower than the original fastText . Therefore , we use the original code provided by Facebook Research ( 2016a ) as the baseline in all our experiments . For skip-gram with negative sampling , pWord2Vec ( Ji et al. , 2016 ) transforms the “ Hogwild ” approach into “ Hogbatch ” , by performing updates on multiple context words at once ( effectively turning a series of dot products into a matrix-matrix operation ) and sharing negative samples for the entire batch . We employ similar techniques in our implementation . The work by Rengasamy et al . ( 2017 ) extends this approach by context combining , where multiple contexts can share a set of negative samples and be updated all at once . We do not adapt this approach as it requires careful preprocessing rewarded by a relatively small speedup over pWord2Vec . Algorithm 1 : A single iteration of the original fastText algorithm . In skip-gram , it is performed on each current-context word pair ( as source-target ) . In CBOW , all context words are used as source words at the same time . Data : source word ( s ) ws , target word wt , learning rate l , number of negative samples n 1 if skip-gram then // Initialize . 2 h = Min ( ws ) + ∑ z∈subwords ( ws ) Min ( z ) count ( subwords ( ws ) ) +1 // Average vectors of the source word ( s ) and their 3 else if CBOW then // subwords to obtain the hidden layer . 
4 h = ∑ s ( Min ( ws ) + ∑ z∈subwords ( ws ) Min ( z ) ) ∑ s ( count ( subwords ( ws ) ) +1 ) 5 g = 0 // Reset the gradient . // Update the target word . 6 α = l ( 1− σ ( h ·Mout ( wt ) ) ) // Compute positive score reflecting similarity between // h and the row Mout ( wt ) representing wt . 7 g = g + α ·Mout ( wt ) // Build the gradient . 8 Mout ( wt ) =Mout ( wt ) + α · h // Update the target word . 9 for t′ ← 1 to n do // Update negative samples : negative score . 10 pick a random negative sample wt′ 6= wt 11 α = l ( 0− σ ( h ·Mout ( wt′ ) ) ) ) // Compute negative score . 12 g = g + α ·Mout ( wt′ ) ) // Build the gradient . 13 Mout ( wt′ ) ) =Mout ( wt′ ) ) + α · h // Update the target word . 14 end 15 if skip-gram then // Update the source rows ( s ) . 16 Min ( ws ) =Min ( ws ) + g 17 foreach z ∈ subwords ( ws ) do 18 Min ( z ) =Min ( z ) + g 19 end 20 else if CBOW then 21 foreach ws do // As a result , the difference between rows in Min ( ws ) 22 Min ( ws ) =Min ( ws ) + g // and Mout which corresponds to the positive samples 23 foreach z ∈ subwords ( ws ) do // drops , and the difference between rows in Mout 24 Min ( z ) =Min ( z ) + g // which correspond to the negative samples increases . 25 end 26 end Word2vec and fastText have been also implemented for GPU clusters . BlazingText ( Gupta & Khare , 2017 ) tackles the problem of efficient batch size and synchronization for multiple GPUs . While this issue is of no concern on CPU , they report the execution time on a single GPU comparable to a 16- threaded CPU fastText baseline . We further speed up the CPU implementation . The work by Bae & Yi ( 2016 ) reports up to 11× speedup of word2vec with negative sampling run on a K20 GPU over the single-threaded CPU word2vec . However , they report only up to 1.6× speedup over a 12-threaded CPU run . Our no subword version of the code are roughly 5 ( skip-gram ) and 6 ( CBOW ) faster than the 20-threaded runs of the original word2vec . Word2vec and fastText are memory-intensive algorithms . Additionally , fine-grained parallelism is limited by relatively small vectors typically used in the computations . These characteristics severely limit the potential advantages of GPU over CPU . Li et al . ( 2019 ) discuss a distributed version for many GPUs aiming at the reduction of write conflicts in updates . Similarly ( and independently ) , we made attempts at pre-scheduling a list of currentcontext word updates , but we found the overhead of this preprocessing prohibitive . Nonetheless , the algorithmic variants presented in our paper can be applied in distributed setting , as long as the data used for a single update fits inside a batch used in the distributed computation . This will be the case for our variants , since they either execute separate updates on each current-context word pair , or update current words with their entire ( typically small ) contexts . This is also the case in the original fastText and therefore , the communication cost should not increase . Another popular word embedding model is GloVe ( Pennington et al. , 2014 ) . While the Authors claim superiority over word2vec , a more thorough evaluation ( e. g. , by Wang et al . ( 2019 ) , or Kumar et al . ( 2020 ) ) shows that there is no clear winner , as the results may vary depending on the training corpus , evaluation task and hyperparameters used . Additionally , GloVe lacks the information on word morphology provided by fastText , and does not scale well for large vocabularies . 
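For readers unfamiliar with the subword machinery used above, the small sketch below extracts character n-grams with boundary markers (written here as < and >) and maps each n-gram to a row index by hashing into a fixed number of buckets placed after the word rows. The hash function and bucket layout are illustrative assumptions and may differ from what the actual fastText implementation uses.

def char_ngrams(word, kmin=3, kmax=6):
    w = f"<{word}>"
    return [w[i:i + k] for k in range(kmin, kmax + 1) for i in range(len(w) - k + 1)]

def ngram_row(ngram, n_word_rows, n_buckets):
    h = 2166136261                               # FNV-1a style hash, for illustration only
    for ch in ngram.encode("utf-8"):
        h = ((h ^ ch) * 16777619) & 0xFFFFFFFF
    return n_word_rows + h % n_buckets           # subword rows live after the word rows

print(char_ngrams("paris", 3, 3))                # ['<pa', 'par', 'ari', 'ris', 'is>']
print(ngram_row("<pa", n_word_rows=1000, n_buckets=2_000_000))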
Returning to GloVe: since it is based on a completely different algorithmic structure, that is, the creation and reduction of a global word co-occurrence matrix, there is no straightforward way to apply our code optimizations and variants to it. | This paper proposes approaches that reduce the training times of word2vec and fastText. To improve efficiency, it uses several techniques such as negative sample sharing, batched updates, and a byte-pair encoding-based alternative for subword units. Using English, German, and Russian language data, it shows that the proposed approach is much faster than the existing approaches. | SP:c00be5119de90c410bab1b23fc36e559aca76041 |
Continual Invariant Risk Minimization | Empirical risk minimization can lead to poor generalization behaviour on unseen environments if the learned model does not capture invariant feature representations . Invariant risk minimization ( IRM ) is a recent proposal for discovering environment-invariant representations . It was introduced by Arjovsky et al . ( 2019 ) and extended by Ahuja et al . ( 2020 ) . The assumption of IRM is that all environments are available to the learning system at the same time . With this work , we generalize the concept of IRM to scenarios where environments are observed sequentially . We show that existing approaches , including those designed for continual learning , fail to identify the invariant features and models across sequentially presented environments . We extend IRM under a variational Bayesian and bilevel framework , creating a general approach to continual invariant risk minimization . We also describe a strategy to solve the optimization problems using a variant of the alternating direction method of multiplier ( ADMM ) . We show empirically using multiple datasets and with multiple sequential environments that the proposed methods outperforms or is competitive with prior approaches . 1 INTRODUCTION . Empirical risk minimization ( ERM ) is the predominant principle for designing machine learning models . In numerous application domains , however , the test data distribution can differ from the training data distribution . For instance , at test time , the same task might be observed in a different environment . Neural networks trained by minimizing ERM objectives over the training distribution tend to generalize poorly in these situations . Improving generalization of learning systems has become a major research topic in recent years , with many different threads of research including , but not limited to , robust optimization ( e.g. , Hoffman et al . ( 2018 ) ) and domain adaptation ( e.g. , Johansson et al . ( 2019 ) ) . Both of these research directions , however , have their own intrinsic limitations ( Ahuja et al . ( 2020 ) ) . Recently , there have been proposals of approaches that learn environment-invariant representations . The motivating idea is that the behavior of a model being invariant across environments makes it more likely that the model has captured a causal relationship between features and prediction targets . This in turn should lead to a better generalization behavior . Invariant risk minimization ( IRM , Arjovsky et al . ( 2019 ) ) , which pioneered this idea , introduces a new optimization loss function to identify non-spurious causal feature-target interactions . Invariant risk minimization games ( IRMG , Ahuja et al . ( 2020 ) ) expands on IRM from a game-theoretic perspective . The assumption of IRM and its extensions , however , is that all environments are available to the learning system at the same time , which is unrealistic in numerous applications . A learning agent experiences environments often sequentially and not concurrently . For instance , in a federated learning scenario with patient medical records , each hospital ’ s ( environment ) data might be used to train a shared machine learning model which receives the data from these environments in a sequential manner . The model might then be applied to data from an additional hospital ( environment ) that was not available at training time . 
Unfortunately , both IRM and IRMG are incompatible with such a continual learning setup in which the learner receives training data from environments presented in a sequential manner . As already noted by Javed et al . ( 2020 ) , “ IRM Arjovsky et al . ( 2019 ) requires sampling data from multiple environments simultaneously for computing a regularization term pertinent to its learning objective , where different environments are defined by intervening on one or more variables of the world. ” The same applies to IRMG ( Ahuja et al . ( 2020 ) ) To address the problem of learning environment-invariant ML models in sequential environements , we make the following contributions : • We expand both IRM and IRMG under a Bayesian variational framework and develop novel objectives ( for the discovery of invariant models ) in two scenarios : ( 1 ) the standard multienvironment scenario where the learner receives training data from all environments at the same time ; and ( 2 ) the scenario where data from each environment arrives in a sequential manner . • We demonstrate that the resulting bilevel problem objectives have an alternative formulation , which allows us to compute a solution efficiently using the alternating direction method of multipliers ( ADMM ) . • We compare our method to ERM , IRM , IRMG , and various continual learning methods ( EWC , GEM , MER , VCL ) on a diverse set of tasks , demonstrating comparable or superior performance in most situations . 2 BACKGROUND : OFFLINE INVARIANT RISK MINIMIZATION . We consider a multi-environment setting where , given a set of training environments E = { e1 , e2 , · · · , em } , the goal is to find parameters θ that generalize well to unseen ( test ) environments . Each environment e has an associated training data set De and a corresponding risk Re Re ( w ◦ φ ) .= E ( x , y ) ∼De ` e ( ( w ◦ φ ) ( x ) , y ) , ( 1 ) where fθ = w ◦φ is the composition of a feature extraction function φ and a classifier ( or regression function ) w. Empirical Risk Minimization ( ERM ) minimizes the average loss across all training examples , regardless of environment : RERM ( θ ) . = E ( x , y ) ∼∪e∈EDe ` ( fθ ( x ) , y ) . ( 2 ) ERM has strong theoretical foundations in the case of iid data ( Vapnik ( 1992 ) ) but can fail dramatically when test environments differ significantly from training environments . To remove spurious features from the model , Invariant Risk Minimization ( IRM , Arjovsky et al . ( 2019 ) ) instead aims to capture invariant representations φ such that the optimal classifier w given φ is the same across all training environments . This leads to the following multiple bi-level optimization problem min φ∈Hφ , w∈Hw ∑ e∈E Re ( w ◦ φ ) s.t . w ∈ arg min we∈Hw Re ( we ◦ φ ) , ∀e ∈ E , ( 3 ) where Hφ , Hw are the hypothesis sets for , respectively , feature extractors and classifiers . Unfortunately , solving the IRM bi-level programming problem directly is difficult since solving the outer problem requires solving multiple dependent minimization problems jointly . We can , however , relax IRM to IRMv1 by fixing a scalar classifier and learning a representation φ such that the classifier is “ approximately locally optimal ” ( Arjovsky et al . ( 2019 ) ) min φ∈Hφ ∑ e∈E Re ( φ ) + λ||∇w|w=1.0Re ( wφ ) ||2 , ∀e ∈ E , ( 4 ) where w is a scalar evaluated in 1 and λ controls the strength of the penalty term on gradients on w. Alternatively , the recently proposed Invariant Risk Minimization Games ( IRMG ) ( Ahuja et al . 
( 2020 ) ) proposes to learn an ensemble of classifiers with each environment controlling one component of the ensemble . Intuitively , the environments play a game where each environment ’ s action is to decide its contribution to the ensemble aiming to minimize its risk . Specifically , IRMG optimizes the following objective : min φ∈Hφ ∑ e∈E Re ( w̄ ◦ φ ) s.t . we = arg min w∈Hw Re ( 1 |E| ( w + w−e ) ◦ φ ) , ∀e ∈ E , ( 5 ) where w̄ = 1|E| ∑ e∈E we is the average and w−e = ∑ e′∈E , e′ 6=e we′ the complement classifier . 3 CONTINUAL IRM BY APPROXIMATE BAYESIAN INFERENCE . Both IRM and IRMG assume the availability of training data from all environments at the same time , which is impractical and unrealistic in numerous applications . A natural approach would be to combine principles from IRM and continual learning . Experience replay , that is , memorizing examples of past environments and reusing them later , could be possible in some scenarios but it is often difficult to estimate a-priori the extend of replay necessary to achieve satisfactory generalization capabilities . Here , we propose to adopt a probabilistic approach , exploiting the propagation of the model distribution over environments using Bayes ’ rule . We integrate both IRM and IRMG with stochastic models , introducing their variational counterparts that admit a continual extension . In addition , our approach is justified by the property of the Kullback–Leibler ( KL ) divergence that promotes invariant distributions when used in sequential learning ( as shown in Theorem 3 ) . 3.1 VARIATIONAL CONTINUAL LEARNING . Following prior work in continual learning ( Nguyen et al . ( 2018 ) ) , let Dt be the training data from the t-th environment et , let Dt1 be the cumulative data up to the t-th environment , and let θ be the parameters of the feature extractor . When each environment is given in a sequential manner , we can use Bayes ’ rule and we have ( all proofs are provided in the supplementary material ) p ( θ|Dt1 ) ∝ p ( θ|Dt−11 ) p ( Dt|θ ) , ( 6 ) that is , once we have the posterior distribution p ( θ|Dt−11 ) at time t − 1 , we can obtain , by applying Bayes rule , the posterior p ( θ|Dt1 ) at time t up to a normalization constant . This is achieved by multiplying the previous posterior with the current data likelihood p ( Dt|θ ) . The posterior distribution is in general not tractable and we use an approximation . With the variational approximation , p ( θ|Dt1 ) ≈ qt ( θ ) , it is thus possible to propagate the variational distribution from one environment to the next . From Corollary 14 ( in the supplementary material ) we can write the continual variational Bayesian inference objective as qt ( θ ) = arg min q ( θ ) E ( x , y ) ∼DtEθ∼q ( θ ) { ` ( y , fθ ( x ) ) } +DKL ( q ( θ ) ||qt−1 ( θ ) ) , ( 7 ) from the variational distribution at step qt−1 ( θ ) , with fθ = w ◦ φ , a function with parameters θ . 3.2 EQUIVALENT FORMULATION OF IRM AS A BILEVEL OPTIMIZATION PROBLEM ( BIRM ) . In order to extend the IRM principle of Equation 3 using the principle of approximate Bayesian inference , by applying Lemma 5 ( in supplementary material ) , we first introduce the following new equivalent definition of IRM ( equation 3 ) . Definition 1 ( Bilevel IRM ( BIRM ) ) . Let Hφ be a set of feature extractors and let Hw be the set of possible classifiers . 
An invariant predictor w ◦ φ on a set of environments E is said to satisfy the Invariant Risk Minimization ( IRM ) property , if it is the solution to the following bi-level Invariant Risk Minimization ( BIRM ) problem min φ∈Hφ , w∈Hw ∑ e∈E Re ( w ◦ φ ) ( 8a ) s.t . ∇wRe ( w ◦ φ ) = 0 , ∀e ∈ E. ( 8b ) This formulation results from substituting the minimization conditions in the constraint set of the original IRM formulation with the Karush–Kuhn–Tucker ( KKT ) optimality conditions . This new formulation allows us to introduce efficient solution methods and simplifies the conditions of IRM . It also justifies the IRMv1 model ; indeed , when the classifier is a scalar value and the equality constraint is included in the optimization cost function , we obtain Equation 4 . To solve the BIRM problem , we propose to use the Alternating Direction Method of Multipliers ( ADMM ) ( Boyd et al . ( 2011 ) ) . ADMM is an alternate optimization procedure that improves convergence and exploits the decomposability of the objective function and constraints . Details of the BIRM-ADMM algorithm are presented in the supplementary material . | In this work, the authors consider the problem of continual learning with distribution shifts. The work extends the recent work invariant risk minimization (IRM) from Arjovsky et al. to a continual learning setup. IRM was designed as an offline learning framework. In this work, the authors consider the setting where the different domains arrive sequentially. The authors propose a Bayesian extension of the IRM framework that allows sequential updates as the environments arrive. They provide a justification for how the KL divergence term helps in shrinking the support continually to arrive at an approximately invariant support. Several experiments were carried out on colored MNIST and its variants to show how the proposed scheme is better than existing continual learning methods and existing IRM frameworks. | SP:326dad16a8e4a1aaf80950dbed74ac096c0d5fef |
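To connect the BIRM constraint (8b) back to the IRMv1 penalty of Eq. 4 discussed above, the following short PyTorch sketch adds, for each environment, the squared norm of the risk gradient with respect to a scalar classifier w fixed at 1.0. The feature extractor, the environments, and λ are toy placeholders; this illustrates only the penalized relaxation, not the ADMM-based solution method proposed in the paper.

import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    w = torch.tensor(1.0, requires_grad=True)         # dummy scalar classifier
    risk = F.binary_cross_entropy_with_logits(logits * w, y)
    grad = torch.autograd.grad(risk, w, create_graph=True)[0]
    return risk, grad.pow(2)                          # squared norm of grad_w R_e(w ◦ φ) at w = 1

phi = torch.nn.Linear(10, 1)                          # stand-in feature extractor φ
envs = [(torch.randn(64, 10), torch.randint(0, 2, (64,)).float()) for _ in range(3)]

lam, total = 10.0, 0.0
for x, y in envs:                                     # sum over training environments
    risk, pen = irm_penalty(phi(x).squeeze(-1), y)
    total = total + risk + lam * pen
total.backward()                                      # backpropagate into φ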
Continual Invariant Risk Minimization | Empirical risk minimization can lead to poor generalization behaviour on unseen environments if the learned model does not capture invariant feature representations . Invariant risk minimization ( IRM ) is a recent proposal for discovering environment-invariant representations . It was introduced by Arjovsky et al . ( 2019 ) and extended by Ahuja et al . ( 2020 ) . The assumption of IRM is that all environments are available to the learning system at the same time . With this work , we generalize the concept of IRM to scenarios where environments are observed sequentially . We show that existing approaches , including those designed for continual learning , fail to identify the invariant features and models across sequentially presented environments . We extend IRM under a variational Bayesian and bilevel framework , creating a general approach to continual invariant risk minimization . We also describe a strategy to solve the optimization problems using a variant of the alternating direction method of multiplier ( ADMM ) . We show empirically using multiple datasets and with multiple sequential environments that the proposed methods outperforms or is competitive with prior approaches . 1 INTRODUCTION . Empirical risk minimization ( ERM ) is the predominant principle for designing machine learning models . In numerous application domains , however , the test data distribution can differ from the training data distribution . For instance , at test time , the same task might be observed in a different environment . Neural networks trained by minimizing ERM objectives over the training distribution tend to generalize poorly in these situations . Improving generalization of learning systems has become a major research topic in recent years , with many different threads of research including , but not limited to , robust optimization ( e.g. , Hoffman et al . ( 2018 ) ) and domain adaptation ( e.g. , Johansson et al . ( 2019 ) ) . Both of these research directions , however , have their own intrinsic limitations ( Ahuja et al . ( 2020 ) ) . Recently , there have been proposals of approaches that learn environment-invariant representations . The motivating idea is that the behavior of a model being invariant across environments makes it more likely that the model has captured a causal relationship between features and prediction targets . This in turn should lead to a better generalization behavior . Invariant risk minimization ( IRM , Arjovsky et al . ( 2019 ) ) , which pioneered this idea , introduces a new optimization loss function to identify non-spurious causal feature-target interactions . Invariant risk minimization games ( IRMG , Ahuja et al . ( 2020 ) ) expands on IRM from a game-theoretic perspective . The assumption of IRM and its extensions , however , is that all environments are available to the learning system at the same time , which is unrealistic in numerous applications . A learning agent experiences environments often sequentially and not concurrently . For instance , in a federated learning scenario with patient medical records , each hospital ’ s ( environment ) data might be used to train a shared machine learning model which receives the data from these environments in a sequential manner . The model might then be applied to data from an additional hospital ( environment ) that was not available at training time . 
Unfortunately , both IRM and IRMG are incompatible with such a continual learning setup in which the learner receives training data from environments presented in a sequential manner . As already noted by Javed et al . ( 2020 ) , “ IRM Arjovsky et al . ( 2019 ) requires sampling data from multiple environments simultaneously for computing a regularization term pertinent to its learning objective , where different environments are defined by intervening on one or more variables of the world. ” The same applies to IRMG ( Ahuja et al . ( 2020 ) ) To address the problem of learning environment-invariant ML models in sequential environements , we make the following contributions : • We expand both IRM and IRMG under a Bayesian variational framework and develop novel objectives ( for the discovery of invariant models ) in two scenarios : ( 1 ) the standard multienvironment scenario where the learner receives training data from all environments at the same time ; and ( 2 ) the scenario where data from each environment arrives in a sequential manner . • We demonstrate that the resulting bilevel problem objectives have an alternative formulation , which allows us to compute a solution efficiently using the alternating direction method of multipliers ( ADMM ) . • We compare our method to ERM , IRM , IRMG , and various continual learning methods ( EWC , GEM , MER , VCL ) on a diverse set of tasks , demonstrating comparable or superior performance in most situations . 2 BACKGROUND : OFFLINE INVARIANT RISK MINIMIZATION . We consider a multi-environment setting where , given a set of training environments E = { e1 , e2 , · · · , em } , the goal is to find parameters θ that generalize well to unseen ( test ) environments . Each environment e has an associated training data set De and a corresponding risk Re Re ( w ◦ φ ) .= E ( x , y ) ∼De ` e ( ( w ◦ φ ) ( x ) , y ) , ( 1 ) where fθ = w ◦φ is the composition of a feature extraction function φ and a classifier ( or regression function ) w. Empirical Risk Minimization ( ERM ) minimizes the average loss across all training examples , regardless of environment : RERM ( θ ) . = E ( x , y ) ∼∪e∈EDe ` ( fθ ( x ) , y ) . ( 2 ) ERM has strong theoretical foundations in the case of iid data ( Vapnik ( 1992 ) ) but can fail dramatically when test environments differ significantly from training environments . To remove spurious features from the model , Invariant Risk Minimization ( IRM , Arjovsky et al . ( 2019 ) ) instead aims to capture invariant representations φ such that the optimal classifier w given φ is the same across all training environments . This leads to the following multiple bi-level optimization problem min φ∈Hφ , w∈Hw ∑ e∈E Re ( w ◦ φ ) s.t . w ∈ arg min we∈Hw Re ( we ◦ φ ) , ∀e ∈ E , ( 3 ) where Hφ , Hw are the hypothesis sets for , respectively , feature extractors and classifiers . Unfortunately , solving the IRM bi-level programming problem directly is difficult since solving the outer problem requires solving multiple dependent minimization problems jointly . We can , however , relax IRM to IRMv1 by fixing a scalar classifier and learning a representation φ such that the classifier is “ approximately locally optimal ” ( Arjovsky et al . ( 2019 ) ) min φ∈Hφ ∑ e∈E Re ( φ ) + λ||∇w|w=1.0Re ( wφ ) ||2 , ∀e ∈ E , ( 4 ) where w is a scalar evaluated in 1 and λ controls the strength of the penalty term on gradients on w. Alternatively , the recently proposed Invariant Risk Minimization Games ( IRMG ) ( Ahuja et al . 
( 2020 ) ) proposes to learn an ensemble of classifiers with each environment controlling one component of the ensemble . Intuitively , the environments play a game where each environment ’ s action is to decide its contribution to the ensemble aiming to minimize its risk . Specifically , IRMG optimizes the following objective : min φ∈Hφ ∑ e∈E Re ( w̄ ◦ φ ) s.t . we = arg min w∈Hw Re ( 1 |E| ( w + w−e ) ◦ φ ) , ∀e ∈ E , ( 5 ) where w̄ = 1|E| ∑ e∈E we is the average and w−e = ∑ e′∈E , e′ 6=e we′ the complement classifier . 3 CONTINUAL IRM BY APPROXIMATE BAYESIAN INFERENCE . Both IRM and IRMG assume the availability of training data from all environments at the same time , which is impractical and unrealistic in numerous applications . A natural approach would be to combine principles from IRM and continual learning . Experience replay , that is , memorizing examples of past environments and reusing them later , could be possible in some scenarios but it is often difficult to estimate a-priori the extend of replay necessary to achieve satisfactory generalization capabilities . Here , we propose to adopt a probabilistic approach , exploiting the propagation of the model distribution over environments using Bayes ’ rule . We integrate both IRM and IRMG with stochastic models , introducing their variational counterparts that admit a continual extension . In addition , our approach is justified by the property of the Kullback–Leibler ( KL ) divergence that promotes invariant distributions when used in sequential learning ( as shown in Theorem 3 ) . 3.1 VARIATIONAL CONTINUAL LEARNING . Following prior work in continual learning ( Nguyen et al . ( 2018 ) ) , let Dt be the training data from the t-th environment et , let Dt1 be the cumulative data up to the t-th environment , and let θ be the parameters of the feature extractor . When each environment is given in a sequential manner , we can use Bayes ’ rule and we have ( all proofs are provided in the supplementary material ) p ( θ|Dt1 ) ∝ p ( θ|Dt−11 ) p ( Dt|θ ) , ( 6 ) that is , once we have the posterior distribution p ( θ|Dt−11 ) at time t − 1 , we can obtain , by applying Bayes rule , the posterior p ( θ|Dt1 ) at time t up to a normalization constant . This is achieved by multiplying the previous posterior with the current data likelihood p ( Dt|θ ) . The posterior distribution is in general not tractable and we use an approximation . With the variational approximation , p ( θ|Dt1 ) ≈ qt ( θ ) , it is thus possible to propagate the variational distribution from one environment to the next . From Corollary 14 ( in the supplementary material ) we can write the continual variational Bayesian inference objective as qt ( θ ) = arg min q ( θ ) E ( x , y ) ∼DtEθ∼q ( θ ) { ` ( y , fθ ( x ) ) } +DKL ( q ( θ ) ||qt−1 ( θ ) ) , ( 7 ) from the variational distribution at step qt−1 ( θ ) , with fθ = w ◦ φ , a function with parameters θ . 3.2 EQUIVALENT FORMULATION OF IRM AS A BILEVEL OPTIMIZATION PROBLEM ( BIRM ) . In order to extend the IRM principle of Equation 3 using the principle of approximate Bayesian inference , by applying Lemma 5 ( in supplementary material ) , we first introduce the following new equivalent definition of IRM ( equation 3 ) . Definition 1 ( Bilevel IRM ( BIRM ) ) . Let Hφ be a set of feature extractors and let Hw be the set of possible classifiers . 
An invariant predictor w ◦ φ on a set of environments E is said to satisfy the Invariant Risk Minimization ( IRM ) property , if it is the solution to the following bi-level Invariant Risk Minimization ( BIRM ) problem

    min_{φ ∈ H_φ , w ∈ H_w}  ∑_{e ∈ E} R_e ( w ◦ φ )    ( 8a )
    s.t.  ∇_w R_e ( w ◦ φ ) = 0 ,  ∀ e ∈ E .             ( 8b )

This formulation results from substituting the minimization conditions in the constraint set of the original IRM formulation with the Karush–Kuhn–Tucker ( KKT ) optimality conditions . This new formulation allows us to introduce efficient solution methods and simplifies the conditions of IRM . It also justifies the IRMv1 model ; indeed , when the classifier is a scalar value and the equality constraint is included in the optimization cost function , we obtain Equation 4 . To solve the BIRM problem , we propose to use the Alternating Direction Method of Multipliers ( ADMM ) ( Boyd et al . ( 2011 ) ) . ADMM is an alternate optimization procedure that improves convergence and exploits the decomposability of the objective function and constraints . Details of the BIRM-ADMM algorithm are presented in the supplementary material . | This paper extends the idea of invariant risk minimization (IRM) initially introduced by Arjovsky et al. (2019) to the setting of continual learning in which environments are observed sequentially rather than concurrently. This extension is implemented under a variational Bayesian and bilevel framework and the optimization is solved using a variant of the alternating direction method of multiplier (ADMM). The authors demonstrate the superiority of the proposed methods on variants of Colored MNIST. | SP:326dad16a8e4a1aaf80950dbed74ac096c0d5fef
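As a rough illustration of the continual variational update in Equation 7 above (expected loss on the current environment's data plus a KL term to the previous posterior), the sketch below uses a diagonal-Gaussian posterior over the weights of a toy linear model with a single reparameterized Monte-Carlo sample. The diagonal-Gaussian choice, model, data, and learning rate are placeholder assumptions rather than the authors' configuration.

```python
# Hedged sketch of Eq. (7): minimize E_{D_t} E_{theta~q} loss + KL(q || q_{t-1}).
# The diagonal-Gaussian posterior and all sizes/hyper-parameters are assumptions.
import torch

d = 10
mu = torch.zeros(d, requires_grad=True)
log_sig = torch.zeros(d, requires_grad=True)
prev_mu, prev_sig = torch.zeros(d), torch.ones(d)        # previous posterior q_{t-1}(theta)
opt = torch.optim.Adam([mu, log_sig], lr=1e-2)

def kl_diag_gauss(mu, sig, mu0, sig0):
    # closed-form KL between two diagonal Gaussians
    return (torch.log(sig0 / sig) + (sig ** 2 + (mu - mu0) ** 2) / (2 * sig0 ** 2) - 0.5).sum()

x_t, y_t = torch.randn(128, d), torch.randn(128)         # data D_t of environment e_t
for step in range(200):
    sig = log_sig.exp()
    theta = mu + sig * torch.randn(d)                     # reparameterized sample theta ~ q(theta)
    loss = ((x_t @ theta - y_t) ** 2).mean()              # 1-sample estimate of the expected loss
    obj = loss + kl_diag_gauss(mu, sig, prev_mu, prev_sig)
    opt.zero_grad(); obj.backward(); opt.step()
# after convergence, (mu, sig) plays the role of q_t and is carried to environment e_{t+1}
```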
Learned ISTA with Error-based Thresholding for Adaptive Sparse Coding | 1 INTRODUCTION . Sparse coding is widely used in many machine learning applications ( Xu et al. , 2012 ; Dabov et al. , 2007 ; Yang et al. , 2010 ; Ikehata et al. , 2012 ) , and its core problem is to deduce the high-dimensional sparse code from the obtained low-dimensional observation , for example , under the assumption of y = Axs + ε , where y ∈ Rm is the observation corrupted by the inevitable noise ε ∈ Rm , xs ∈ Rn is the sparse code to be estimated , andA ∈ Rm×n is an over-complete dictionary matrix . To recover xs purely from y is called sparse linear inverse problem ( SLIP ) . The main challenge for solving SLIP is its ill-posed nature because of the over-complete modeling , i.e. , m < n. A possible solution to SLIP can be obtained via solving a LASSO problem using the l1 regularization : min x ‖y −Ax‖2 + λ‖x‖1 . ( 1 ) Possible solutions for Eq . ( 1 ) are iterative shrinking thresholding algorithm ( ISTA ) ( Daubechies et al. , 2004 ) and its variants , e.g. , fast ISTA ( FISTA ) ( Beck & Teboulle , 2009 ) . Despite their simplicity , these traditional optimization algorithm suffer from slow convergence speed in large scale problems . Therefore , Gregor & LeCun ( 2010 ) proposed the learned ISTA ( LISTA ) which was a deep neural network ( DNN ) whose architecture followed the iterative process of ISTA . The thresholding mechanism was modified into shrinkage functions in the DNNs together with learnable thresholds . LISTA achieved superior performance in sparse coding , and many theoretical analyses have been proposed to modify LISTA to further improve its performance ( Chen et al. , 2018 ; Liu et al. , 2019 ; Zhou et al. , 2018 ; Ablin et al. , 2019 ; Wu et al. , 2020 ) . Yet , LISTA and many other deep networks based on it suffer from two issues . ( a ) Though the thresholds of the shrinkage functions in LISTA were learnable , their values were shared among all training samples and thus lack adaptability to the variety of training samples and robustness to outliers . According to prior work ( Chen et al. , 2018 ; Liu et al. , 2019 ) , the thresholds should be proportional to the upper bound of the norm of the current estimation error to guarantee fast convergence in LISTA . However , outliers with drastically higher estimation errors will affect the thresholds more , making the learned thresholds less suitable to other ( training ) samples . ( b ) For the same reason , it may also lead to poor generalization to test data with different distribution ( or sparsity ( Chen et al. , 2018 ) ) from the training data . For instance , in practice , we may only be given some synthetic sparse codes but not the real ones for training , and current LISTA models may fail to generalize under such circumstances . In this paper , we propose an error-based thresholding ( EBT ) mechanism to address the aforementioned issues of LISTA-based models to improve their performance . Drawing on theoretical insights , EBT introduces a function of the evolving estimation error to provide each threshold in the shrinkage functions . It has no extra parameter to learn compared with original LISTA-based models yet shows significantly better performance . The main contributions of our paper are listed as follows : • The EBT mechanism can be readily incorporated into popular sparse coding DNNs ( e.g. , LISTA ( Gregor & LeCun , 2010 ) and LISTA with support selection ( Chen et al. 
, 2018 ) ) to speed up the convergence with no extra parameters . • We give a rigorous analysis to prove that the estimation error of EBT-LISTA ( i.e. , a combination of our EBT and LISTA ) and EBT-LISTA with support selection ( i.e. , a combination of our EBT and LISTA with support selection ) is theoretically lower than the original LISTA ( Gregor & LeCun , 2010 ) and LISTA with support selection ( Chen et al. , 2018 ) , respectively . In addition , the introduced parameters in our EBT are well-disentangled from the reconstruction errors and need only to be correlated with the dictionary matrix to ensure convergence . These results guarantee the superiority of our EBT in theory . • We demonstrate the effectiveness of our EBT in the original LISTA and several of its variants in simulation experiments . We also show that it can be applied to practical applications ( e.g. , photometric stereo analysis ) and achieve superior performance as well . The organization of this paper is structured as follows . In Section 2 , we will review some preliminary knowledge of our study . In Section 3 , we will introduce a basic form of our EBT and several of its improved versions . Section 4 provides a theoretical study of the convergence of EBT-LISTA . Experimental results in Section 5 valid the effectiveness of our method in practice . Section 6 summarizes this paper . 2 BACKGROUND AND PRELIMINARY KNOWLEDGE . As mentioned in Section 1 , ISTA is an iterative algorithm for solving LASSO in Eq . ( 1 ) . Its update rule is : x ( 0 ) = 0 and x ( t+1 ) = shλ/γ ( ( I −ATA/γ ) x ( t ) +AT y/γ ) , ∀t ≥ 0 , ( 2 ) where shb ( x ) = sign ( x ) ( |x| − b ) + is a shrinkage function with a threshold b ≥ 0 and ( · ) + = max { 0 , · } , γ is a positive constant scalar greater than or equal to the maximal eigenvalue of the symmetric matrix ATA . LISTA kept the update rule of ISTA but learned parameters via end-to-end training . Its inference process can be formulated as x ( 0 ) = 0 and x ( t+1 ) = shb ( t ) ( W ( t ) x ( t ) + U ( t ) y ) , t = 0 , . . . , d , ( 3 ) where Θ = { W ( t ) , U ( t ) , b ( t ) } t=0 , ... , d is a set of learnable parameters , and , specifically , b ( t ) is the layer-wise threshold which is learnable but shared among all samples . LISTA achieved lower reconstruction error between its output and the ground-truth xs compared with ISTA , and it is proved to convergence linearly ( Chen et al. , 2018 ) with W ( t ) = I−U ( t ) A holds for any layer t. Thus , Eq . ( 3 ) can be written as . x ( t+1 ) = shb ( t ) ( ( I − U ( t ) A ) x ( t ) + U ( t ) y ) , t = 0 , . . . , d. ( 4 ) Chen et al . ( 2018 ) further proposed support selection for LISTA , which introduced shp ( b ( t ) , p ) ( x ) whose elements are defined as ( shp ( b ( t ) , p ) ( x ) ) i = sign ( xi ) ( |xi| − b ) , if |xi| > b , i /∈ Sp xi , if |xi| > b , i ∈ Sp 0 , otherwise , ( 5 ) to substitute the original shrinking function shb ( t ) ( x ) , where Sp is the set of the index of the largest p % elements ( in absolute value ) in vector x . Formally , the update rule of LISTA with support selection is formulated as x ( 0 ) = 0 and x ( t+1 ) = shp ( b ( t ) , p ( t ) ) ( ( I − UA ) x ( t ) + U ( t ) y ) , t = 0 , . . . , d , ( 6 ) where p ( t ) is a hyper-parameter and it increases from early layers to later layers . LISTA with support selection can achieve faster convergence compared with LISTA ( Chen et al. , 2018 ) . Theoretical studies ( Chen et al. , 2018 ; Liu et al. 
, 2019 ) also demonstrate that the threshold of LISTA and its variants should satisfy the equality b ( t ) ← µ ( A ) sup xs∈S ‖x ( t ) − xs‖p ( 7 ) to ensure fast convergence in the noiseless case ( i.e. , ε = 0 ) , where S is the training set and µ ( A ) is the general mutual coherence coefficient of the dictionary matrixA . Note that µ ( A ) is a crucial term in this paper , here we formally give its definition together with the definition ofW ( A ) as follows . Definition 1 For A ∈ Rm×n , its generalized coherence coefficient is defined as µ ( A ) = infW∈Rn×m , Wi , :A : ,i=1 maxi 6=j | ( Wi , :A : ,j ) | , and we say W ∈ W ( A ) if maxi 6=j ( Wi , :A : ,j ) = µ ( A ) . 3 METHODS . In LISTA and its variants , the threshold b ( t ) is commonly treated as a learnable parameter . As demonstrated in Eq . ( 7 ) , b ( t ) should be proportional to the upper bound of the estimation error of the t-th layer in the noiseless case to ensure fast convergence . Thus , some outliers or extreme training samples largely influence the value of b ( t ) , making the obtained threshold not fit the majority of the data . To be specific , we know that the suggested value of b ( t ) is b ( t ) = µ ( A ) supi=0,1 , ... , n‖x ( t ) i − xsi‖p for ‖x‖1a training set { xsi } i=0,1 , ... , n , and normal training of LISTA leads to it in theory ( Chen et al. , 2018 ) . Yet , if a new training sample xsn+1 with higher reconstruction error is introduced , the expected b ( t ) shall change to µ ( A ) ‖x ( t ) n+1 − xsn+1‖p , which is probably undesirable for the other samples . Similar problems occur if there exists a large variety in the value of reconstruction errors . In order to solve this problem , we propose to disentangle the reconstruction error term from the learnable part of the threshold and introduce adaptive thresholds for LISTA and related networks . We attempt to rewrite the threshold at the t-th layer as something like b ( t ) = ρ ( t ) ‖x ( t ) − xs‖p , ( 8 ) where ρ ( t ) is a layer-specific learnable parameter . However , the ground-truth xs is actually unknown for the inference process in SLIP . Therefore , we need to find an alternative formulation . Notice that in the noiseless case , it holds that Ax ( t ) − y = A ( x ( t ) − xs ) , thus we further rewrite Eq . ( 8 ) into b ( t ) = ρ ( t ) ‖Q ( Ax ( t ) − y ) ‖p , ( 9 ) whereQ ∈ Rn×m is a compensation matrix introduced to let Eq . ( 9 ) approximate Eq . ( 8 ) better , i.e. , a matrix that makesQA approaches the identity matrix is more desired . However , note that although QA is a low-rank matrix and can never be an identity matrix , we can encourage its diagonal elements to be 1 and the non-diagonal elements to be nearly zero . This can be directly achieved by letting Q ∈ W ( A ) , where W ( A ) is defined in Definition 1 . According to some prior works ( Liu et al. , 2019 ; Wu et al. , 2020 ; Chen et al. , 2018 ) , we also know that U ( t ) ∈ W ( A ) to guarantee linear convergence , thus U ( t ) can probably be a reasonable option for the matrix Q in our method , making it layer-specific as well . Therefore , our EBT-LISTA is formulated as x ( 0 ) = 0 and x ( t+1 ) = shb ( t ) ( ( I − U ( t ) A ) x ( t ) + U ( t ) y ) , t = 0 , . . . , d , b ( t ) = ρ ( t ) ‖U ( t ) ( Ax ( t ) − y ) ‖p . ( 10 ) Note that only ρ ( t ) and U ( t ) are learnable parameters in the above formulation , thus our EBT-LISTA actually introduces no extra parameters compared with the original LISTA . The architecture of LISTA and EBT-LISTA are shown in Figure 1 . 
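To make the EBT-LISTA layer of Equation 10 concrete, the following numpy sketch computes the threshold from the current reconstruction error before applying the shrinkage step. The choice p = 2, the untrained surrogate used for U^(t), and all problem sizes are illustrative assumptions rather than learned quantities.

```python
# Hedged sketch of one EBT-LISTA layer (Eq. 10); U is a crude untrained surrogate here.
import numpy as np

def soft_threshold(x, b):
    return np.sign(x) * np.maximum(np.abs(x) - b, 0.0)

def ebt_lista_layer(x, y, A, U, rho, p=2):
    # x: (n,) current estimate, y: (m,) observation, A: (m, n), U: (n, m), rho: scalar rho^(t)
    residual = U @ (A @ x - y)                    # U^(t) (A x^(t) - y)
    b = rho * np.linalg.norm(residual, ord=p)     # error-based threshold b^(t)
    pre = (np.eye(A.shape[1]) - U @ A) @ x + U @ y
    return soft_threshold(pre, b)

m, n = 20, 50
rng = np.random.default_rng(0)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, 5, replace=False)] = rng.normal(size=5)
y = A @ x_true
x = np.zeros(n)
U = A.T / np.linalg.norm(A, 2) ** 2               # untrained placeholder for the learned U^(t)
for t in range(10):
    x = ebt_lista_layer(x, y, A, U, rho=0.1)
```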
We can also apply our EBT mechanism on LISTA with support selection ( Chen et al. , 2018 ) . It is straightforward to keep the support set selection operation and replace the fixed threshold with our EBT , and such a combination can be similarly formulated as x ( 0 ) = 0 and x ( t+1 ) = shp ( b ( t ) , p ( t ) ) ( ( I − U ( t ) A ) x ( t ) + U ( t ) y ) , t = 0 , . . . , d , b ( t ) = ρ ( t ) ‖U ( t ) ( Ax ( t ) − y ) ‖p . ( 11 ) Our former analysis is based on the noiseless case . For noise case , there is A ( x ( t ) − xs ) = Ax ( t ) − y + ε . Since the noise is generally unknown in practical problems , we may add an extra learnable parameter on the threshold to compensate the noise , i.e. , b ( t ) = ρ ( t ) ‖U ( t ) ( Ax ( t ) − y ) ‖p + α ( t ) , ( 12 ) where α ( t ) is the learnable parameter for the observation noise . | In the paper, authors propose a new error-based thresholding mechanism for LISTA which introduces a function of the evolving estimation error to provide each threshold in the shrinkage functions. They provided the theoretical analysis for EBT-LISTA and EBT-LISTA with support selection and proved that the estimation error of the proposed algorithm is theoretically lower than compared methods. The authors also evaluated the proposed method on multiple synthetic or real tasks. Experimental results show that the proposed method achieves a better estimation error and higher adaptivity to different observations with a variety of sparsity. | SP:f93ca5c4d2f3a07efc1eea35c1e188156c981287 |
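As a small illustration of the support-selection shrinkage of Equation 5 (reused in Equation 11 above), the sketch below soft-thresholds every entry except the largest p% magnitudes above the threshold, which pass through unchanged. The percentage and the test vector are arbitrary.

```python
# Hedged sketch of the support-selection shrinkage (Eq. 5); inputs are illustrative only.
import numpy as np

def shrink_ss(x, b, p_percent):
    k = max(1, int(np.ceil(p_percent / 100.0 * x.size)))
    support = np.argsort(-np.abs(x))[:k]                 # indices S_p of the largest p% magnitudes
    out = np.sign(x) * np.maximum(np.abs(x) - b, 0.0)    # default: soft-threshold (covers |x_i| <= b too)
    keep = np.zeros_like(x, dtype=bool)
    keep[support] = True
    keep &= np.abs(x) > b                                # bypass shrinkage only where |x_i| > b and i in S_p
    out[keep] = x[keep]
    return out

x = np.array([0.05, -1.2, 0.3, 2.0, -0.4])
print(shrink_ss(x, b=0.2, p_percent=40))                 # the two largest entries pass through intact
```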
Learned ISTA with Error-based Thresholding for Adaptive Sparse Coding | 1 INTRODUCTION . Sparse coding is widely used in many machine learning applications ( Xu et al. , 2012 ; Dabov et al. , 2007 ; Yang et al. , 2010 ; Ikehata et al. , 2012 ) , and its core problem is to deduce the high-dimensional sparse code from the obtained low-dimensional observation , for example , under the assumption of y = Axs + ε , where y ∈ Rm is the observation corrupted by the inevitable noise ε ∈ Rm , xs ∈ Rn is the sparse code to be estimated , andA ∈ Rm×n is an over-complete dictionary matrix . To recover xs purely from y is called sparse linear inverse problem ( SLIP ) . The main challenge for solving SLIP is its ill-posed nature because of the over-complete modeling , i.e. , m < n. A possible solution to SLIP can be obtained via solving a LASSO problem using the l1 regularization : min x ‖y −Ax‖2 + λ‖x‖1 . ( 1 ) Possible solutions for Eq . ( 1 ) are iterative shrinking thresholding algorithm ( ISTA ) ( Daubechies et al. , 2004 ) and its variants , e.g. , fast ISTA ( FISTA ) ( Beck & Teboulle , 2009 ) . Despite their simplicity , these traditional optimization algorithm suffer from slow convergence speed in large scale problems . Therefore , Gregor & LeCun ( 2010 ) proposed the learned ISTA ( LISTA ) which was a deep neural network ( DNN ) whose architecture followed the iterative process of ISTA . The thresholding mechanism was modified into shrinkage functions in the DNNs together with learnable thresholds . LISTA achieved superior performance in sparse coding , and many theoretical analyses have been proposed to modify LISTA to further improve its performance ( Chen et al. , 2018 ; Liu et al. , 2019 ; Zhou et al. , 2018 ; Ablin et al. , 2019 ; Wu et al. , 2020 ) . Yet , LISTA and many other deep networks based on it suffer from two issues . ( a ) Though the thresholds of the shrinkage functions in LISTA were learnable , their values were shared among all training samples and thus lack adaptability to the variety of training samples and robustness to outliers . According to prior work ( Chen et al. , 2018 ; Liu et al. , 2019 ) , the thresholds should be proportional to the upper bound of the norm of the current estimation error to guarantee fast convergence in LISTA . However , outliers with drastically higher estimation errors will affect the thresholds more , making the learned thresholds less suitable to other ( training ) samples . ( b ) For the same reason , it may also lead to poor generalization to test data with different distribution ( or sparsity ( Chen et al. , 2018 ) ) from the training data . For instance , in practice , we may only be given some synthetic sparse codes but not the real ones for training , and current LISTA models may fail to generalize under such circumstances . In this paper , we propose an error-based thresholding ( EBT ) mechanism to address the aforementioned issues of LISTA-based models to improve their performance . Drawing on theoretical insights , EBT introduces a function of the evolving estimation error to provide each threshold in the shrinkage functions . It has no extra parameter to learn compared with original LISTA-based models yet shows significantly better performance . The main contributions of our paper are listed as follows : • The EBT mechanism can be readily incorporated into popular sparse coding DNNs ( e.g. , LISTA ( Gregor & LeCun , 2010 ) and LISTA with support selection ( Chen et al. 
, 2018 ) ) to speed up the convergence with no extra parameters . • We give a rigorous analysis to prove that the estimation error of EBT-LISTA ( i.e. , a combination of our EBT and LISTA ) and EBT-LISTA with support selection ( i.e. , a combination of our EBT and LISTA with support selection ) is theoretically lower than the original LISTA ( Gregor & LeCun , 2010 ) and LISTA with support selection ( Chen et al. , 2018 ) , respectively . In addition , the introduced parameters in our EBT are well-disentangled from the reconstruction errors and need only to be correlated with the dictionary matrix to ensure convergence . These results guarantee the superiority of our EBT in theory . • We demonstrate the effectiveness of our EBT in the original LISTA and several of its variants in simulation experiments . We also show that it can be applied to practical applications ( e.g. , photometric stereo analysis ) and achieve superior performance as well . The organization of this paper is structured as follows . In Section 2 , we will review some preliminary knowledge of our study . In Section 3 , we will introduce a basic form of our EBT and several of its improved versions . Section 4 provides a theoretical study of the convergence of EBT-LISTA . Experimental results in Section 5 valid the effectiveness of our method in practice . Section 6 summarizes this paper . 2 BACKGROUND AND PRELIMINARY KNOWLEDGE . As mentioned in Section 1 , ISTA is an iterative algorithm for solving LASSO in Eq . ( 1 ) . Its update rule is : x ( 0 ) = 0 and x ( t+1 ) = shλ/γ ( ( I −ATA/γ ) x ( t ) +AT y/γ ) , ∀t ≥ 0 , ( 2 ) where shb ( x ) = sign ( x ) ( |x| − b ) + is a shrinkage function with a threshold b ≥ 0 and ( · ) + = max { 0 , · } , γ is a positive constant scalar greater than or equal to the maximal eigenvalue of the symmetric matrix ATA . LISTA kept the update rule of ISTA but learned parameters via end-to-end training . Its inference process can be formulated as x ( 0 ) = 0 and x ( t+1 ) = shb ( t ) ( W ( t ) x ( t ) + U ( t ) y ) , t = 0 , . . . , d , ( 3 ) where Θ = { W ( t ) , U ( t ) , b ( t ) } t=0 , ... , d is a set of learnable parameters , and , specifically , b ( t ) is the layer-wise threshold which is learnable but shared among all samples . LISTA achieved lower reconstruction error between its output and the ground-truth xs compared with ISTA , and it is proved to convergence linearly ( Chen et al. , 2018 ) with W ( t ) = I−U ( t ) A holds for any layer t. Thus , Eq . ( 3 ) can be written as . x ( t+1 ) = shb ( t ) ( ( I − U ( t ) A ) x ( t ) + U ( t ) y ) , t = 0 , . . . , d. ( 4 ) Chen et al . ( 2018 ) further proposed support selection for LISTA , which introduced shp ( b ( t ) , p ) ( x ) whose elements are defined as ( shp ( b ( t ) , p ) ( x ) ) i = sign ( xi ) ( |xi| − b ) , if |xi| > b , i /∈ Sp xi , if |xi| > b , i ∈ Sp 0 , otherwise , ( 5 ) to substitute the original shrinking function shb ( t ) ( x ) , where Sp is the set of the index of the largest p % elements ( in absolute value ) in vector x . Formally , the update rule of LISTA with support selection is formulated as x ( 0 ) = 0 and x ( t+1 ) = shp ( b ( t ) , p ( t ) ) ( ( I − UA ) x ( t ) + U ( t ) y ) , t = 0 , . . . , d , ( 6 ) where p ( t ) is a hyper-parameter and it increases from early layers to later layers . LISTA with support selection can achieve faster convergence compared with LISTA ( Chen et al. , 2018 ) . Theoretical studies ( Chen et al. , 2018 ; Liu et al. 
, 2019 ) also demonstrate that the threshold of LISTA and its variants should satisfy the equality b ( t ) ← µ ( A ) sup xs∈S ‖x ( t ) − xs‖p ( 7 ) to ensure fast convergence in the noiseless case ( i.e. , ε = 0 ) , where S is the training set and µ ( A ) is the general mutual coherence coefficient of the dictionary matrixA . Note that µ ( A ) is a crucial term in this paper , here we formally give its definition together with the definition ofW ( A ) as follows . Definition 1 For A ∈ Rm×n , its generalized coherence coefficient is defined as µ ( A ) = infW∈Rn×m , Wi , :A : ,i=1 maxi 6=j | ( Wi , :A : ,j ) | , and we say W ∈ W ( A ) if maxi 6=j ( Wi , :A : ,j ) = µ ( A ) . 3 METHODS . In LISTA and its variants , the threshold b ( t ) is commonly treated as a learnable parameter . As demonstrated in Eq . ( 7 ) , b ( t ) should be proportional to the upper bound of the estimation error of the t-th layer in the noiseless case to ensure fast convergence . Thus , some outliers or extreme training samples largely influence the value of b ( t ) , making the obtained threshold not fit the majority of the data . To be specific , we know that the suggested value of b ( t ) is b ( t ) = µ ( A ) supi=0,1 , ... , n‖x ( t ) i − xsi‖p for ‖x‖1a training set { xsi } i=0,1 , ... , n , and normal training of LISTA leads to it in theory ( Chen et al. , 2018 ) . Yet , if a new training sample xsn+1 with higher reconstruction error is introduced , the expected b ( t ) shall change to µ ( A ) ‖x ( t ) n+1 − xsn+1‖p , which is probably undesirable for the other samples . Similar problems occur if there exists a large variety in the value of reconstruction errors . In order to solve this problem , we propose to disentangle the reconstruction error term from the learnable part of the threshold and introduce adaptive thresholds for LISTA and related networks . We attempt to rewrite the threshold at the t-th layer as something like b ( t ) = ρ ( t ) ‖x ( t ) − xs‖p , ( 8 ) where ρ ( t ) is a layer-specific learnable parameter . However , the ground-truth xs is actually unknown for the inference process in SLIP . Therefore , we need to find an alternative formulation . Notice that in the noiseless case , it holds that Ax ( t ) − y = A ( x ( t ) − xs ) , thus we further rewrite Eq . ( 8 ) into b ( t ) = ρ ( t ) ‖Q ( Ax ( t ) − y ) ‖p , ( 9 ) whereQ ∈ Rn×m is a compensation matrix introduced to let Eq . ( 9 ) approximate Eq . ( 8 ) better , i.e. , a matrix that makesQA approaches the identity matrix is more desired . However , note that although QA is a low-rank matrix and can never be an identity matrix , we can encourage its diagonal elements to be 1 and the non-diagonal elements to be nearly zero . This can be directly achieved by letting Q ∈ W ( A ) , where W ( A ) is defined in Definition 1 . According to some prior works ( Liu et al. , 2019 ; Wu et al. , 2020 ; Chen et al. , 2018 ) , we also know that U ( t ) ∈ W ( A ) to guarantee linear convergence , thus U ( t ) can probably be a reasonable option for the matrix Q in our method , making it layer-specific as well . Therefore , our EBT-LISTA is formulated as x ( 0 ) = 0 and x ( t+1 ) = shb ( t ) ( ( I − U ( t ) A ) x ( t ) + U ( t ) y ) , t = 0 , . . . , d , b ( t ) = ρ ( t ) ‖U ( t ) ( Ax ( t ) − y ) ‖p . ( 10 ) Note that only ρ ( t ) and U ( t ) are learnable parameters in the above formulation , thus our EBT-LISTA actually introduces no extra parameters compared with the original LISTA . The architecture of LISTA and EBT-LISTA are shown in Figure 1 . 
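Stepping back to the classical baseline that LISTA unrolls, here is a plain numpy sketch of the ISTA iteration of Equation 2, with γ set to the largest eigenvalue of AᵀA as the text allows. The dictionary, sparsity level, λ, and iteration count are arbitrary illustrative choices.

```python
# Hedged sketch of classic ISTA (Eq. 2); problem sizes and lambda are arbitrary.
import numpy as np

def ista(y, A, lam, n_iter=200):
    gamma = np.linalg.eigvalsh(A.T @ A).max()      # gamma >= largest eigenvalue of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        pre = (np.eye(A.shape[1]) - A.T @ A / gamma) @ x + A.T @ y / gamma
        x = np.sign(pre) * np.maximum(np.abs(pre) - lam / gamma, 0.0)   # sh_{lambda/gamma}
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 50)) / np.sqrt(20)
x_true = np.zeros(50); x_true[:5] = 1.0
x_hat = ista(A @ x_true, A, lam=0.05)
```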
We can also apply our EBT mechanism on LISTA with support selection ( Chen et al. , 2018 ) . It is straightforward to keep the support set selection operation and replace the fixed threshold with our EBT , and such a combination can be similarly formulated as x ( 0 ) = 0 and x ( t+1 ) = shp ( b ( t ) , p ( t ) ) ( ( I − U ( t ) A ) x ( t ) + U ( t ) y ) , t = 0 , . . . , d , b ( t ) = ρ ( t ) ‖U ( t ) ( Ax ( t ) − y ) ‖p . ( 11 ) Our former analysis is based on the noiseless case . For noise case , there is A ( x ( t ) − xs ) = Ax ( t ) − y + ε . Since the noise is generally unknown in practical problems , we may add an extra learnable parameter on the threshold to compensate the noise , i.e. , b ( t ) = ρ ( t ) ‖U ( t ) ( Ax ( t ) − y ) ‖p + α ( t ) , ( 12 ) where α ( t ) is the learnable parameter for the observation noise . | This paper disentangles the threshold parameters in LISTA-type models from the reconstruction errors, proposing the Error-Based Threholding (EBT) mechanism which mainly follows a theoretical results in (Chen et al., 2019; Liu et al., 2018), where the threshold at one layer is proportional to the recovery error of current iterate. The benefits brought by the proposed EBT method are faster convergence and better adaptivity to a wider range of samples. To bypass the requirement of ground truth sparse signals, EBT uses the reconstruction error following a learned linear transform, which in theory has good coherence property with the dictionary and therefore can approximate the recovery error well. The authors theoretically show that the proposed EBT mechanism enjoys faster convergence in both cases with and without the support selection technique. Emprirical experiments on standard synthetic setting and cross-sparsity setting are shown to support the efficacy of EBT. The authors also do real-world photometric stereo analysis to show the superiority of EBT. | SP:f93ca5c4d2f3a07efc1eea35c1e188156c981287 |
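To highlight what the error-based threshold replaces, the following torch sketch shows one LISTA layer in the tied form of Equation 4, where U^(t) and a single scalar threshold b^(t) shared by all samples are the learnable parameters. The initialization, batch size, and dimensions are assumptions made only for illustration.

```python
# Hedged sketch of one tied LISTA layer (Eq. 4) with a sample-shared learnable threshold.
# Initialization of U and all sizes are illustrative assumptions, not trained values.
import torch

class ListaLayer(torch.nn.Module):
    def __init__(self, A):
        super().__init__()
        self.register_buffer("A", A)
        self.U = torch.nn.Parameter(A.t().clone() / (A.norm() ** 2))   # learnable U^(t), crude init
        self.b = torch.nn.Parameter(torch.tensor(0.1))                  # shared threshold b^(t)

    def forward(self, x, y):
        # (I - U A) x + U y, computed batch-wise, followed by soft-thresholding
        pre = x - (self.U @ (self.A @ x.t())).t() + (self.U @ y.t()).t()
        return torch.sign(pre) * torch.clamp(pre.abs() - self.b, min=0.0)

A = torch.randn(20, 50) / 20 ** 0.5
layer = ListaLayer(A)
x = torch.zeros(8, 50)                       # batch of 8 current estimates x^(t)
y = torch.randn(8, 20)                       # batch of observations
x_next = layer(x, y)                         # x^(t+1)
```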
Lossless Compression of Structured Convolutional Models via Lifting | 1 INTRODUCTION . Lifted , often referred to as templated , models use highly expressive representation languages , typically based in weighted predicate logic , to capture symmetries in relational learning problems ( Koller et al. , 2007 ) . This includes learning from data such as chemical , biological , social , or traffic networks , and various knowledge graphs , relational databases and ontologies . The idea has been studied extensively in probabilistic settings under the notion of lifted graphical models ( Kimmig et al. , 2015 ) , with instances such as Markov Logic Networks ( MLNs ) ( Richardson & Domingos , 2006 ) or Bayesian Logic Programs ( BLPs ) ( Kersting & De Raedt , 2001 ) . In a wider view , convolutions can be seen as instances of the templating idea in neural models , where the same parameterized pattern is being carried around to exploit the underlying symmetries , i.e . some forms of shared correlations in the data . In this analogy , the popular Convolutional Neural Networks ( CNN ) ( Krizhevsky et al. , 2012 ) themselves can be seen as a simple form of a templated model , where the template corresponds to the convolutional filters , unfolded over regular spatial grids of pixels . But the symmetries are further even more noticeable in structured , relational domains with discrete element types . With convolutional templates for regular trees , the analogy covers Recursive Neural Networks ( Socher et al. , 2013 ) , popular in natural language processing . Extending to arbitrary graphs , the same notion covers works such as Graph Convolutional Networks ( Kipf & Welling , 2016 ) and their variants ( Wu et al. , 2019 ) , as well as various Knowledge-Base Embedding methods ( Wang et al. , 2017 ) . Extending even further to relational structures , there are works integrating parameterized relational logic templates with neural networks ( Sourek et al. , 2018 ; Rocktäschel & Riedel , 2017 ; Marra & Kuželka , 2019 ; Manhaeve et al. , 2018 ) . The common underlying principle of templated models is a joint parameterization of the symmetries , allowing for better generalization . However , standard lifted models , such as MLNs , provide another key advantage that , under certain conditions , the model computations can be efficiently carried out without complete template unfolding , often leading to even exponential speedups ( Kimmig et al. , 2015 ) . This is known as “ lifted inference ” ( Kersting , 2012 ) and is utilized heavily in lifted graphical models as well as database query engines ( Suciu et al. , 2011 ) . However , to our best knowledge , this idea has been so far unexploited in the neural ( convolutional ) models . The main contribution of this paper is thus a “ lifting ” technique to compress symmetries in convolutional models applied to structured data , which we refer to generically as “ structured convolutional models ” . 1.1 RELATED WORK . The idea for the compression is inspired by lifted inference ( Kersting , 2012 ) used in templated graphical models . The core principle is that all equivalent sub-computations can be effectively carried out in a single instance and broadcasted into successive operations together with their respective multiplicities , potentially leading to significant speedups . While the corresponding “ liftable ” template formulae ( or database queries ) generating the isomorphisms are typically assumed to be given ( Kimmig et al. 
, 2015 ) , we explore the symmetries from the unfolded ground structures , similarly to the approximate methods based on graph bisimulation ( Sen et al. , 2012 ) . All the lifting techniques are then based in some form of first-order variable elimination ( summation ) , and are inherently designed to explore structural symmetries in graphical models . In contrast , we aim to additionally explore functional symmetries , motivated by the fact that even structurally different neural computation graphs may effectively perform identical function . The learning in neural networks is also principally different from the model counting-based computations in lifted graphical models in that it requires many consecutive evaluations of the models as part of the encompassing iterative training routine . Consequently , even though we assume to unfold a complete computation graph before it is compressed with the proposed technique , the resulting speedup due to the subsequent training is still substantial . From the deep learning perspective , there have been various model compression techniques proposed to speedup the training , such as pruning , decreasing precision , and low-rank factorization ( Cheng et al. , 2017 ) . However , to our best knowledge , the existing techniques are lossy in nature , with a recent exception of compressing ReLU networks based on identifying neurons with linear behavior ( Serra et al. , 2020 ) . None of these works exploit the model computation symmetries . The most relevant line of work here are Lifted Relational Neural Networks ( LRNNs ) ( Sourek et al. , 2018 ) which however , despite the name , provide only templating capabilities without lifted inference , i.e . with complete , uncompressed ground computation graphs . 2 BACKGROUND . The compression technique described in this paper is applicable to a number of structured convolutional models , ranging from simple recursive ( Socher et al. , 2013 ) to fully relational neural models ( Sourek et al. , 2018 ) . The common characteristic of the targeted learners is the utilization of convolution ( templating ) , where the same parameterized pattern is carried over different subparts of the data ( representation ) with the same local structure , effectively introducing repetitive sub-computations in the resulting computation graphs , which we exploit in this work . 2.1 GRAPH NEURAL NETWORKS . Graph neural networks ( GNNs ) are currently the most prominent representatives of structured convolutional models , which is why we choose them for brevity of demonstration of the proposed compression technique . GNNs can be seen as an extension of the common CNN principles to completely irregular graph structures . Given a particularly structured input sample graph Sj , they dynamically unfold a multi-layered computation graph Gj , where the structure of each layer i follows the structure of the whole input graph Sj . 
For computation of the next layer i+1 values , each node v from the input graph Sj calculates its own value h ( v ) by aggregating A ( “ pooling ” ) the values of the adjacent nodes u : edge ( u , v ) , transformed by some parametric function CW1 ( “ convolution ” ) , which is being reused with the same parameterization W1 within each layer i as : h̃ ( v ) ( i ) = A ( i ) ( { C ( i ) W i1 ( h ( u ) ( i−1 ) ) |u : edge ( u , v ) } ) ( 1 ) The h̃ ( i ) ( v ) can be further combined through another CW2 with the central node ’ s representation from the previous layer to obtain the final updated value h ( i ) ( v ) for layer i as : h ( v ) ( i ) = C ( i ) W i2 ( h ( v ) ( i−1 ) , h̃ ( v ) ( i ) ) ( 2 ) This general principle covers a wide variety of GNN models , such the popular GCNs ( Kipf & Welling , 2016 ) , graph-SAGE ( Hamilton et al. , 2017 ) , GIN ( Xu et al. , 2018a ) , and others ( Xu et al. , 2018b ; Gilmer et al. , 2017 ) , which then reduces to the respective choices of particular aggregations A and transformations CW . An example computation graph of a generic GNN unfolded over an example molecule of methane is shown in Fig . 2 . 2.2 COMPUTATION GRAPHS . For the sake of this paper , let us now define the common notion of a computation graph more formally . A computation graph is a tuple G = ( N , E , F ) , where N = ( 1 , 2 , . . . , n ) is a list of nodes and E ⊆ N 2 × N is a list of directed labeled edges . Each labeled edge is a triple of integers ( n1 , n2 , l ) where n1 and n2 are nodes of the computation graph and l is the label . The labels are used to assign weights to the edges in the computation graph . Note this allows to define the weight sharing scheme as part of the graph ( cf Example 1 below ) . Finally , F = { f1 , f2 , . . . , fn } is the list of activation functions , one for each node from N . As usual , the graph is assumed to be acyclic . Children of a node N are naturally defined as all those nodes M such that ( M , N , L ) ∈ E , and analogically for parents . Note that since E is a list , edges contained in it are ordered , and the same edge may appear multiple times ( which will be useful later ) . Children of each node are also ordered – given two children C and C ′ of a node N , C precedes C ′ iff ( C , N , L ) precedes ( C ′ , N , L′ ) in the list of edges E . We denote the lists of children and parents of a given node N by Children ( N ) and Parents ( N ) , respectively . Computation graphs are then evaluated bottom up from the leaves of the graph ( nodes with no children ) to the roots of the graph ( nodes with no parents ) . Given a list of weightsW , we can now define the value of a node N ∈ N recursively as : value ( N ; W ) = fN ( WL1 · value ( M1 ; W ) , . . . , WLm · value ( Mm ; W ) ) , where ( M1 , . . . , Mm ) ≡ Children ( N ) is the ( ordered ) list of children of the nodeN , and L1 , . . . , Lm are the labels of the respective edges ( M1 , N , L1 ) , . . . , ( Mm , N , Lm ) ∈ E , andWLi is the Li-th component of the list W . Note that with the structured convolutional models , such as GNNs , we assume dynamic computation graphs where each learning sample Sj generates a separate Gj . Consequently , we can associate the leaf nodes in each Gj with constant functions1 , outputting the corresponding node ( feature ) values from the corresponding structured input sample Sj . 3 PROBLEM DEFINITION . The problem of detecting the symmetries in computation graphs can then be formalized as follows . Definition 1 ( Problem Definition ) . 
Let G = ( N , E , F ) be a computation graph . We say that two nodes N1 , N2 are equivalent if , for any W , it holds that value ( N1 ; W ) = value ( N2 ; W ) . The problem of detecting symmetries in computation graphs asks to partition the nodes of the computation graph into equivalence classes of mutually equivalent nodes . Example 1 . Consider the computation graph G = ( N , E , F ) , depicted in Fig . 1 , where N = { 0 , 1 , 2 , 3 , 4 } , E = ( ( 0 , 2 , 1 ) , ( 1 , 3 , 1 ) , ( 2 , 4 , 2 ) , ( 3 , 4 , 2 ) ) , F = { f0 = f1 = 1 , f2 ( x ) = f3 ( x ) = x , f4 ( x , y ) = x · cos ( y ) } . 1in contrast to static computation graphs where these functions are identities requiring the features at input . LetW = ( w1 , w2 ) be the weight list . The computation graph then computes the function ( w1w2 ) · cos ( w1w2 ) . It is not difficult to verify that the nodes { 0 , 1 } , and { 2 , 3 } are functionally equivalent . This also means , as we discuss in more detail in the next section , that we can “ merge ” them without changing the function that the graph computes . The resulting reduced graph then has the form N = { 1 , 3 , 4 } , E = { ( 1 , 3 , 1 ) , ( 3 , 4 , 2 ) , ( 3 , 4 , 2 ) } , F = { f1 = 1 , f3 ( x ) = x , f4 ( x , y ) = x · cos ( y ) } . In the example above , the nodes { 0 , 1 } and { 2 , 3 } are in fact also isomorphic in the sense that there exists an automorphism ( preserving weights and activation functions ) of the computation graph that swaps the nodes . Note that our definition is less strict : all we want the nodes to satisfy is functional equivalence , meaning that they should evaluate to the same values for any initialization ofW . We will also use the notion of structural-equivalence of nodes in computational graphs . Two nodes are structurally equivalent if they have the same outputs for any assignment of weightsW and for any replacement of any of the activation functions in the graph.2 That is if two nodes are structurally equivalent then they are also functionally equivalent but not vice versa . Importantly , the two nodes do not need to be automorphic3 in the graph-theoretical sense while being structurally equivalent , which also makes detecting structural equivalence easier from the computational point of view . In particular , we describe a simple polynomial-time algorithm in Section 4.2 . | The paper provides an interesting work in the scale/speed up of structured convolutional models. In particular, it proposes an idea using a technique named lifting which is used in scaling up of graphical models to detect the symmetries and compress the neural model such as Graph Neural Network. Authors show that this compression can lead to speedups of the models in many tasks. | SP:95a668e44a54b5e8def5ea2abb2e2a06026637b8 |
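The recursive evaluation rule for value ( N ; W ) and the functional equivalences of Example 1 can be reproduced in a few lines of Python; the numerical check below evaluates the graph under random weight assignments (a sanity check of the equivalence, not a proof).

```python
# The computation graph of Example 1, evaluated with the recursive value(N; W) rule.
# The random-sampling equivalence check is only illustrative.
import math, random

nodes = [0, 1, 2, 3, 4]
edges = [(0, 2, 1), (1, 3, 1), (2, 4, 2), (3, 4, 2)]          # (child, parent, weight label)
acts = {0: lambda: 1.0, 1: lambda: 1.0,
        2: lambda x: x, 3: lambda x: x,
        4: lambda x, y: x * math.cos(y)}

def value(n, W, cache=None):
    cache = {} if cache is None else cache
    if n not in cache:
        ins = [W[l - 1] * value(c, W, cache) for (c, p, l) in edges if p == n]
        cache[n] = acts[n](*ins)
    return cache[n]

for _ in range(5):
    W = [random.uniform(-2, 2), random.uniform(-2, 2)]
    assert value(0, W) == value(1, W) and value(2, W) == value(3, W)   # nodes {0,1} and {2,3} agree
    assert abs(value(4, W) - W[0] * W[1] * math.cos(W[0] * W[1])) < 1e-9
```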
Lossless Compression of Structured Convolutional Models via Lifting | 1 INTRODUCTION . Lifted , often referred to as templated , models use highly expressive representation languages , typically based in weighted predicate logic , to capture symmetries in relational learning problems ( Koller et al. , 2007 ) . This includes learning from data such as chemical , biological , social , or traffic networks , and various knowledge graphs , relational databases and ontologies . The idea has been studied extensively in probabilistic settings under the notion of lifted graphical models ( Kimmig et al. , 2015 ) , with instances such as Markov Logic Networks ( MLNs ) ( Richardson & Domingos , 2006 ) or Bayesian Logic Programs ( BLPs ) ( Kersting & De Raedt , 2001 ) . In a wider view , convolutions can be seen as instances of the templating idea in neural models , where the same parameterized pattern is being carried around to exploit the underlying symmetries , i.e . some forms of shared correlations in the data . In this analogy , the popular Convolutional Neural Networks ( CNN ) ( Krizhevsky et al. , 2012 ) themselves can be seen as a simple form of a templated model , where the template corresponds to the convolutional filters , unfolded over regular spatial grids of pixels . But the symmetries are further even more noticeable in structured , relational domains with discrete element types . With convolutional templates for regular trees , the analogy covers Recursive Neural Networks ( Socher et al. , 2013 ) , popular in natural language processing . Extending to arbitrary graphs , the same notion covers works such as Graph Convolutional Networks ( Kipf & Welling , 2016 ) and their variants ( Wu et al. , 2019 ) , as well as various Knowledge-Base Embedding methods ( Wang et al. , 2017 ) . Extending even further to relational structures , there are works integrating parameterized relational logic templates with neural networks ( Sourek et al. , 2018 ; Rocktäschel & Riedel , 2017 ; Marra & Kuželka , 2019 ; Manhaeve et al. , 2018 ) . The common underlying principle of templated models is a joint parameterization of the symmetries , allowing for better generalization . However , standard lifted models , such as MLNs , provide another key advantage that , under certain conditions , the model computations can be efficiently carried out without complete template unfolding , often leading to even exponential speedups ( Kimmig et al. , 2015 ) . This is known as “ lifted inference ” ( Kersting , 2012 ) and is utilized heavily in lifted graphical models as well as database query engines ( Suciu et al. , 2011 ) . However , to our best knowledge , this idea has been so far unexploited in the neural ( convolutional ) models . The main contribution of this paper is thus a “ lifting ” technique to compress symmetries in convolutional models applied to structured data , which we refer to generically as “ structured convolutional models ” . 1.1 RELATED WORK . The idea for the compression is inspired by lifted inference ( Kersting , 2012 ) used in templated graphical models . The core principle is that all equivalent sub-computations can be effectively carried out in a single instance and broadcasted into successive operations together with their respective multiplicities , potentially leading to significant speedups . While the corresponding “ liftable ” template formulae ( or database queries ) generating the isomorphisms are typically assumed to be given ( Kimmig et al. 
, 2015 ) , we explore the symmetries from the unfolded ground structures , similarly to the approximate methods based on graph bisimulation ( Sen et al. , 2012 ) . All the lifting techniques are then based in some form of first-order variable elimination ( summation ) , and are inherently designed to explore structural symmetries in graphical models . In contrast , we aim to additionally explore functional symmetries , motivated by the fact that even structurally different neural computation graphs may effectively perform identical function . The learning in neural networks is also principally different from the model counting-based computations in lifted graphical models in that it requires many consecutive evaluations of the models as part of the encompassing iterative training routine . Consequently , even though we assume to unfold a complete computation graph before it is compressed with the proposed technique , the resulting speedup due to the subsequent training is still substantial . From the deep learning perspective , there have been various model compression techniques proposed to speedup the training , such as pruning , decreasing precision , and low-rank factorization ( Cheng et al. , 2017 ) . However , to our best knowledge , the existing techniques are lossy in nature , with a recent exception of compressing ReLU networks based on identifying neurons with linear behavior ( Serra et al. , 2020 ) . None of these works exploit the model computation symmetries . The most relevant line of work here are Lifted Relational Neural Networks ( LRNNs ) ( Sourek et al. , 2018 ) which however , despite the name , provide only templating capabilities without lifted inference , i.e . with complete , uncompressed ground computation graphs . 2 BACKGROUND . The compression technique described in this paper is applicable to a number of structured convolutional models , ranging from simple recursive ( Socher et al. , 2013 ) to fully relational neural models ( Sourek et al. , 2018 ) . The common characteristic of the targeted learners is the utilization of convolution ( templating ) , where the same parameterized pattern is carried over different subparts of the data ( representation ) with the same local structure , effectively introducing repetitive sub-computations in the resulting computation graphs , which we exploit in this work . 2.1 GRAPH NEURAL NETWORKS . Graph neural networks ( GNNs ) are currently the most prominent representatives of structured convolutional models , which is why we choose them for brevity of demonstration of the proposed compression technique . GNNs can be seen as an extension of the common CNN principles to completely irregular graph structures . Given a particularly structured input sample graph Sj , they dynamically unfold a multi-layered computation graph Gj , where the structure of each layer i follows the structure of the whole input graph Sj . 
For computation of the next layer i+1 values , each node v from the input graph Sj calculates its own value h ( v ) by aggregating A ( “ pooling ” ) the values of the adjacent nodes u : edge ( u , v ) , transformed by some parametric function CW1 ( “ convolution ” ) , which is being reused with the same parameterization W1 within each layer i as : h̃ ( v ) ( i ) = A ( i ) ( { C ( i ) W i1 ( h ( u ) ( i−1 ) ) |u : edge ( u , v ) } ) ( 1 ) The h̃ ( i ) ( v ) can be further combined through another CW2 with the central node ’ s representation from the previous layer to obtain the final updated value h ( i ) ( v ) for layer i as : h ( v ) ( i ) = C ( i ) W i2 ( h ( v ) ( i−1 ) , h̃ ( v ) ( i ) ) ( 2 ) This general principle covers a wide variety of GNN models , such the popular GCNs ( Kipf & Welling , 2016 ) , graph-SAGE ( Hamilton et al. , 2017 ) , GIN ( Xu et al. , 2018a ) , and others ( Xu et al. , 2018b ; Gilmer et al. , 2017 ) , which then reduces to the respective choices of particular aggregations A and transformations CW . An example computation graph of a generic GNN unfolded over an example molecule of methane is shown in Fig . 2 . 2.2 COMPUTATION GRAPHS . For the sake of this paper , let us now define the common notion of a computation graph more formally . A computation graph is a tuple G = ( N , E , F ) , where N = ( 1 , 2 , . . . , n ) is a list of nodes and E ⊆ N 2 × N is a list of directed labeled edges . Each labeled edge is a triple of integers ( n1 , n2 , l ) where n1 and n2 are nodes of the computation graph and l is the label . The labels are used to assign weights to the edges in the computation graph . Note this allows to define the weight sharing scheme as part of the graph ( cf Example 1 below ) . Finally , F = { f1 , f2 , . . . , fn } is the list of activation functions , one for each node from N . As usual , the graph is assumed to be acyclic . Children of a node N are naturally defined as all those nodes M such that ( M , N , L ) ∈ E , and analogically for parents . Note that since E is a list , edges contained in it are ordered , and the same edge may appear multiple times ( which will be useful later ) . Children of each node are also ordered – given two children C and C ′ of a node N , C precedes C ′ iff ( C , N , L ) precedes ( C ′ , N , L′ ) in the list of edges E . We denote the lists of children and parents of a given node N by Children ( N ) and Parents ( N ) , respectively . Computation graphs are then evaluated bottom up from the leaves of the graph ( nodes with no children ) to the roots of the graph ( nodes with no parents ) . Given a list of weightsW , we can now define the value of a node N ∈ N recursively as : value ( N ; W ) = fN ( WL1 · value ( M1 ; W ) , . . . , WLm · value ( Mm ; W ) ) , where ( M1 , . . . , Mm ) ≡ Children ( N ) is the ( ordered ) list of children of the nodeN , and L1 , . . . , Lm are the labels of the respective edges ( M1 , N , L1 ) , . . . , ( Mm , N , Lm ) ∈ E , andWLi is the Li-th component of the list W . Note that with the structured convolutional models , such as GNNs , we assume dynamic computation graphs where each learning sample Sj generates a separate Gj . Consequently , we can associate the leaf nodes in each Gj with constant functions1 , outputting the corresponding node ( feature ) values from the corresponding structured input sample Sj . 3 PROBLEM DEFINITION . The problem of detecting the symmetries in computation graphs can then be formalized as follows . Definition 1 ( Problem Definition ) . 
Let G = ( N , E , F ) be a computation graph . We say that two nodes N1 , N2 are equivalent if , for any W , it holds that value ( N1 ; W ) = value ( N2 ; W ) . The problem of detecting symmetries in computation graphs asks to partition the nodes of the computation graph into equivalence classes of mutually equivalent nodes . Example 1 . Consider the computation graph G = ( N , E , F ) , depicted in Fig . 1 , where N = { 0 , 1 , 2 , 3 , 4 } , E = ( ( 0 , 2 , 1 ) , ( 1 , 3 , 1 ) , ( 2 , 4 , 2 ) , ( 3 , 4 , 2 ) ) , F = { f0 = f1 = 1 , f2 ( x ) = f3 ( x ) = x , f4 ( x , y ) = x · cos ( y ) } . 1in contrast to static computation graphs where these functions are identities requiring the features at input . LetW = ( w1 , w2 ) be the weight list . The computation graph then computes the function ( w1w2 ) · cos ( w1w2 ) . It is not difficult to verify that the nodes { 0 , 1 } , and { 2 , 3 } are functionally equivalent . This also means , as we discuss in more detail in the next section , that we can “ merge ” them without changing the function that the graph computes . The resulting reduced graph then has the form N = { 1 , 3 , 4 } , E = { ( 1 , 3 , 1 ) , ( 3 , 4 , 2 ) , ( 3 , 4 , 2 ) } , F = { f1 = 1 , f3 ( x ) = x , f4 ( x , y ) = x · cos ( y ) } . In the example above , the nodes { 0 , 1 } and { 2 , 3 } are in fact also isomorphic in the sense that there exists an automorphism ( preserving weights and activation functions ) of the computation graph that swaps the nodes . Note that our definition is less strict : all we want the nodes to satisfy is functional equivalence , meaning that they should evaluate to the same values for any initialization ofW . We will also use the notion of structural-equivalence of nodes in computational graphs . Two nodes are structurally equivalent if they have the same outputs for any assignment of weightsW and for any replacement of any of the activation functions in the graph.2 That is if two nodes are structurally equivalent then they are also functionally equivalent but not vice versa . Importantly , the two nodes do not need to be automorphic3 in the graph-theoretical sense while being structurally equivalent , which also makes detecting structural equivalence easier from the computational point of view . In particular , we describe a simple polynomial-time algorithm in Section 4.2 . | The authors of this paper propose an compression technique for GNNs that was inspired by lifted inference. The compression consists of removing asymmetries by merging nodes. They define two algorithms for compression: a non-exact algorithm that merges two nodes that are "functional" equivalent and an exact algorithm that merges two nodes that are structurally equivalent. | SP:95a668e44a54b5e8def5ea2abb2e2a06026637b8 |
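The polynomial-time procedure of Section 4.2 is not included in this excerpt, so the following is only an assumed illustration of the general idea: detect structurally equivalent nodes bottom-up by building a signature from each node's activation identifier together with the ordered signatures and weight labels of its children, then group nodes that share a signature and merge each group.

```python
# Assumption-laden sketch of bottom-up structural-equivalence detection; this is an
# illustration of the idea, not the exact algorithm of Section 4.2.
def compress(nodes, edges, act_ids):
    children = {n: [(c, l) for (c, p, l) in edges if p == n] for n in nodes}
    sig, classes = {}, {}
    for n in sorted(nodes, key=lambda n: depth(n, children)):          # process leaves first
        key = (act_ids[n], tuple((sig[c], l) for c, l in children[n]))  # activation + ordered child signatures
        sig[n] = key
        classes.setdefault(key, []).append(n)
    return list(classes.values())                                       # candidate equivalence classes

def depth(n, children):
    return 0 if not children[n] else 1 + max(depth(c, children) for c, _ in children[n])

# Reusing the graph of Example 1: nodes {0,1} and {2,3} fall into shared classes.
nodes = [0, 1, 2, 3, 4]
edges = [(0, 2, 1), (1, 3, 1), (2, 4, 2), (3, 4, 2)]
act_ids = {0: "one", 1: "one", 2: "id", 3: "id", 4: "x*cos(y)"}
print(compress(nodes, edges, act_ids))   # -> [[0, 1], [2, 3], [4]]
```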
Unsupervised Anomaly Detection by Robust Collaborative Autoencoders | Unsupervised anomaly detection plays a critical role in many real-world applications , from computer security to healthcare . A common approach based on deep learning is to apply autoencoders to learn a feature representation of the normal ( non-anomalous ) observations and use the reconstruction error of each observation to detect anomalies present in the data . However , due to the high complexity brought upon by over-parameterization of the deep neural networks ( DNNs ) , the anomalies themselves may have small reconstruction errors , which degrades the performance of these methods . To address this problem , we present a robust framework for detecting anomalies using collaborative autoencoders . Unlike previous methods , our framework does not require supervised label information nor access to clean ( uncorrupted ) examples during training . We investigate the theoretical properties of our framework and perform extensive experiments to compare its performance against other DNN-based methods . Our experimental results show the superior performance of the proposed framework as well as its robustness to noise due to missing value imputation compared to the baseline methods . 1 INTRODUCTION . Anomaly detection ( AD ) is the task of identifying abnormal observations in the data . It has been successfully applied to many applications , from malware detection to medical diagnosis ( Chandola et al. , 2009 ) . Driven by the success of deep learning , AD methods based on deep neural networks ( DNNs ) ( Zhou & Paffenroth , 2017 ; Aggarwal & Sathe , 2017 ; Ruff et al. , 2018 ; Zong et al. , 2018 ; Hendrycks et al. , 2018 ) have attracted increasing attention recently . Unfortunately , DNN methods have several known drawbacks when applied to AD problems . First , since many of them are based on the supervised learning approach ( Hendrycks et al. , 2018 ) , this requires labeled examples of anomalies , which are often expensive to acquire and may not be representative enough in non-stationary environments . Supervised AD methods are also susceptible to the class imbalance problem as anomalies are rare compared to normal observations . Some DNN methods rely on having access to clean data to ensure that the feature representation learning is not contaminated by anomalies during training ( Zong et al. , 2018 ; Ruff et al. , 2018 ; Pidhorskyi et al. , 2018 ; Fan et al. , 2020 ) . This limits their applicability as acquiring a representative clean data itself is a tricky problem . Due to these limitations , there have been concerted efforts to develop robust unsupervised DNN methods that do not assume the availability of supervised labels nor clean training data ( Chandola et al. , 2009 ; Liu et al. , 2019 ) . Deep autoencoders are perhaps one of the most widely used unsupervised AD methods ( Sakurada & Yairi , 2014 ; Vincent et al. , 2010 ) . An autoencoder compresses the original data by learning a latent representation that minimizes the reconstruction loss . It is based on the working assumption that normal observations are easier to compress than anomalies . Unfortunately , such an assumption may not hold in practice since DNNs are often over-parameterized and have the capability to overfit the anomalies ( Zhang et al. , 2016 ) , thus degrading their overall performance . To improve their performance , the unsupervised DNN methods must consider the trade-off between model capacity and overfitting to the anomalies . 
One way to control the model capacity is through regularization . Many regularization methods for deep networks have been developed to control model capacity , e.g. , by constraining the norms of the model parameters or explicitly perturbing the training process ( Srivastava et al. , 2014 ) . However , these approaches do not prevent the networks from being able to perfectly fit random data ( Zhang et al. , 2016 ) . As a consequence , the regulariza- tion approaches can not prevent the anomalies from being memorized , especially in an unsupervised learning setting . Our work is motivated by recent advances in supervised learning on the robustness of DNNs for noisy labeled data by learning the weights of the training examples ( Jiang et al. , 2017 ; Han et al. , 2018 ) . Unlike previous studies , our goal is to learn the weights in an unsupervised learning fashion so that normal observations are assigned higher weights than the anomalies when calculating reconstruction error . The weights help to reduce the influence of anomalies when learning a feature representation of the data . Since existing approaches for weight learning are supervised , they are inapplicable to unsupervised AD . Instead , we propose an unsupervised robust collaborative autoencoders ( RCA ) method that trains a pair of autoencoders in a collaborative fashion and jointly learns their model parameters and sample weights . Each autoencoder selects a subset of samples with lowest reconstruction errors from a mini-batch to learn their feature representation . By discarding samples with high reconstruction errors , the algorithm is biased towards learning the representation for clean data , thereby reducing its risk of memorizing anomalies . However , by selecting only easyto-fit samples in each iteration , this may lead to premature convergence of the algorithm without sufficient exploration of the loss surface . Thus , instead of selecting the samples to update its own model parameters , each autoencoder will shuffle its selected samples to the other autoencoder , who will use the samples to update their model parameters . The sample selection procedure is illustrated in Figure 1 . During the testing phase , we apply the dropout mechanism used in training to produce multiple output predictions for each test point by repeating the forward pass multiple times . These ensemble of outputs are then aggregated to obtain a more robust estimate of the anomaly score . The main contributions of this paper are as follows . First , we present a novel framework for unsupervised AD using robust collaborative autoencoders ( RCA ) . Second , we provide rigorous theoretical analysis to understand the mechanism behind RCA . We also describe the convergence of RCA to the solution obtained if it was trained on clean data only . We show that the worst-case scenario for RCA is better than conventional autoencoders and analyze the conditions under which RCA is guaranteed to find the anomalies . Finally , we empirically demonstrate that RCA outperforms state-of-the-art unsupervised AD methods for the majority of the datasets used in this study , even in the presence of noise due to missing value imputation . 2 RELATED WORK . There are numerous methods developed for anomaly detection , a survey of which can be found in Chandola et al . ( 2009 ) . 
Reconstruction-based methods , such as principal component analysis ( PCA ) and autoencoders , are popular approaches , whereby the input data is projected to a lowerdimensional space before it was transformed back to its original feature space . The distance between the input and reconstructed data is used to determine the anomaly scores of the data points . More advanced unsupervised AD methods have been developed recently . Zhou & Paffenroth ( 2017 ) combined robust PCA with an autoencoder to decompose the data into a mixture of normal and anomaly parts . Zong et al . ( 2018 ) jointly learned a low dimensional embedding and density of the data , using the density of each point as its anomaly score while Ruff et al . ( 2018 ) extended the traditional one-class SVM approach to a deep learning setting . Wang et al . ( 2019 ) applied an end-to-end selfsupervised learning approach to the unsupervised AD problem . However , their approach is designed for image data , requiring operations such as rotation and patch reshuffling . Despite the recent progress on deep unsupervised AD , current methods do not explicitly prevent the neural network from incorporating anomalies into their learned representation , thereby degrading the model performance . One way to address the issue is by assigning a weight to each data point , giving higher weights to the normal data to make the model more robust against anomalies . The idea of learning a weight for each data point is not new in supervised learning . A classic example is boosting ( Freund et al. , 1996 ) , where hard to classify examples are assigned higher weights to encourage the model to classify them more accurately . An opposite strategy is used in self-paced learning ( Kumar et al. , 2010 ) , where the algorithm assigns higher weights to easier-to-classify examples and lower weights to harder ones . This strategy was also used by other methods for learning from noisy labeled data , including Jiang et al . ( 2017 ) and Han et al . ( 2018 ) . Furthermore , there are many studies providing theoretical analysis on the benefits of choosing samples with smaller loss to drive the optimization algorithm ( Shen & Sanghavi , 2018 ; Shah et al. , 2020 ) . 3 METHODOLOGY . This section introduces the proposed robust collaborative autoencoder ( RCA ) framework and analyze its properties . Let X ∈ Rn×d denote the input data , where n is the number of observations and d is the number of features . Our goal is to classify each data point xi ∈ X as an anomaly or a normal observation . Let O ⊂ X denote the set of true anomalies in the data . We assume the anomaly ratio , = |O|/n , is given1 or can be approximately estimated . The RCA framework trains a pair of autoencoders , A1 andA2 , with different initializations . In each iteration during training , the autoencoders will each apply a forward pass on a mini-batch randomly sampled from the training data and compute the reconstruction error of each data point in the minibatch . The data points in the mini-batch are then sorted according to their reconstruction errors and each mini-batch selects the points with lowest errors to be exchanged with the other autoencoder . Each autoencoder subsequently performs a back-propagation step to update its model parameters using the samples it receives from the other autoencoder . Upon convergence , the averaged reconstruction error of each data point is used to determine the anomaly score . 
A pseudocode of the training phase for RCA is given in Algorithm 1 , while the testing phase is given in Algorithm 2 . Algorithm 1 : Robust Collaborative Autoencoders ( Training Phase ) input : training data Xtrn , test data Xtst , reconstruction loss function L , anomaly ratio , dropout rate r > 0 , and maximum training epochs : max epoch ; return trained autoencoders , A∗1 andA ∗ 2 ; Initialize autoencodersA1 andA2 ; sample selection rate β = 1 and best loss ξ∗ = +∞ ; while epoch≤ max epoch do for minibatch S in Xtrn do Ŝ1 ← forward ( A1 , S , dropout = 0 ) , Ŝ2 ← forward ( A2 , S , dropout = 0 ) ; c1 ← sample selection ( L ( Ŝ1 , S ) , β ) , c2 ← sample selection ( L ( Ŝ2 , S ) , β ) ; Ŝ1 ← forward ( A1 , S [ c2 ] , dropout = r ) , Ŝ2 ← forward ( A2 , S [ c1 ] , dropout = r ) ; A1 ←backprop ( Ŝ1 , S [ c2 ] , dropout = r ) , A2 ← backprop ( Ŝ2 , S [ c1 ] , dropout = r ) ; end X̂tst1 ← forward ( A1 , Xtst , dropout = 0 ) , X̂tst2 ← forward ( A2 , Xtst , dropout = 0 ) ; ξtest ← L ( X̂tst1 , Xtst ) + L ( X̂tst2 , Xtst ) ; if ξtest < ξ∗ then ξ∗ = ξtest , A ∗ 1 = A1 , A ∗ 2 = A2 end ; β = max ( β − max epoch , 1− ) end Algorithm 2 : Robust Collaborative Autoencoders ( Testing Phase ) input : test data Xtst , trained autoencodersA∗1 , A ∗ 2 , dropout rate r > 0 , size of ensemble v ; return anomaly score ; Initialize an empty set of reconstruction errors : ξ = { } . for i = 1 to v do ξ1= forward ( A∗1 , Xtst , dropout = r ) , ξ2=forward ( A ∗ 2 , Xtst , dropout = r ) ; ξ = ξ ∪ ( ξ1 + ξ2 ) /2 end anomaly score = average ( ξ ) ; RCA differs from conventional autoencoders in several ways . First , its autoencoders are trained using only selected data points with small reconstruction errors . The selected points are then exchanged between the autoencoders to avoid premature convergence . Furthermore , in the testing phase , each autoencoder applies a dropout mechanism to generate multiple predicted outputs . The averaged ensemble output is used as the final anomaly score . Details of these steps are given next . 1In practice , users would typically specify the top-k anomalies to be examined and verified , where k = n . | This paper presents a Robust Collaborative Autoencoder (RCA) for unsupervised anomaly detection. The authors focused on the overparameterization of existing NN-based unsupervised anomaly detection methods, and the proposed method aims to overcome the overparameterization problem. The main contibutinos of the proposed method are that (1) it uses two autoencoders, each of which is trained using only selected data points and (2) monte carlo (MC) dropout was used for inference. | SP:cae76295c4e38ce51c6bf1ed147ee4ea0569faed |
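Because the extracted pseudocode of Algorithm 1 above is hard to read, the following PyTorch sketch restates the core training step in runnable form. It is a minimal illustration rather than the authors' implementation: the autoencoder architecture, optimizer, toy data and hyperparameters are placeholders, and annealing the selection rate β toward 1 − ε is my reading of the garbled schedule line.

```python
import torch
import torch.nn as nn

def make_autoencoder(d, hidden=16, dropout=0.2):
    """A small fully-connected autoencoder; dropout is only active in train() mode."""
    return nn.Sequential(
        nn.Linear(d, hidden), nn.ReLU(), nn.Dropout(dropout),
        nn.Linear(hidden, hidden // 2), nn.ReLU(),
        nn.Linear(hidden // 2, hidden), nn.ReLU(), nn.Dropout(dropout),
        nn.Linear(hidden, d))

def per_sample_error(model, x):
    """Mean squared reconstruction error of every sample in the batch."""
    return ((model(x) - x) ** 2).mean(dim=1)

def rca_minibatch_step(A1, A2, opt1, opt2, x, beta):
    """One RCA update: each autoencoder keeps its beta-fraction of lowest-error
    samples and hands that selection to the *other* autoencoder for the update."""
    k = max(1, int(beta * x.shape[0]))

    A1.eval(); A2.eval()                                     # clean pass, no dropout
    with torch.no_grad():
        idx1 = torch.argsort(per_sample_error(A1, x))[:k]    # A1's low-error picks
        idx2 = torch.argsort(per_sample_error(A2, x))[:k]    # A2's low-error picks

    A1.train(); A2.train()                                   # update pass, dropout on
    loss1 = per_sample_error(A1, x[idx2]).mean()             # A1 trains on A2's picks
    opt1.zero_grad(); loss1.backward(); opt1.step()
    loss2 = per_sample_error(A2, x[idx1]).mean()             # A2 trains on A1's picks
    opt2.zero_grad(); loss2.backward(); opt2.step()

# Toy run: mostly "normal" points plus a small fraction of large-magnitude anomalies.
torch.manual_seed(0)
d, eps, max_epoch = 8, 0.1, 20
x_all = torch.cat([torch.randn(180, d), 6.0 * torch.randn(20, d)])
A1, A2 = make_autoencoder(d), make_autoencoder(d)            # different random inits
opt1 = torch.optim.Adam(A1.parameters(), lr=1e-3)
opt2 = torch.optim.Adam(A2.parameters(), lr=1e-3)

beta = 1.0
for epoch in range(max_epoch):
    for batch in x_all[torch.randperm(len(x_all))].split(50):
        rca_minibatch_step(A1, A2, opt1, opt2, batch, beta)
    beta = max(beta - eps / max_epoch, 1.0 - eps)             # anneal selection rate
```

The essential ingredient is the exchange: each network ranks the mini-batch by its own reconstruction error but is updated on the subset chosen by the other, which the paper argues avoids premature convergence on easy samples while still keeping likely anomalies out of the update.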
Unsupervised Anomaly Detection by Robust Collaborative Autoencoders | Unsupervised anomaly detection plays a critical role in many real-world applications , from computer security to healthcare . A common approach based on deep learning is to apply autoencoders to learn a feature representation of the normal ( non-anomalous ) observations and use the reconstruction error of each observation to detect anomalies present in the data . However , due to the high complexity brought upon by over-parameterization of the deep neural networks ( DNNs ) , the anomalies themselves may have small reconstruction errors , which degrades the performance of these methods . To address this problem , we present a robust framework for detecting anomalies using collaborative autoencoders . Unlike previous methods , our framework does not require supervised label information nor access to clean ( uncorrupted ) examples during training . We investigate the theoretical properties of our framework and perform extensive experiments to compare its performance against other DNN-based methods . Our experimental results show the superior performance of the proposed framework as well as its robustness to noise due to missing value imputation compared to the baseline methods . 1 INTRODUCTION . Anomaly detection ( AD ) is the task of identifying abnormal observations in the data . It has been successfully applied to many applications , from malware detection to medical diagnosis ( Chandola et al. , 2009 ) . Driven by the success of deep learning , AD methods based on deep neural networks ( DNNs ) ( Zhou & Paffenroth , 2017 ; Aggarwal & Sathe , 2017 ; Ruff et al. , 2018 ; Zong et al. , 2018 ; Hendrycks et al. , 2018 ) have attracted increasing attention recently . Unfortunately , DNN methods have several known drawbacks when applied to AD problems . First , since many of them are based on the supervised learning approach ( Hendrycks et al. , 2018 ) , this requires labeled examples of anomalies , which are often expensive to acquire and may not be representative enough in non-stationary environments . Supervised AD methods are also susceptible to the class imbalance problem as anomalies are rare compared to normal observations . Some DNN methods rely on having access to clean data to ensure that the feature representation learning is not contaminated by anomalies during training ( Zong et al. , 2018 ; Ruff et al. , 2018 ; Pidhorskyi et al. , 2018 ; Fan et al. , 2020 ) . This limits their applicability as acquiring a representative clean data itself is a tricky problem . Due to these limitations , there have been concerted efforts to develop robust unsupervised DNN methods that do not assume the availability of supervised labels nor clean training data ( Chandola et al. , 2009 ; Liu et al. , 2019 ) . Deep autoencoders are perhaps one of the most widely used unsupervised AD methods ( Sakurada & Yairi , 2014 ; Vincent et al. , 2010 ) . An autoencoder compresses the original data by learning a latent representation that minimizes the reconstruction loss . It is based on the working assumption that normal observations are easier to compress than anomalies . Unfortunately , such an assumption may not hold in practice since DNNs are often over-parameterized and have the capability to overfit the anomalies ( Zhang et al. , 2016 ) , thus degrading their overall performance . To improve their performance , the unsupervised DNN methods must consider the trade-off between model capacity and overfitting to the anomalies . 
One way to control the model capacity is through regularization . Many regularization methods for deep networks have been developed to control model capacity , e.g. , by constraining the norms of the model parameters or explicitly perturbing the training process ( Srivastava et al. , 2014 ) . However , these approaches do not prevent the networks from being able to perfectly fit random data ( Zhang et al. , 2016 ) . As a consequence , the regulariza- tion approaches can not prevent the anomalies from being memorized , especially in an unsupervised learning setting . Our work is motivated by recent advances in supervised learning on the robustness of DNNs for noisy labeled data by learning the weights of the training examples ( Jiang et al. , 2017 ; Han et al. , 2018 ) . Unlike previous studies , our goal is to learn the weights in an unsupervised learning fashion so that normal observations are assigned higher weights than the anomalies when calculating reconstruction error . The weights help to reduce the influence of anomalies when learning a feature representation of the data . Since existing approaches for weight learning are supervised , they are inapplicable to unsupervised AD . Instead , we propose an unsupervised robust collaborative autoencoders ( RCA ) method that trains a pair of autoencoders in a collaborative fashion and jointly learns their model parameters and sample weights . Each autoencoder selects a subset of samples with lowest reconstruction errors from a mini-batch to learn their feature representation . By discarding samples with high reconstruction errors , the algorithm is biased towards learning the representation for clean data , thereby reducing its risk of memorizing anomalies . However , by selecting only easyto-fit samples in each iteration , this may lead to premature convergence of the algorithm without sufficient exploration of the loss surface . Thus , instead of selecting the samples to update its own model parameters , each autoencoder will shuffle its selected samples to the other autoencoder , who will use the samples to update their model parameters . The sample selection procedure is illustrated in Figure 1 . During the testing phase , we apply the dropout mechanism used in training to produce multiple output predictions for each test point by repeating the forward pass multiple times . These ensemble of outputs are then aggregated to obtain a more robust estimate of the anomaly score . The main contributions of this paper are as follows . First , we present a novel framework for unsupervised AD using robust collaborative autoencoders ( RCA ) . Second , we provide rigorous theoretical analysis to understand the mechanism behind RCA . We also describe the convergence of RCA to the solution obtained if it was trained on clean data only . We show that the worst-case scenario for RCA is better than conventional autoencoders and analyze the conditions under which RCA is guaranteed to find the anomalies . Finally , we empirically demonstrate that RCA outperforms state-of-the-art unsupervised AD methods for the majority of the datasets used in this study , even in the presence of noise due to missing value imputation . 2 RELATED WORK . There are numerous methods developed for anomaly detection , a survey of which can be found in Chandola et al . ( 2009 ) . 
Reconstruction-based methods , such as principal component analysis ( PCA ) and autoencoders , are popular approaches , whereby the input data is projected to a lowerdimensional space before it was transformed back to its original feature space . The distance between the input and reconstructed data is used to determine the anomaly scores of the data points . More advanced unsupervised AD methods have been developed recently . Zhou & Paffenroth ( 2017 ) combined robust PCA with an autoencoder to decompose the data into a mixture of normal and anomaly parts . Zong et al . ( 2018 ) jointly learned a low dimensional embedding and density of the data , using the density of each point as its anomaly score while Ruff et al . ( 2018 ) extended the traditional one-class SVM approach to a deep learning setting . Wang et al . ( 2019 ) applied an end-to-end selfsupervised learning approach to the unsupervised AD problem . However , their approach is designed for image data , requiring operations such as rotation and patch reshuffling . Despite the recent progress on deep unsupervised AD , current methods do not explicitly prevent the neural network from incorporating anomalies into their learned representation , thereby degrading the model performance . One way to address the issue is by assigning a weight to each data point , giving higher weights to the normal data to make the model more robust against anomalies . The idea of learning a weight for each data point is not new in supervised learning . A classic example is boosting ( Freund et al. , 1996 ) , where hard to classify examples are assigned higher weights to encourage the model to classify them more accurately . An opposite strategy is used in self-paced learning ( Kumar et al. , 2010 ) , where the algorithm assigns higher weights to easier-to-classify examples and lower weights to harder ones . This strategy was also used by other methods for learning from noisy labeled data , including Jiang et al . ( 2017 ) and Han et al . ( 2018 ) . Furthermore , there are many studies providing theoretical analysis on the benefits of choosing samples with smaller loss to drive the optimization algorithm ( Shen & Sanghavi , 2018 ; Shah et al. , 2020 ) . 3 METHODOLOGY . This section introduces the proposed robust collaborative autoencoder ( RCA ) framework and analyze its properties . Let X ∈ Rn×d denote the input data , where n is the number of observations and d is the number of features . Our goal is to classify each data point xi ∈ X as an anomaly or a normal observation . Let O ⊂ X denote the set of true anomalies in the data . We assume the anomaly ratio , = |O|/n , is given1 or can be approximately estimated . The RCA framework trains a pair of autoencoders , A1 andA2 , with different initializations . In each iteration during training , the autoencoders will each apply a forward pass on a mini-batch randomly sampled from the training data and compute the reconstruction error of each data point in the minibatch . The data points in the mini-batch are then sorted according to their reconstruction errors and each mini-batch selects the points with lowest errors to be exchanged with the other autoencoder . Each autoencoder subsequently performs a back-propagation step to update its model parameters using the samples it receives from the other autoencoder . Upon convergence , the averaged reconstruction error of each data point is used to determine the anomaly score . 
A pseudocode of the training phase for RCA is given in Algorithm 1 , while the testing phase is given in Algorithm 2 . Algorithm 1 : Robust Collaborative Autoencoders ( Training Phase ) input : training data Xtrn , test data Xtst , reconstruction loss function L , anomaly ratio , dropout rate r > 0 , and maximum training epochs : max epoch ; return trained autoencoders , A∗1 andA ∗ 2 ; Initialize autoencodersA1 andA2 ; sample selection rate β = 1 and best loss ξ∗ = +∞ ; while epoch≤ max epoch do for minibatch S in Xtrn do Ŝ1 ← forward ( A1 , S , dropout = 0 ) , Ŝ2 ← forward ( A2 , S , dropout = 0 ) ; c1 ← sample selection ( L ( Ŝ1 , S ) , β ) , c2 ← sample selection ( L ( Ŝ2 , S ) , β ) ; Ŝ1 ← forward ( A1 , S [ c2 ] , dropout = r ) , Ŝ2 ← forward ( A2 , S [ c1 ] , dropout = r ) ; A1 ←backprop ( Ŝ1 , S [ c2 ] , dropout = r ) , A2 ← backprop ( Ŝ2 , S [ c1 ] , dropout = r ) ; end X̂tst1 ← forward ( A1 , Xtst , dropout = 0 ) , X̂tst2 ← forward ( A2 , Xtst , dropout = 0 ) ; ξtest ← L ( X̂tst1 , Xtst ) + L ( X̂tst2 , Xtst ) ; if ξtest < ξ∗ then ξ∗ = ξtest , A ∗ 1 = A1 , A ∗ 2 = A2 end ; β = max ( β − max epoch , 1− ) end Algorithm 2 : Robust Collaborative Autoencoders ( Testing Phase ) input : test data Xtst , trained autoencodersA∗1 , A ∗ 2 , dropout rate r > 0 , size of ensemble v ; return anomaly score ; Initialize an empty set of reconstruction errors : ξ = { } . for i = 1 to v do ξ1= forward ( A∗1 , Xtst , dropout = r ) , ξ2=forward ( A ∗ 2 , Xtst , dropout = r ) ; ξ = ξ ∪ ( ξ1 + ξ2 ) /2 end anomaly score = average ( ξ ) ; RCA differs from conventional autoencoders in several ways . First , its autoencoders are trained using only selected data points with small reconstruction errors . The selected points are then exchanged between the autoencoders to avoid premature convergence . Furthermore , in the testing phase , each autoencoder applies a dropout mechanism to generate multiple predicted outputs . The averaged ensemble output is used as the final anomaly score . Details of these steps are given next . 1In practice , users would typically specify the top-k anomalies to be examined and verified , where k = n . | The submission tackles unsupervised anomaly detection, specifically in a scenario where supervision labels are not available, only information about the ratio of anomalous examples in the data set. They suggest an architecture consisting of two auto-encoders collaboratively determining anomalous samples and updating their weights based on data that is deemed normal. The authors provide a theoretical analysis of the selection process, and validate anomaly detection performance on a range of experiments. | SP:cae76295c4e38ce51c6bf1ed147ee4ea0569faed |
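The testing phase (Algorithm 2) amounts to a Monte-Carlo-dropout ensemble over the two trained autoencoders. The sketch below is illustrative and uses invented function names; A1 and A2 stand for the trained networks, keeping the modules in train() mode is simply a convenient way to leave dropout active during the repeated forward passes, and the final thresholding by the assumed anomaly ratio ε follows the paper's footnote about examining the top-k scores.

```python
import torch

def rca_anomaly_scores(A1, A2, x_test, n_forward=10):
    """Algorithm 2: keep dropout ON, run several stochastic forward passes through
    both trained autoencoders, and average the per-sample reconstruction errors."""
    A1.train(); A2.train()                      # train() only to keep dropout stochastic
    scores = torch.zeros(x_test.shape[0])
    with torch.no_grad():
        for _ in range(n_forward):
            e1 = ((A1(x_test) - x_test) ** 2).mean(dim=1)
            e2 = ((A2(x_test) - x_test) ** 2).mean(dim=1)
            scores += (e1 + e2) / 2.0
    return scores / n_forward                   # higher score = more anomalous

def flag_anomalies(scores, eps):
    """Flag the top eps-fraction of scores as anomalies (eps = assumed anomaly ratio)."""
    k = max(1, int(eps * scores.numel()))
    threshold = torch.topk(scores, k).values.min()
    return scores >= threshold
```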
PseudoSeg: Designing Pseudo Labels for Semantic Segmentation | 1 INTRODUCTION . Image semantic segmentation is a core computer vision task that has been studied for decades . Compared with other vision tasks , such as image classification and object detection , human annotation of pixel-accurate segmentation is dramatically more expensive . Given sufficient pixellevel labeled training data ( i.e. , high-data regime ) , the current state-of-the-art segmentation models ( e.g. , DeepLabv3+ ( Chen et al. , 2018 ) ) produce satisfactory segmentation prediction for common practical usage . Recent exploration demonstrates improvement over high-data regime settings with large-scale data , including self-training ( Chen et al. , 2020a ; Zoph et al. , 2020 ) and backbone pretraining ( Zhang et al. , 2020a ) . In contrast to the high-data regime , the performance of segmentation models drop significantly , given very limited pixel-labeled data ( i.e. , low-data regime ) . Such ineffectiveness at the low-data regime hinders the applicability of segmentation models . Therefore , instead of improving high-data regime segmentation , our work focuses on data-efficient segmentation training that only relies on few pixellabeled data and leverages the availability of extra unlabeled or weakly annotated ( e.g. , image-level ) data to improve performance , with the aim of narrowing the gap to the supervised models trained with fully pixel-labeled data . Our work is inspired by the recent success in semi-supervised learning ( SSL ) for image classification , demonstrating promising performance given very limited labeled data and a sufficient amount of unlabeled data . Successful examples include MeanTeacher ( Tarvainen & Valpola , 2017 ) , UDA ( Xie et al. , 2019 ) , MixMatch ( Berthelot et al. , 2019b ) , FeatMatch ( Kuo et al. , 2020 ) , and FixMatch ( Sohn et al. , 2020a ) . One outstanding idea in this type of SSL is consistency training : making predictions consistent among multiple augmented images . FixMatch ( Sohn et al. , 2020a ) shows that using high-confidence one-hot pseudo labels obtained from weakly-augmented unlabeled data to train strongly-augmented counterpart is the key to the success of SSL in image classification . ∗Work done during internship at Google Cloud AI Research . However , effective pseudo labels and well-designed data augmentation are non-trivial to satisfy for semantic segmentation . Although we observe that many related works explore the second condition ( i.e. , augmentation ) for image segmentation to enable consistency training framework ( French et al. , 2020 ; Ouali et al. , 2020 ) , we show that a wise design of pseudo labels for segmentation has great veiled potentials . In this paper , we propose PseudoSeg , a one-stage training framework to improve image semantic segmentation by leveraging additional data either with image-level labels ( weakly-labeled data ) or without any labels . PseudoSeg presents a novel design of pseudo-labeling to infer effective structured pseudo labels of additional data . It then optimizes the prediction of strongly-augmented data to match its corresponding pseudo labels . In summary , we make the following contributions : • We propose a simple one-stage framework to improve semantic segmentation by using a limited amount of pixel-labeled data and sufficient unlabeled data or image-level labeled data . Our framework is simple to apply and therefore network architecture agnostic . 
• Directly applying consistency training approaches validated in image classification renders particular challenges in segmentation . We first demonstrate how well-calibrated soft pseudo labels obtained through wise fusion of predictions from diverse sources can greatly improve consistency training for segmentation . • We conduct extensive experimental studies on the PASCAL VOC 2012 and COCO datasets . Comprehensive analyses are conducted to validate the effectiveness of this method at not only the low-data regime but also the high-data regime . Our experiments study multiple important open questions about transferring SSL advances to segmentation tasks . 2 RELATED WORK . Semi-supervised classification . Semi-supervised learning ( SSL ) aims to improve model performance by incorporating a large amount of unlabeled data during training . Consistency regularization and entropy minimization are two common strategies for SSL . The intuition behind consistencybased approaches ( Laine & Aila , 2016 ; Sajjadi et al. , 2016 ; Miyato et al. , 2018 ; Tarvainen & Valpola , 2017 ) is that , the model output should remain unchanged when the input is perturbed . On the other hand , the entropy minimization strategy ( Grandvalet & Bengio , 2005 ) argues that the unlabeled data can be used to ensured classes are well-separated , which can be achieved by encouraging the model to output low-entropy predictions . Pseudo-labeling ( Lee , 2013 ) is one of the methods for implicit entropy minimization . Recently , holistic approaches ( Berthelot et al. , 2019b ; a ; Sohn et al. , 2020a ) combining both strategies have been proposed and achieved significant improvement . By redesigning the pseudo label , we propose an efficient one-stage semi-supervised learning framework of semantic segmentation for consistency training . Semi-supervised semantic segmentation . Collecting pixel-level annotations for semantic segmentation is costly and prone to error . Hence , leveraging unlabeled data in semantic segmentation is a natural fit . Early methods utilize a GAN-based model either to generate additional training data ( Souly et al. , 2017 ) or to learn a discriminator between the prediction and the ground truth mask ( Hung et al. , 2018 ; Mittal et al. , 2019 ) . Consistency regularization based approaches have also been proposed recently , by enforcing the predictions to be consistent , either from augmented input images ( French et al. , 2020 ; Kim et al. , 2020 ) , perturbed feature embeddings ( Ouali et al. , 2020 ) , or different networks ( Ke et al. , 2020 ) . Recently , Luo & Yang ( 2020 ) proposes a dual-branch training network to jointly learn from pixel-accurate and coarse labeled data , achieving good segmentation performance . To push the performance of state of the arts , iterative self-training approaches ( Chen et al. , 2020a ; Zoph et al. , 2020 ; Zhu et al. , 2020 ) have been proposed . These methods usually assume the available labeled data is enough to train a good teacher model , which will be used to generate pseudo labels for the student model . However , this condition might not satisfy in the low-data regime . Our proposed method , on the other hand , realizing the ideas of both consistency regularization and pseudo-labeling in segmentation , consistently improves the supervised baseline in both low-data and high-data regimes . Weakly-supervised semantic segmentation . 
Instead of supervising network training with accurate pixel-level labels , many prior works exploit weaker forms of annotations ( e.g. , bounding boxes ( Dai et al. , 2015 ) , scribbles ( Lin et al. , 2016 ) , image-level labels ) . Most recent approaches use imagelevel labels as the supervisory signal , which exploits the idea of class activation map ( CAM ) ( Zhou et al. , 2016 ) . Since the vanilla CAM only focus on the most discriminative region of objects , dif- ferent ways to refine CAM have been proposed , including partial image/feature erasing ( Hou et al. , 2018 ; Wei et al. , 2017 ; Li et al. , 2018 ) , using an additional saliency estimation model ( Oh et al. , 2017 ; Huang et al. , 2018 ; Wei et al. , 2018 ) , utilizing pixel similarity to propagate the initial score map ( Ahn & Kwak , 2018 ; Wang et al. , 2020 ) , or mining and co-segment the same category of objects across images ( Sun et al. , 2020 ; Zhang et al. , 2020b ) . While achieving promising results using the approaches mentioned above , most of them require a multi-stage training strategy . The refined score maps are optimized again using a dense-CRF model ( Krähenbühl & Koltun , 2011 ) , and then used as the target to train a separate segmentation network . On the other hand , we assume there exists a small number of fully-annotated data , which allows us to learn stronger segmentation models than general methods without needing pixel-labeled data . 3 THE PROPOSED METHOD . In analogous to SSL for classification , our training objective in PseudoSeg consists of a supervised loss Ls applied to pixel-level labeled data Dl , and a consistency constraint Lu applied to unlabeled data Du 1 . Specifically , the supervised loss Ls is the standard pixel-wise cross-entropy loss on the weakly augmented pixel-level labeled examples : Ls = 1 N × |Dl| ∑ x∈Dl N−1∑ i=0 CrossEntropy ( yi , fθ ( ω ( xi ) ) ) , ( 1 ) where θ represents the learnable parameters of the network function f and N denotes the number of valid labeled pixels in an image x ∈ RH×W×3 . yi ∈ RC is the ground truth label of a pixel i in H×W dimensions , and fθ ( ω ( xi ) ) ∈ RC is the predicted probability of pixel i , where C is the number of classes to predict and ω ( · ) denotes the weak ( common ) data augmentation operations used by Chen et al . ( 2018 ) . During training , the proposed PseudoSeg estimates a pseudo label ỹ ∈ RH×W×C for each stronglyaugmented unlabeled data x in Du , which is then used for computing the cross-entropy loss . The unsupervised objective can then be written as : Lu = 1 N × |Du| ∑ x∈Du N−1∑ i=0 CrossEntropy ( ỹi , fθ ( β ◦ ω ( xi ) ) ) , ( 2 ) where β ( · ) denotes a stronger data augmentation operation , which will be described in Section 3.2 . We illustrate the unlabeled data training branch in Figure 1 . 3.1 THE DESIGN OF STRUCTURED PSEUDO LABELS . The next important question is how to generate the desirable pseudo label ỹ . A straightforward solution is directly using the decoder output of a trained segmentation model after confidence threshold- 1For simplicity , here we illustrate the method with unlabeled data and then show it can be easily adapted to use image-level labeled data in Section 3.2. ing , as suggested by Sohn et al . ( 2020a ) ; Zoph et al . ( 2020 ) ; Xie et al . ( 2020 ) ; Sohn et al . ( 2020b ) . 
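As a reference point before the pseudo-label design is discussed further, the two objectives in Eq. (1) and Eq. (2) translate directly into a few lines of PyTorch. The sketch below assumes typical dense-prediction tensor shapes and a VOC-style ignore index of 255, neither of which is stated in the excerpt, and it detaches the soft pseudo label ỹ because ỹ is computed from the weakly augmented view and treated as a fixed target.

```python
import torch
import torch.nn.functional as F

def supervised_loss(logits, labels, ignore_index=255):
    """Eq. (1): pixel-wise cross-entropy on weakly augmented labeled images.
    logits: [B, C, H, W]; labels: [B, H, W] integer ground truth."""
    return F.cross_entropy(logits, labels, ignore_index=ignore_index)

def pseudo_label_loss(strong_logits, soft_pseudo):
    """Eq. (2): cross-entropy between the fixed soft pseudo label (from the weak view)
    and the prediction on the strongly augmented view; both are [B, C, H, W]."""
    log_p = F.log_softmax(strong_logits, dim=1)
    per_pixel = -(soft_pseudo.detach() * log_p).sum(dim=1)   # no gradient through ỹ
    return per_pixel.mean()

# Shapes only; real inputs would be the ω(x) and β(ω(x)) views of the same images.
B, C, H, W = 2, 21, 65, 65
loss = (supervised_loss(torch.randn(B, C, H, W), torch.randint(0, C, (B, H, W)))
        + pseudo_label_loss(torch.randn(B, C, H, W),
                            torch.softmax(torch.randn(B, C, H, W), dim=1)))
```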
However , as we demonstrate later in the experiments , the generated pseudo hard/soft labels as well as other post-processing of outputs are barely satisfactory in the low-data regime , and thus yield inferior final results . To address this issue , our design of pseudo-labeling has two key insights . First , we seek for a distinct yet efficient decision mechanisms to compensate for the potential errors of decoder outputs . Second , wisely fusing multiple sources of predictions to generate an ensemble and better-calibrated version of pseudo labels . Starting with localization . Compared with precise segmentation , learning localization is a simpler task as it only needs to provide coarser-grained outputs than pixel level of objects in images . Based on this motivation , we improve decoder predictions from the localization perspective . Class activation map ( CAM ) ( Zhou et al. , 2016 ) is a popular approach to provide localization for class-specific regions . CAM-based methods ( Hou et al. , 2018 ; Wei et al. , 2017 ; Ahn & Kwak , 2018 ) have been successfully adopted to tackle a different weakly supervised semantic segmentation task from us , where they assume only image-level labels are available . In practice , we adopt a variant of class activation map , Grad-CAM ( Selvaraju et al. , 2017 ) in PseudoSeg . From localization to segmentation . CAM estimates the strength of classifier responses on local feature maps . Thus , an inherent limitation of CAM-based approaches is that it is prone to attending only to the most discriminative regions . Although many weakly-supervised segmentation approaches ( Ahn & Kwak , 2018 ; Ahn et al. , 2019 ; Sun et al. , 2020 ) aim at refining CAM localization maps to segmentation masks , most of them have complicated post-processing steps , such as dense CRF ( Krähenbühl & Koltun , 2011 ) , which increases the model complexity when used for consistency training . Here we present a computationally efficient yet effective refinement alternative , which is learnable using available pixel-labeled data . Although CAM only localizes partial regions of interests , if we know the pairwise similarities between regions , we can propagate the CAM scores from the discriminative regions to the rest unattended regions . Actually , it has been shown in many works that the learned high-level deep features are usually good at similarity measurements of visual objects . In this paper , we find hypercolumn ( Hariharan et al. , 2015 ) with a learnable similarity measure function works fairly effective . Given the vanilla Grad-CAM output for all C classes , which can be viewed as a spatially-flatten 2-D vector of weight m ∈ RL×C , where each row mi is the response weight per class for one region i . Using a kernel functionK ( · , · ) : RH×RH → R that measures element-wise similarity given feature h ∈ RH of two regions , the propagated score m̂i ∈ RC can be computed as follows m̂i = mi + L−1∑ j=0 eK ( Wkhi , Wvhj ) ∑L−1 k=0 e K ( Wkhi , Wvhk ) mj ·Wc . ( 3 ) The goal of this function is to train Θ = { Wk , Wv ∈ RH×H , Wc ∈ RC×C } in order to propagate the high value in m to all adjacent elements in the feature space RH ( i.e. , hypercolumn features ) to region i . Adding mi in equation 3 indicates the skip-connection . To compute propagated score for all regions , the operations in equation 3 can be efficiently implemented with self-attention dotproduct ( Vaswani et al. , 2017 ) . 
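A compact way to read Eq. (3) is as a single dot-product self-attention layer over the L regions. The module below is a sketch only: the kernel K is taken to be a plain dot product, the class and variable names are invented, and the exact head the authors use is specified in their Appendix A rather than reproduced here.

```python
import torch
import torch.nn as nn

class SGCPropagation(nn.Module):
    """Eq. (3): propagate Grad-CAM scores m from discriminative regions to similar
    regions, using hypercolumn features h for pairwise similarity. Wk, Wv, Wc are
    the learnable matrices Θ; the skip-connection adds back the original m."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.Wk = nn.Linear(feat_dim, feat_dim, bias=False)
        self.Wv = nn.Linear(feat_dim, feat_dim, bias=False)
        self.Wc = nn.Linear(num_classes, num_classes, bias=False)

    def forward(self, h, m):
        # h: [L, feat_dim] hypercolumn features; m: [L, C] Grad-CAM weights per region
        attn = self.Wk(h) @ self.Wv(h).t()        # K(Wk h_i, Wv h_j) as a dot product
        attn = torch.softmax(attn, dim=-1)        # row-normalised similarity to region i
        return m + self.Wc(attn @ m)              # propagated scores plus skip-connection

L, feat_dim, C = 33 * 33, 256, 21
m_hat = SGCPropagation(feat_dim, C)(torch.randn(L, feat_dim),       # toy features
                                    torch.relu(torch.randn(L, C)))  # toy CAM weights
```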
For brevity , we denote this efficient refinement process output as selfattention Grad-CAM ( SGC ) maps in RH×H×C . Figure 6 in Appendix A specifies the architecture . Calibrated prediction fusion . SGC maps are obtained from low-resolution feature maps . It is then resized to the desired output resolution , and thus not sufficient at delineating crisp boundaries . However , compared to the segmentation decoder , SGC is capable of generating more locally-consistent masks . Thus , we propose a novel calibrated fusion strategy to take advantage of both decoder and SCG predictions for better pseudo labels . Specifically , given a batch of decoder outputs ( pre-softmax logits ) p̂ = fθ ( ω ( x ) ) and SGC maps m̂ computed from weakly-augmented data ω ( x ) , we generate the pseudo labels ỹ by F ( p̂ , m̂ ) = Sharpen ( γ Softmax ( p̂ Norm ( p̂ , m̂ ) ) + ( 1− γ ) Softmax ( m̂ Norm ( p̂ , m̂ ) ) , T ) . ( 4 ) Two critical procedures are proposed to use here to make the fusion process successful . First , p̂ and m̂ are from different decision mechanisms and they could have very different degrees of overconfidence . Therefore , we introduce the operation Norm ( a , b ) = √∑|a| i ( a 2 i + b 2 i ) as a nor- malization factor . It alleviates the over-confident probability after softmax , which could unfavorably dominate the resulted γ-averaged probability . Second , the distribution sharpening operation Sharpen ( a , T ) i = a 1/T i / ∑C j a 1/T j adjusts the temperature scalar T of categorical distribution ( Berthelot et al. , 2019b ; Chen et al. , 2020b ) . Figure 2 illustrates the predictions from different sources . More importantly , we investigate the pseudo-labeling from a calibration perspective ( Section 4.3 ) , demonstrating that the proposed soft pseudo label ỹ leads to a better calibration metric comparing to other possible fusion alternatives , and justifying why it benefits the final segmentation performance . Training . Our final training objective contains two extra losses : a classification loss Lx , and a segmentation lossLsa . First , to compute Grad-CAM , we add a one-layer classification head after the segmentation backbone and a multi-label classification loss Lx . Second , as specified in Appendix A ( Figure 6 ) , SGC maps are scaled as pixel-wise probabilities using one-layer convolution followed by softmax in equation 3 . Learning Θ to predict SGC maps needs pixel-labeled data Dl . It is achieved by an extra segmentation loss Lsa between SGC maps of pixel-labeled data and corresponding ground truth . All the loss terms are jointly optimized ( i.e. , Lu + Ls + Lx + Lsa ) , while Lsa only optimizes Θ ( achieved by stopping gradient ) . See Figure 7 in the appendix for further details . | This work addresses the task of semi-supervised learning (SSL) in semantic segmentation. Following recent SOTAs in SSL, this work also advocates for the use of pseudo-labels on unlabeled data and heavy data augmentation. The main novelty of this work is the novel way to construct higher-quality pseudo-labels: besides the pixel-wise classifier's probabilistic outputs, the authors leverage as well CAM-based activation maps, named as SGC, as an additional pseudo-label source. The final set of pseudo-labels is determined by linear combining the two soft pseudo-label sources with temperature adjustment. The authors conducted extensive experiments with lots of ablation studies to validate the proposed framework. | SP:776851b803da21fa83071c6f5c41e82a1ccc765a |
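Eq. (4) above reads most easily as three small operations: a shared normalization of both logit sources, a γ-weighted average of their softmax outputs, and a temperature sharpening. In the sketch below the division by Norm(p̂, m̂) is my reading of the extraction-garbled formula, the norm is computed per pixel over the class dimension (the sum in the text could also be read as a batch-level scalar), and γ and T are placeholder values rather than the paper's settings.

```python
import torch

def norm_factor(a, b, dim=-1, eps=1e-8):
    """Norm(a, b) = sqrt(sum_i (a_i^2 + b_i^2)), taken here over the class dimension."""
    return torch.sqrt((a ** 2 + b ** 2).sum(dim=dim, keepdim=True)) + eps

def sharpen(p, T, dim=-1):
    """Sharpen(a, T)_i = a_i^(1/T) / sum_j a_j^(1/T); lower T gives a more peaked ỹ."""
    p = p ** (1.0 / T)
    return p / p.sum(dim=dim, keepdim=True)

def fuse_pseudo_label(decoder_logits, sgc_logits, gamma=0.5, T=0.5):
    """Eq. (4): temper both sources by a shared norm, average their softmax outputs
    with weight gamma, then sharpen the mixture into the soft pseudo label ỹ."""
    z = norm_factor(decoder_logits, sgc_logits)
    p = torch.softmax(decoder_logits / z, dim=-1)
    m = torch.softmax(sgc_logits / z, dim=-1)
    return sharpen(gamma * p + (1.0 - gamma) * m, T)

# Per-pixel class logits from the two decision mechanisms (last dimension = classes).
y_tilde = fuse_pseudo_label(torch.randn(4, 4, 21), torch.randn(4, 4, 21))
```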
PseudoSeg: Designing Pseudo Labels for Semantic Segmentation | 1 INTRODUCTION . Image semantic segmentation is a core computer vision task that has been studied for decades . Compared with other vision tasks , such as image classification and object detection , human annotation of pixel-accurate segmentation is dramatically more expensive . Given sufficient pixellevel labeled training data ( i.e. , high-data regime ) , the current state-of-the-art segmentation models ( e.g. , DeepLabv3+ ( Chen et al. , 2018 ) ) produce satisfactory segmentation prediction for common practical usage . Recent exploration demonstrates improvement over high-data regime settings with large-scale data , including self-training ( Chen et al. , 2020a ; Zoph et al. , 2020 ) and backbone pretraining ( Zhang et al. , 2020a ) . In contrast to the high-data regime , the performance of segmentation models drop significantly , given very limited pixel-labeled data ( i.e. , low-data regime ) . Such ineffectiveness at the low-data regime hinders the applicability of segmentation models . Therefore , instead of improving high-data regime segmentation , our work focuses on data-efficient segmentation training that only relies on few pixellabeled data and leverages the availability of extra unlabeled or weakly annotated ( e.g. , image-level ) data to improve performance , with the aim of narrowing the gap to the supervised models trained with fully pixel-labeled data . Our work is inspired by the recent success in semi-supervised learning ( SSL ) for image classification , demonstrating promising performance given very limited labeled data and a sufficient amount of unlabeled data . Successful examples include MeanTeacher ( Tarvainen & Valpola , 2017 ) , UDA ( Xie et al. , 2019 ) , MixMatch ( Berthelot et al. , 2019b ) , FeatMatch ( Kuo et al. , 2020 ) , and FixMatch ( Sohn et al. , 2020a ) . One outstanding idea in this type of SSL is consistency training : making predictions consistent among multiple augmented images . FixMatch ( Sohn et al. , 2020a ) shows that using high-confidence one-hot pseudo labels obtained from weakly-augmented unlabeled data to train strongly-augmented counterpart is the key to the success of SSL in image classification . ∗Work done during internship at Google Cloud AI Research . However , effective pseudo labels and well-designed data augmentation are non-trivial to satisfy for semantic segmentation . Although we observe that many related works explore the second condition ( i.e. , augmentation ) for image segmentation to enable consistency training framework ( French et al. , 2020 ; Ouali et al. , 2020 ) , we show that a wise design of pseudo labels for segmentation has great veiled potentials . In this paper , we propose PseudoSeg , a one-stage training framework to improve image semantic segmentation by leveraging additional data either with image-level labels ( weakly-labeled data ) or without any labels . PseudoSeg presents a novel design of pseudo-labeling to infer effective structured pseudo labels of additional data . It then optimizes the prediction of strongly-augmented data to match its corresponding pseudo labels . In summary , we make the following contributions : • We propose a simple one-stage framework to improve semantic segmentation by using a limited amount of pixel-labeled data and sufficient unlabeled data or image-level labeled data . Our framework is simple to apply and therefore network architecture agnostic . 
• Directly applying consistency training approaches validated in image classification renders particular challenges in segmentation . We first demonstrate how well-calibrated soft pseudo labels obtained through wise fusion of predictions from diverse sources can greatly improve consistency training for segmentation . • We conduct extensive experimental studies on the PASCAL VOC 2012 and COCO datasets . Comprehensive analyses are conducted to validate the effectiveness of this method at not only the low-data regime but also the high-data regime . Our experiments study multiple important open questions about transferring SSL advances to segmentation tasks . 2 RELATED WORK . Semi-supervised classification . Semi-supervised learning ( SSL ) aims to improve model performance by incorporating a large amount of unlabeled data during training . Consistency regularization and entropy minimization are two common strategies for SSL . The intuition behind consistencybased approaches ( Laine & Aila , 2016 ; Sajjadi et al. , 2016 ; Miyato et al. , 2018 ; Tarvainen & Valpola , 2017 ) is that , the model output should remain unchanged when the input is perturbed . On the other hand , the entropy minimization strategy ( Grandvalet & Bengio , 2005 ) argues that the unlabeled data can be used to ensured classes are well-separated , which can be achieved by encouraging the model to output low-entropy predictions . Pseudo-labeling ( Lee , 2013 ) is one of the methods for implicit entropy minimization . Recently , holistic approaches ( Berthelot et al. , 2019b ; a ; Sohn et al. , 2020a ) combining both strategies have been proposed and achieved significant improvement . By redesigning the pseudo label , we propose an efficient one-stage semi-supervised learning framework of semantic segmentation for consistency training . Semi-supervised semantic segmentation . Collecting pixel-level annotations for semantic segmentation is costly and prone to error . Hence , leveraging unlabeled data in semantic segmentation is a natural fit . Early methods utilize a GAN-based model either to generate additional training data ( Souly et al. , 2017 ) or to learn a discriminator between the prediction and the ground truth mask ( Hung et al. , 2018 ; Mittal et al. , 2019 ) . Consistency regularization based approaches have also been proposed recently , by enforcing the predictions to be consistent , either from augmented input images ( French et al. , 2020 ; Kim et al. , 2020 ) , perturbed feature embeddings ( Ouali et al. , 2020 ) , or different networks ( Ke et al. , 2020 ) . Recently , Luo & Yang ( 2020 ) proposes a dual-branch training network to jointly learn from pixel-accurate and coarse labeled data , achieving good segmentation performance . To push the performance of state of the arts , iterative self-training approaches ( Chen et al. , 2020a ; Zoph et al. , 2020 ; Zhu et al. , 2020 ) have been proposed . These methods usually assume the available labeled data is enough to train a good teacher model , which will be used to generate pseudo labels for the student model . However , this condition might not satisfy in the low-data regime . Our proposed method , on the other hand , realizing the ideas of both consistency regularization and pseudo-labeling in segmentation , consistently improves the supervised baseline in both low-data and high-data regimes . Weakly-supervised semantic segmentation . 
Instead of supervising network training with accurate pixel-level labels , many prior works exploit weaker forms of annotations ( e.g. , bounding boxes ( Dai et al. , 2015 ) , scribbles ( Lin et al. , 2016 ) , image-level labels ) . Most recent approaches use imagelevel labels as the supervisory signal , which exploits the idea of class activation map ( CAM ) ( Zhou et al. , 2016 ) . Since the vanilla CAM only focus on the most discriminative region of objects , dif- ferent ways to refine CAM have been proposed , including partial image/feature erasing ( Hou et al. , 2018 ; Wei et al. , 2017 ; Li et al. , 2018 ) , using an additional saliency estimation model ( Oh et al. , 2017 ; Huang et al. , 2018 ; Wei et al. , 2018 ) , utilizing pixel similarity to propagate the initial score map ( Ahn & Kwak , 2018 ; Wang et al. , 2020 ) , or mining and co-segment the same category of objects across images ( Sun et al. , 2020 ; Zhang et al. , 2020b ) . While achieving promising results using the approaches mentioned above , most of them require a multi-stage training strategy . The refined score maps are optimized again using a dense-CRF model ( Krähenbühl & Koltun , 2011 ) , and then used as the target to train a separate segmentation network . On the other hand , we assume there exists a small number of fully-annotated data , which allows us to learn stronger segmentation models than general methods without needing pixel-labeled data . 3 THE PROPOSED METHOD . In analogous to SSL for classification , our training objective in PseudoSeg consists of a supervised loss Ls applied to pixel-level labeled data Dl , and a consistency constraint Lu applied to unlabeled data Du 1 . Specifically , the supervised loss Ls is the standard pixel-wise cross-entropy loss on the weakly augmented pixel-level labeled examples : Ls = 1 N × |Dl| ∑ x∈Dl N−1∑ i=0 CrossEntropy ( yi , fθ ( ω ( xi ) ) ) , ( 1 ) where θ represents the learnable parameters of the network function f and N denotes the number of valid labeled pixels in an image x ∈ RH×W×3 . yi ∈ RC is the ground truth label of a pixel i in H×W dimensions , and fθ ( ω ( xi ) ) ∈ RC is the predicted probability of pixel i , where C is the number of classes to predict and ω ( · ) denotes the weak ( common ) data augmentation operations used by Chen et al . ( 2018 ) . During training , the proposed PseudoSeg estimates a pseudo label ỹ ∈ RH×W×C for each stronglyaugmented unlabeled data x in Du , which is then used for computing the cross-entropy loss . The unsupervised objective can then be written as : Lu = 1 N × |Du| ∑ x∈Du N−1∑ i=0 CrossEntropy ( ỹi , fθ ( β ◦ ω ( xi ) ) ) , ( 2 ) where β ( · ) denotes a stronger data augmentation operation , which will be described in Section 3.2 . We illustrate the unlabeled data training branch in Figure 1 . 3.1 THE DESIGN OF STRUCTURED PSEUDO LABELS . The next important question is how to generate the desirable pseudo label ỹ . A straightforward solution is directly using the decoder output of a trained segmentation model after confidence threshold- 1For simplicity , here we illustrate the method with unlabeled data and then show it can be easily adapted to use image-level labeled data in Section 3.2. ing , as suggested by Sohn et al . ( 2020a ) ; Zoph et al . ( 2020 ) ; Xie et al . ( 2020 ) ; Sohn et al . ( 2020b ) . 
However , as we demonstrate later in the experiments , the generated pseudo hard/soft labels as well as other post-processing of outputs are barely satisfactory in the low-data regime , and thus yield inferior final results . To address this issue , our design of pseudo-labeling has two key insights . First , we seek for a distinct yet efficient decision mechanisms to compensate for the potential errors of decoder outputs . Second , wisely fusing multiple sources of predictions to generate an ensemble and better-calibrated version of pseudo labels . Starting with localization . Compared with precise segmentation , learning localization is a simpler task as it only needs to provide coarser-grained outputs than pixel level of objects in images . Based on this motivation , we improve decoder predictions from the localization perspective . Class activation map ( CAM ) ( Zhou et al. , 2016 ) is a popular approach to provide localization for class-specific regions . CAM-based methods ( Hou et al. , 2018 ; Wei et al. , 2017 ; Ahn & Kwak , 2018 ) have been successfully adopted to tackle a different weakly supervised semantic segmentation task from us , where they assume only image-level labels are available . In practice , we adopt a variant of class activation map , Grad-CAM ( Selvaraju et al. , 2017 ) in PseudoSeg . From localization to segmentation . CAM estimates the strength of classifier responses on local feature maps . Thus , an inherent limitation of CAM-based approaches is that it is prone to attending only to the most discriminative regions . Although many weakly-supervised segmentation approaches ( Ahn & Kwak , 2018 ; Ahn et al. , 2019 ; Sun et al. , 2020 ) aim at refining CAM localization maps to segmentation masks , most of them have complicated post-processing steps , such as dense CRF ( Krähenbühl & Koltun , 2011 ) , which increases the model complexity when used for consistency training . Here we present a computationally efficient yet effective refinement alternative , which is learnable using available pixel-labeled data . Although CAM only localizes partial regions of interests , if we know the pairwise similarities between regions , we can propagate the CAM scores from the discriminative regions to the rest unattended regions . Actually , it has been shown in many works that the learned high-level deep features are usually good at similarity measurements of visual objects . In this paper , we find hypercolumn ( Hariharan et al. , 2015 ) with a learnable similarity measure function works fairly effective . Given the vanilla Grad-CAM output for all C classes , which can be viewed as a spatially-flatten 2-D vector of weight m ∈ RL×C , where each row mi is the response weight per class for one region i . Using a kernel functionK ( · , · ) : RH×RH → R that measures element-wise similarity given feature h ∈ RH of two regions , the propagated score m̂i ∈ RC can be computed as follows m̂i = mi + L−1∑ j=0 eK ( Wkhi , Wvhj ) ∑L−1 k=0 e K ( Wkhi , Wvhk ) mj ·Wc . ( 3 ) The goal of this function is to train Θ = { Wk , Wv ∈ RH×H , Wc ∈ RC×C } in order to propagate the high value in m to all adjacent elements in the feature space RH ( i.e. , hypercolumn features ) to region i . Adding mi in equation 3 indicates the skip-connection . To compute propagated score for all regions , the operations in equation 3 can be efficiently implemented with self-attention dotproduct ( Vaswani et al. , 2017 ) . 
For brevity , we denote the output of this efficient refinement process as self-attention Grad-CAM ( SGC ) maps in R^{H \times H \times C} . Figure 6 in Appendix A specifies the architecture . Calibrated prediction fusion . SGC maps are obtained from low-resolution feature maps . They are then resized to the desired output resolution , and are thus not sufficient for delineating crisp boundaries . However , compared to the segmentation decoder , SGC is capable of generating more locally-consistent masks . Thus , we propose a novel calibrated fusion strategy to take advantage of both decoder and SGC predictions for better pseudo labels . Specifically , given a batch of decoder outputs ( pre-softmax logits ) \hat{p} = f_\theta( \omega( x ) ) and SGC maps \hat{m} computed from weakly-augmented data \omega( x ) , we generate the pseudo labels \tilde{y} by F( \hat{p} , \hat{m} ) = \mathrm{Sharpen}\big( \gamma \, \mathrm{Softmax}( \hat{p} / \mathrm{Norm}( \hat{p} , \hat{m} ) ) + ( 1 - \gamma ) \, \mathrm{Softmax}( \hat{m} / \mathrm{Norm}( \hat{p} , \hat{m} ) ) , T \big) . ( 4 ) Two critical procedures are used here to make the fusion process successful . First , \hat{p} and \hat{m} come from different decision mechanisms and could have very different degrees of over-confidence . Therefore , we introduce the operation \mathrm{Norm}( a , b ) = \sqrt{ \sum_{i}^{|a|} ( a_i^2 + b_i^2 ) } as a normalization factor . It alleviates the over-confident probabilities after the softmax , which could unfavorably dominate the resulting \gamma-averaged probability . Second , the distribution sharpening operation \mathrm{Sharpen}( a , T )_i = a_i^{1/T} / \sum_{j}^{C} a_j^{1/T} adjusts the temperature scalar T of the categorical distribution ( Berthelot et al. , 2019b ; Chen et al. , 2020b ) . Figure 2 illustrates the predictions from different sources . More importantly , we investigate the pseudo-labeling from a calibration perspective ( Section 4.3 ) , demonstrating that the proposed soft pseudo label \tilde{y} leads to a better calibration metric compared to other possible fusion alternatives , which justifies why it benefits the final segmentation performance . Training . Our final training objective contains two extra losses : a classification loss L_x and a segmentation loss L_sa . First , to compute Grad-CAM , we add a one-layer classification head after the segmentation backbone and a multi-label classification loss L_x . Second , as specified in Appendix A ( Figure 6 ) , SGC maps are scaled into pixel-wise probabilities using a one-layer convolution followed by the softmax in equation 3 . Learning \Theta to predict SGC maps needs pixel-labeled data D_l . This is achieved by an extra segmentation loss L_sa between the SGC maps of pixel-labeled data and the corresponding ground truth . All the loss terms are jointly optimized ( i.e. , L_u + L_s + L_x + L_sa ) , while L_sa only optimizes \Theta ( achieved by stopping gradients ) . See Figure 7 in the appendix for further details . | This paper focuses on the problem of semi-supervised semantic segmentation, where fewer pixel-level annotations are used to train the network. A new one-stage training framework is proposed to include the process of localization cue generation, pseudo label refinement and training of semantic segmentation. Inspired by recent success in semi-supervised learning (SSL), a novel calibrated fusion strategy is proposed to incorporate the concept of consistency training with data augmentation into the framework. Experiments on PASCAL VOC and MSCOCO benchmarks validate the effectiveness of the proposed method. | SP:776851b803da21fa83071c6f5c41e82a1ccc765a
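A minimal sketch of the calibrated fusion in equation ( 4 ) is given below , assuming PyTorch tensors of shape ( B , C , H , W ) for both sources of logits ; the default values of gamma and T are placeholders rather than the paper 's tuned settings .

import torch

def calibrated_fusion(p_hat, m_hat, gamma=0.5, temperature=0.5):
    # p_hat, m_hat: (B, C, H, W) pre-softmax logits from the decoder and the SGC branch.
    norm = torch.sqrt((p_hat ** 2 + m_hat ** 2).sum(dim=1, keepdim=True))   # Norm(p_hat, m_hat)
    fused = gamma * torch.softmax(p_hat / norm, dim=1) \
          + (1.0 - gamma) * torch.softmax(m_hat / norm, dim=1)
    sharpened = fused ** (1.0 / temperature)                                # Sharpen(., T)
    return sharpened / sharpened.sum(dim=1, keepdim=True)                   # soft pseudo label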
AriEL: Volume Coding for Sentence Generation Comparisons | 1 Introduction . Representation regularization , through the normalization and bounding of data , representations and gradients , is fundamental to fast deep learning training ( Ioffe and Szegedy , 2015 ; Kingma and Welling , 2014 ; He et al. , 2015 ; Perez et al. , 2018 ) . However , it seldom offers guarantees for boundedness , only encouraging it through initial conditions and loss summands . The final conditions , i.e . the representations learned , can be empirically explored through sampling of the latent space . This exercise reveals how often data not seen during training has no bounded representation , which can be regarded as undesirable if we want architectures that can quickly generalize outside the training bias for successful transfer learning . However , it is difficult to find the learned patterns through latent sampling , since neural networks typically map an input to a point in R^d ( Hochreiter and Schmidhuber , 1997 ; Vaswani et al. , 2017 ; LeCun et al. , 1989 ) . Some models do map inputs to volumes , to ease retrieval through random sampling . Variational Autoencoders ( Kingma and Welling , 2014 ; Bowman et al. , 2016 ; Chen et al. , 2018 ) encourage volume representations : by encoding an input into a probability distribution that is sampled before decoding , neighbouring points in R^d can end up representing one input . However , this requires two loss summands , a log-prior and a log-likelihood , that fight for two different causes : a smooth and volumetric representation , encouraged by the log-prior regularization , can worsen the performance encouraged by the log-likelihood . By partially giving up on smoothness , we propose AriEL , a method to construct volumes without a loss to encourage them . It maps sentences to volumes in R^d for efficient retrieval with random sampling , or by a network that operates in its continuous space . It fuses arithmetic coding ( AC ) ( Elias and Abramson , 1963 ) and k-d trees ( KdT ) ( Bentley , 1975 ) , so we name it Arithmetic coding and k-d trEes for Language ( AriEL ) . For simplicity we focus on dialogue language , even though AriEL works with any variable-length sequence of symbols . AriEL can be used as a benchmark to understand how natural language processing and generation models use the latent space . It completely fills the latent space with the learned language , using information theory , and bounds its representations within [ 0 , 1 ]^d . Its language model splits the latent space into volumes , guided by the probability assigned to the next symbol in a sentence . It can provide an agent with a simpler interface to a pretrained language model , e.g. , GPT-2 ( Radford et al. , 2019 ; Wolf et al. , 2020 ) , where the agent could choose the optimal d . We show how such a volume representation eases the retrieval of stored learned patterns and how to use it to set references for other models .
Our contributions are therefore : • AriEL , a novel unsupervised volume representation based on arithmetic coding and k-d trees ( Section 3.1 ) , to retrieve learned patterns with random sampling ; • the HouseQ dataset , consisting of a large context-free grammar and a random bias ( Section 3.3 ) , to automatically generate and evaluate sentences generated by trained models , and to find them in their latent space ; • the notion that explicit volume coding ( Section 4 ) can be a useful technique in tasks that involve the generation of sequences of discrete symbols , such as sentences ; • the observation that conventional learned codes such as the AE , VAE , or Transformer do not use the latent space effectively ( Section 4 ) , in the AriEL entropic coding sense . 2 Related Work . Volume codes : We define a volume code as two functions , an encoder and a decoder , where the encoder maps an input x into a set that contains compact and connected sets of R^d ( Munkres , 2018 ) , and the decoder maps every point within that set back to x . It is a distributed representation ( Hinton et al. , 1984 ) since the input x is represented by at least one point in R^d . We call the volume code implicit when the volumes are encouraged through a loss term ( Bengio et al. , 2013 ; Ng and Jordan , 2002 ; Kingma and Welling , 2014 ; Jebara , 2012 ) and explicit when the volumes are constructed through the model ’ s operations , independently of any loss and optimizer choice . Sentence generation through random sampling : Generative Adversarial Networks ( GAN ) ( Goodfellow et al. , 2014 ) map random samples to a learned generation through a two-player game procedure . Yu et al . ( 2017 ) ; Kusner and Hernández-Lobato ( 2016 ) ; Scialom et al . ( 2020 ) significantly improved GAN performance in text generation . Random sampling of the latent space is also used by Variational Autoencoders ( VAE ) ( Kingma and Welling , 2014 ) to smooth their representations . Bowman et al . ( 2016 ) ; Yang et al . ( 2017 ) ; Li et al . ( 2021 ) , and others , have refined their performance for text representation . AriEL can be used as a generator or a discriminator in a GAN , or as an encoder or a decoder in an autoencoder . However , it differs from them in the explicit procedure used to construct volumes . It fills the entire latent space with the learned patterns , to ease retrieval by uniform sampling . Arithmetic coding and neural networks : AC is one of the most efficient lossless data compression techniques ( Elias and Abramson , 1963 ; Witten et al. , 1987 ) . AC assigns a sequence to a segment in [ 0 , 1 ] with length proportional to its frequency . When converted into bits , frequent symbols take fewer bits than infrequent ones . AC is used for neural network compression ( Wiedemann et al. , 2019 ) , and neural networks are used in AC to perform prediction-based compression ( Jiang et al. , 1993 ; Pasero and Montuori , 2003 ; Tatwawadi , 2018 ) . We generalize AC to R^d to combine its properties with those of high-dimensional spaces , the domain of neural networks . K-d trees and neural networks : The KdT ( Bentley , 1975 ) is a data structure for storage that can handle different types of queries efficiently . It is typically used as a fast approximation to k-nearest neighbours in low dimensions ( Friedman et al. , 1977 ) . It gives a binary label to the data with respect to its median , moves through the k dimensions of the data , and repeats the process .
Neural networks are typically used in conjunction with KdT to reduce the dimensionality of the search space , so that KdT can perform queries efficiently ( Woodbridge et al. , 2018 ; Yin et al. , 2017 ; Vasudevan et al. , 2009 ) . We use KdT to make sure that the multidimensional AC uses all the space available . 3 Methodology . 3.1 AriEL : volume coding of language in continuous spaces . Figure 1 : Arithmetic coding and AriEL . In this illustrative example , the generating context-free grammar ( CFG ) is S → a | b | aa | ab | ac | bc | abc | bcc , and the bar plot on top indicates the frequency of those sentences in the dataset , as an extra bias to the language . Arithmetic Coding ( middle ) encodes any sequence of this CFG over a single dimension within [ 0 , 1 ] , and the frequency of the sentence determines the length assigned on that segment . AriEL ( bottom ) is a multidimensional extension of AC ( here in 2D ) , where the frequency information is preserved in the volumes . The Language Model provides the boundaries where the next symbols are to be found . For a 2D latent space , d = 2 , the axis to split to find symbol s_t is d_t = t mod d ; d_t = 0 , 1 represent the horizontal and vertical axes . AriEL maps the sequence/sentence ( s_1 , \cdots , s_n ) = ( s_t )_{t=1}^{n} of length n to a d-dimensional volume of size P( ( s_1 , \cdots , s_n ) ) = \prod_{t=1}^{n} P( s_t \mid ( s_{t'} )_{t' < t} ) in the [ 0 , 1 ]^d hypercube . When a symbol is used as a random variable we refer to it as s , s' , while s_t represents the observed sample at time step t . The words belong to a finite vocabulary s \in \{ 1 , \cdots , V_{size} \} , V_{size} \in N . To adapt KdT to more splits than binary , we split axis d_t = t mod d into V_{size} segments , one for each possible s_t . Each segment has length proportional to the probability of s_t . Then we turn to the following axis d_{t+1} , and continue the process of splitting and turning ( figure 1 and algorithm 1 ) . In figure 1 , s_t \in \{ a , b , c \} , plus an end-of-sentence symbol . The initial token s_1 = a is given a portion on d_1 of length P( a ) , larger than the portion given to s_1 = b or s_1 = c , since there are fewer sentences that start with b than with a , and there is none that starts with c : P( a ) > P( b ) > P( c ) = 0 . Then , we split d_2 according to the probability of the next symbol s_2 . In this case the second most likely symbol after s_1 = a is s_2 = c , so ac ends with a larger volume than aa , ab , and a . For sentences longer than d , the next symbol is assigned an axis that was previously split ( d_3 in the figure ) , but only the volume selected up to step t − 1 is further split . So , the sentence abc takes a portion of ab equal to P( c \mid ab ) , while ‘ ab ’ takes a portion equal to P( \langle end \rangle \mid ab ) , the probability of the sentence ending after ab . We estimate the language statistics with a Language Model ( LM ) , P_{LM}( s_t \mid ( s_{t'} )_{t' < t} ) . This approximates the frequency information that makes AC entropically efficient . For simplicity , the sentence is finally encoded as the center of the volume bounded by those segments , and any point within the volume is decoded to the same sentence . The extension to a larger [ a , b ]^d hypercube is straightforward , and could provide higher precision , but we restrict ourselves to [ 0 , 1 ]^d .
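The splitting-and-turning procedure above can be sketched in a few lines . The following is a minimal illustrative implementation of the encoding direction ( sentence → point ) , assuming a function lm_probs that returns the language model 's next-symbol distribution over the vocabulary ; this interface is an assumption for the sketch , not the paper 's implementation .

def ariel_encode(sentence, lm_probs, d, vocab):
    # lm_probs(prefix) -> list of P(symbol | prefix) over `vocab` (assumed interface).
    low = [0.0] * d
    high = [1.0] * d
    prefix = []
    for t, symbol in enumerate(sentence):
        axis = t % d                               # k-d-tree-style axis cycling
        probs = lm_probs(tuple(prefix))            # next-symbol distribution
        start = low[axis]
        width = high[axis] - low[axis]
        for v, p in zip(vocab, probs):             # arithmetic-coding-style split of the axis
            if v == symbol:
                low[axis], high[axis] = start, start + p * width
                break
            start += p * width
        prefix.append(symbol)
    return [(l + h) / 2.0 for l, h in zip(low, high)]   # center of the final volume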
AriEL has a computational complexity of O( n D^2 V_{size} ) for encoding and decoding ( algorithm 1 ) , where n is the length of the sequence , D is the dimensionality of the LM latent space , and V_{size} is the vocabulary size . AriEL has a minimum number of sequential operations of O( n ) for both encoding and decoding , on par with conventional seq2seq recurrent networks . | This paper proposes a sentence embedding method called AriEL. Specifically, based on arithmetic coding and k-d trees, AriEL maps sequences of discrete data into volumes in the latent space, and can then retrieve sequences by random sampling. AriEL is compared to other standard techniques such as the Transformer and Variational Autoencoders. Results show that it can generate more diverse and valid sentences. | SP:f88a0263fe87db598ed9d3b537430324ee29ddf2
AriEL: Volume Coding for Sentence Generation Comparisons | 1 Introduction . Representation regularization , through the normalization and bounding of data , representations and gradients , is fundamental to fast deep learning training ( Ioffe and Szegedy , 2015 ; Kingma and Welling , 2014 ; He et al. , 2015 ; Perez et al. , 2018 ) . However , it seldom offers guarantees for boundedness , only encouraging it through initial conditions and loss summands . The final conditions , i.e . the representations learned , can be empirically explored through the sampling of the latent space . This exercise reveals how often the data not seen during training has no bounded representation , which can be regarded as undesirable if we want architectures that can quickly generalize outside the training bias for successful transfer learning . However it is difficult to find the learned patterns through latent sampling , since typically neural networks map an input to a point in Rd ( Hochreiter and Schmidhuber , 1997 ; Vaswani et al. , 2017 ; LeCun et al. , 1989 ) . Some models do map inputs to volumes , to ease retrieval through random sampling . Variational Autoencoders ( Kingma and Welling , 2014 ; Bowman et al. , 2016 ; Chen et al. , 2018 ) encourage volume representations : by encoding an input into a probability distribution that is sampled before decoding , neighbouring points in Rd can end up representing one input . However , it requires two loss summands , a log-prior and a log-likelihood , that fight for two different causes . A smooth and volumetric representation , encouraged by the log-prior regularization , can worsen performance , encouraged by the log-likelihood . By giving partially up on smoothness , we propose AriEL , a method to construct volumes , without a loss to encourage them . It maps sentences to volumes in Rd for efficient retrieval with random sampling , or a network that operates in its continuous space . It fuses arithmetic coding ( AC ) ( Elias and Abramson , 1963 ) and k-d trees ( KdT ) ( Bentley , 1975 ) , so we name it Arithmetic coding and k-d trEes for Language ( AriEL ) . For simplicity we focus on dialogue language , even if AriEL works with any variable length sequence of symbols . AriEL can be used as a benchmark to understand natural language processing and generation models use of latent space . It fills completely the latent space with the language learned , using information theory , and bounding its representations within [ 0 , 1 ] d. Its language model splits the latent space in volumes , guided by the probability assigned to the next symbol in a sentence . It can provide an agent with a simpler interface with a pretrained language model , e.g . a GPT-2 ( Radford et al. , 2019 ; Wolf et al. , 2020 ) , where the agent could choose the optimal d. We prove how such a volume representation eases the retrieval of stored learned patterns and how to use it to set references for other models . 
Our contributions are therefore : • AriEL , a novel unsupervised volume representation based on arithmetic coding and k-d trees ( Section 3.1 ) , to retrieve learned patterns with random sampling ; • the HouseQ dataset , consisting of a large context-free grammar and a random bias ( Section 3.3 ) , to automatically generate and evaluate sentences generated by trained models , and find them in their latent space ; • the notion that explicit volume coding ( Section 4 ) can be a useful technique in tasks that involve the generation of sequences of discrete symbols , such as sentences ; • the observation that conventional learned codes like AE , VAE or Transformer , do not use the latent space effectively ( Section 4 ) , in the AriEL entropic coding sense . 2 Related Work . Volume codes : We define a volume code as two functions , an encoder and a decoder functions , where the encoder maps an input x into a set that contains compact and connected sets of Rd ( Munkres , 2018 ) , and the decoder maps every point within that set back to x . It is a distributed representation ( Hinton et al. , 1984 ) since the input x is represented by at least one Rd point . We call the volume code implicit , when the volumes are encouraged through a loss term ( Bengio et al. , 2013 ; Ng and Jordan , 2002 ; Kingma and Welling , 2014 ; Jebara , 2012 ) and explicit , when the volumes are constructed through the model ’ s operations , independently from any loss and optimizer choice . Sentence generation through random sampling : Generative Adversarial Networks ( GAN ) ( Goodfellow et al. , 2014 ) map random samples to a learned generation through a 2-players game procedure . Yu et al . ( 2017 ) ; Kusner and Hernández-Lobato ( 2016 ) ; Scialom et al . ( 2020 ) significantly improved GAN performance in text generation . Random sampling the latent space is used as well by Variational Autoencoders ( VAE ) ( Kingma and Welling , 2014 ) , to smooth their representations . Bowman et al . ( 2016 ) ; Yang et al . ( 2017 ) ; Li et al . ( 2021 ) , and others , have refined their performance for text representation . AriEL can be used as a generator or a discriminator in a GAN , or as an encoder or a decoder in an autoencoder . However it differs from them in the explicit procedure to construct volumes . It fills the entire latent space with the learned patterns , to ease retrieval by uniform sampling . Arithmetic coding and neural networks : AC is one of the most efficient lossless data compression techniques ( Elias and Abramson , 1963 ; Witten et al. , 1987 ) . AC assigns a sequence to a segment in [ 0 , 1 ] with length proportional to its frequency . When converted into bits , frequent symbols take less bits than unfrequent . AC is used for neural network compression ( Wiedemann et al. , 2019 ) and neural networks are used in AC to perform prediction based compression ( Jiang et al. , 1993 ; Pasero and Montuori , 2003 ; Tatwawadi , 2018 ) . We generalize AC to Rd , to combine its properties with the properties of highdimensional spaces , neural networks domain . K-d trees and neural networks : KdT ( Bentley , 1975 ) is a data structure for storage that can handle different types of queries efficiently . It is typically used as a fast approximation to k-nearest neighbours in low dimensions ( Friedman et al. , 1977 ) . It gives a binary label to the data with respect to its median . It moves through the k dimensions of the data and repeats the process . 
Neural networks are typically used in conjunction with KdT to reduce the dimensionality of the search space , so that KdT can perform queries efficiently ( Woodbridge et al. , 2018 ; Yin et al. , 2017 ; Vasudevan et al. , 2009 ) . We use KdT to make sure that the multidimensional AC uses all the space available . 3 Methodology . 3.1 AriEL : volume coding of language in continuous spaces . Figure 1 : Arithmetic coding and AriEL . In this illustrative example , the generating context-free grammar ( CFG ) is S → a | b | aa | ab | ac | bc | abc | bcc , and the bar plot on top indicates the frequency of those sentences in the dataset , as an extra bias to the language . Arithmetic Coding ( middle ) encodes any sequence of this CFG over a single dimension within [ 0 , 1 ] , and the frequency of the sentence determines the length assigned on that segment . AriEL ( bottom ) is a multidimensional extension of AC ( here in 2D ) , where the frequency information is preserved in the volumes . The Language Model provides the boundaries where the next symbols are to be found . For a 2D latent space , d = 2 , the axis to split to find symbol s_t is d_t = t mod d ; d_t = 0 , 1 represent the horizontal and vertical axes . AriEL maps the sequence/sentence ( s_1 , \cdots , s_n ) = ( s_t )_{t=1}^{n} of length n to a d-dimensional volume of size P( ( s_1 , \cdots , s_n ) ) = \prod_{t=1}^{n} P( s_t \mid ( s_{t'} )_{t' < t} ) in the [ 0 , 1 ]^d hypercube . When a symbol is used as a random variable we refer to it as s , s' , while s_t represents the observed sample at time step t . The words belong to a finite vocabulary s \in \{ 1 , \cdots , V_{size} \} , V_{size} \in N . To adapt KdT to more splits than binary , we split axis d_t = t mod d into V_{size} segments , one for each possible s_t . Each segment has length proportional to the probability of s_t . Then we turn to the following axis d_{t+1} , and continue the process of splitting and turning ( figure 1 and algorithm 1 ) . In figure 1 , s_t \in \{ a , b , c \} , plus an end-of-sentence symbol . The initial token s_1 = a is given a portion on d_1 of length P( a ) , larger than the portion given to s_1 = b or s_1 = c , since there are fewer sentences that start with b than with a , and there is none that starts with c : P( a ) > P( b ) > P( c ) = 0 . Then , we split d_2 according to the probability of the next symbol s_2 . In this case the second most likely symbol after s_1 = a is s_2 = c , so ac ends with a larger volume than aa , ab , and a . For sentences longer than d , the next symbol is assigned an axis that was previously split ( d_3 in the figure ) , but only the volume selected up to step t − 1 is further split . So , the sentence abc takes a portion of ab equal to P( c \mid ab ) , while ‘ ab ’ takes a portion equal to P( \langle end \rangle \mid ab ) , the probability of the sentence ending after ab . We estimate the language statistics with a Language Model ( LM ) , P_{LM}( s_t \mid ( s_{t'} )_{t' < t} ) . This approximates the frequency information that makes AC entropically efficient . For simplicity , the sentence is finally encoded as the center of the volume bounded by those segments , and any point within the volume is decoded to the same sentence . The extension to a larger [ a , b ]^d hypercube is straightforward , and could provide higher precision , but we restrict ourselves to [ 0 , 1 ]^d .
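The decoding direction ( point → sentence ) follows the same splitting rule in reverse : at each step the coordinate on the current axis selects the sub-segment , and therefore the symbol . Below is a minimal illustrative sketch , assuming a lm_probs interface for the language model , a vocabulary containing an end-of-sentence symbol eos , and a maximum sentence length ; these are assumptions for the sketch , not the paper 's implementation .

def ariel_decode(z, lm_probs, vocab, eos, max_len=50):
    # z: a point in [0, 1]^d; lm_probs(prefix) -> next-symbol probabilities over `vocab`.
    d = len(z)
    low = [0.0] * d
    high = [1.0] * d
    prefix = []
    for t in range(max_len):
        axis = t % d
        probs = lm_probs(tuple(prefix))
        base = low[axis]
        width = high[axis] - low[axis]
        u = (z[axis] - base) / width               # coordinate inside the current segment
        start = 0.0
        for v, p in zip(vocab, probs):             # find the sub-segment containing u
            if u < start + p or v == vocab[-1]:
                low[axis] = base + start * width
                high[axis] = base + (start + p) * width
                symbol = v
                break
            start += p
        if symbol == eos:                          # sentence ends here
            break
        prefix.append(symbol)
    return prefix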
AriEL has a computational complexity of O( n D^2 V_{size} ) for encoding and decoding ( algorithm 1 ) , where n is the length of the sequence , D is the dimensionality of the LM latent space , and V_{size} is the vocabulary size . AriEL has a minimum number of sequential operations of O( n ) for both encoding and decoding , on par with conventional seq2seq recurrent networks . | This paper proposes AriEL, a sentence encoding method onto the compact space [0, 1]^d. It leverages the essence of arithmetic coding and k-d trees to encode/decode sentences within a fixed region of the space. With the properties of arithmetic coding, in theory, it can map sentences of any length into individual values, and any point on [0, 1]^d can be mapped back to the corresponding sentence. Although the method relies on neural network based LMs to assign sentences to corresponding regions, the generality of the mapping between sentences and points is kept while changing the LM's behavior. The idea is interesting. | SP:f88a0263fe87db598ed9d3b537430324ee29ddf2
Mastering Atari with Discrete World Models | 1 INTRODUCTION . To successfully operate in unknown environments , reinforcement learning agents need to learn about their environments over time . World models are an explicit way to represent an agent ’ s knowledge about its environment . Compared to model-free reinforcement learning that learns through trial and error , world models facilitate generalization and can predict the outcomes of potential actions to enable planning ( Sutton , 1991 ) . Capturing general aspects of the environment , world models have been shown to be effective for transfer to novel tasks ( Byravan et al. , 2019 ) , directed exploration ( Sekar et al. , 2020 ) , and generalization from offline datasets ( Yu et al. , 2020 ) . When the inputs are high-dimensional images , latent dynamics models predict ahead in an abstract latent space ( Watter et al. , 2015 ; Ha and Schmidhuber , 2018 ; Hafner et al. , 2018 ; Zhang et al. , 2019 ) . Predicting compact representations instead of images has been hypothesized to reduce accumulating errors , and their small memory footprint enables thousands of parallel predictions on a single GPU ( Hafner et al. , 2018 ; 2019 ) . Leveraging this approach , the recent Dreamer agent ( Hafner et al. , 2019 ) has solved a wide range of continuous control tasks from image inputs . Despite their intriguing properties , world models have so far not been accurate enough to compete with the state-of-the-art model-free algorithms on the most competitive benchmarks . The well-established Atari benchmark ( Bellemare et al. , 2013 ) has historically required model-free algorithms to achieve human-level performance , such as DQN ( Mnih et al. , 2015 ) , A3C ( Mnih et al. , 2016 ) , or Rainbow ( Hessel et al. , 2018 ) . Several attempts at learning accurate world models of Atari games have been made , without achieving competitive performance ( Oh et al. , 2015 ; Chiappa et al. , 2017 ; Kaiser et al. , 2019 ) . On the other hand , the recently proposed MuZero agent ( Schrittwieser et al. , 2019 ) shows that planning can achieve impressive performance on board games and deterministic Atari games , given extensive engineering effort and a vast computational budget . However , its implementation is not available to the public , and it would require over 2 months of computation to train even one agent on a GPU , rendering it impractical for most research groups . In this paper , we introduce DreamerV2 , the first reinforcement learning agent that achieves human-level performance on the Atari benchmark by learning behaviors purely within a separately trained world model , as shown in Figure 1 . Learning successful behaviors purely within the world model demonstrates that the world model learns to accurately represent the environment . To achieve this , we apply small modifications to the Dreamer agent ( Hafner et al. , 2019 ) , such as using discrete latents and balancing terms within the KL loss . Using a single GPU and a single environment instance , DreamerV2 outperforms the top single-GPU Atari agents Rainbow ( Hessel et al. , 2018 ) and IQN ( Dabney et al. , 2018 ) , which rest upon years of model-free reinforcement learning research ( Van Hasselt et al. , 2015 ; Schaul et al. , 2015 ; Wang et al. , 2016 ; Bellemare et al. , 2017 ; Fortunato et al. , 2017 ) . Moreover , aspects of these algorithms are complementary to our world model and could be integrated into the Dreamer framework in the future .
To rigorously compare the algorithms , we report scores normalized by both a human gamer ( Mnih et al. , 2015 ) and the human world record ( Toromanoff et al. , 2019 ) , and make a suggestion for reporting scores going forward . 2 DREAMERV2 . We present DreamerV2 , an evolution of the Dreamer agent ( Hafner et al. , 2019 ) . We refer to the original Dreamer agent as DreamerV1 throughout this paper . This section describes the complete DreamerV2 algorithm , consisting of the three typical components of a model-based agent ( Sutton , 1991 ) . We learn the world model from a dataset of past experience , learn an actor and a critic from imagined sequences of compact model states , and execute the actor in the environment to grow the experience dataset . In Appendix C , we include a list of changes that we applied to DreamerV1 and which of them we found to increase empirical performance . 2.1 WORLD MODEL LEARNING . World models summarize an agent ’ s experience into a predictive model that can be used in place of the environment to learn behaviors . When inputs are high-dimensional images , it is beneficial to learn compact state representations of the inputs and to predict ahead in this learned latent space ( Watter et al. , 2015 ; Karl et al. , 2016 ; Ha and Schmidhuber , 2018 ) . These models are called latent dynamics models . Predicting ahead in latent space not only facilitates long-term predictions , it also allows efficiently predicting thousands of compact state sequences in parallel in a single batch , without having to generate images . DreamerV2 builds upon the world model that was introduced by PlaNet ( Hafner et al. , 2018 ) and used in DreamerV1 , by replacing its Gaussian latents with categorical variables . Experience dataset The world model is trained from the agent ’ s growing dataset of past experience that contains sequences of images x_{1:T} , actions a_{1:T} , rewards r_{1:T} , and discount factors \gamma_{1:T} . The discount factors equal a fixed hyperparameter \gamma = 0.999 for time steps within an episode and are set to zero for terminal time steps . For training , we use batches of B = 50 sequences of fixed length L = 50 that are sampled randomly within the stored episodes . To observe enough episode ends during training , we sample the start index of each training sequence uniformly within the episode and then clip it so as not to exceed the episode length minus the training sequence length . Model components The world model consists of an image encoder , a Recurrent State-Space Model ( RSSM ; Hafner et al. , 2018 ) to learn the dynamics , and predictors for the image , reward , and discount factor . The world model is summarized in Figure 2 . The RSSM uses a sequence of deterministic recurrent states h_t , from which it computes two distributions over stochastic states at each step . The posterior state z_t incorporates information about the current image x_t , while the prior state \hat{z}_t aims to predict the posterior without access to the current image . The concatenation of the deterministic and stochastic states forms the compact model state . From the posterior model state , we reconstruct the current image x_t and predict the reward r_t and discount factor \gamma_t . The model components are :
RSSM recurrent model : h_t = f_\phi( h_{t-1} , z_{t-1} , a_{t-1} )
Representation model : z_t \sim q_\phi( z_t \mid h_t , x_t )
Transition predictor : \hat{z}_t \sim p_\phi( \hat{z}_t \mid h_t )
Image predictor : \hat{x}_t \sim p_\phi( \hat{x}_t \mid h_t , z_t )
Reward predictor : \hat{r}_t \sim p_\phi( \hat{r}_t \mid h_t , z_t )
Discount predictor : \hat{\gamma}_t \sim p_\phi( \hat{\gamma}_t \mid h_t , z_t ) . ( 1 )
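A minimal sketch of one RSSM step , following the components in equation ( 1 ) , is given below . It assumes PyTorch ; the layer sizes , the action dimensionality , and the choice of a 32 × 32 vector of categorical latents are illustrative assumptions rather than the released architecture .

import torch
import torch.nn as nn

class RSSMStep(nn.Module):
    # One step of the RSSM: deterministic update, then prior and posterior over z_t.
    def __init__(self, deter=600, stoch=32, classes=32, action_dim=18, embed_dim=1024):
        super().__init__()
        self.gru = nn.GRUCell(stoch * classes + action_dim, deter)
        self.prior_head = nn.Linear(deter, stoch * classes)              # p(z_t | h_t)
        self.post_head = nn.Linear(deter + embed_dim, stoch * classes)   # q(z_t | h_t, x_t)
        self.stoch, self.classes = stoch, classes

    def forward(self, h_prev, z_prev, action, image_embed):
        # z_prev: (B, stoch, classes) one-hot sample from the previous step.
        h_t = self.gru(torch.cat([z_prev.flatten(1), action], dim=-1), h_prev)
        prior_logits = self.prior_head(h_t).view(-1, self.stoch, self.classes)
        post_logits = self.post_head(torch.cat([h_t, image_embed], dim=-1))
        post_logits = post_logits.view(-1, self.stoch, self.classes)
        return h_t, prior_logits, post_logits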
All components are implemented as neural networks , and \phi denotes their combined parameter vector . The transition predictor guesses the next model state only from the current model state and the action , without using the next image , so that we can later learn behaviors by predicting sequences of model states without having to observe or generate images . The discount predictor lets us estimate the probability of an episode ending when learning behaviors from model predictions . Neural networks The representation model is implemented as a Convolutional Neural Network ( CNN ; LeCun et al. , 1989 ) followed by a Multi-Layer Perceptron ( MLP ) that receives the image embedding and the deterministic recurrent state . The RSSM uses a Gated Recurrent Unit ( GRU ; Cho et al. , 2014 ) to compute the deterministic recurrent states . The model state is the concatenation of the deterministic GRU state and a sample of the stochastic state . The image predictor is a transposed CNN , and the transition , reward , and discount predictors are MLPs . We down-scale the 84 × 84 grayscale images to 64 × 64 pixels so that we can apply the convolutional architecture of DreamerV1 .
Algorithm 1 : Straight-Through Gradients with Automatic Differentiation
sample = one_hot ( draw ( logits ) )            # sample has no gradient
probs = softmax ( logits )                      # want gradient of this
sample = sample + probs - stop_grad ( probs )   # has gradient of probs
We use the ELU activation function for all components of the model ( Clevert et al. , 2015 ) . The world model uses a total of 20M trainable parameters . Distributions The image predictor outputs the mean of a diagonal Gaussian likelihood with unit variance , the reward predictor outputs a univariate Gaussian with unit variance , and the discount predictor outputs a Bernoulli likelihood . In prior work , the latent variable in the model state was a diagonal Gaussian that used reparameterization gradients during backpropagation ( Kingma and Welling , 2013 ; Rezende et al. , 2014 ) . In DreamerV2 , we instead use a vector of several categorical variables and optimize them using straight-through gradients ( Bengio et al. , 2013 ) , which are easy to implement using automatic differentiation as shown in Algorithm 1 . We discuss possible benefits of categorical over Gaussian latents in the experiments section . Loss function All components of the world model are optimized jointly . The distributions produced by the image predictor , reward predictor , discount predictor , and transition predictor are trained to maximize the log-likelihood of their corresponding targets . The representation model is trained to produce model states that facilitate these prediction tasks , through the expectation below . Moreover , it is regularized to produce model states with high entropy , such that the model becomes robust to many different model states during training . The loss function for learning the world model is : L( \phi ) \doteq E_{ q_\phi( z_{1:T} \mid a_{1:T} , x_{1:T} ) } \Big[ \sum_{t=1}^{T} \underbrace{ - \ln p_\phi( x_t \mid h_t , z_t ) }_{ \text{image log loss} } \underbrace{ - \ln p_\phi( r_t \mid h_t , z_t ) }_{ \text{reward log loss} } \underbrace{ - \ln p_\phi( \gamma_t \mid h_t , z_t ) }_{ \text{discount log loss} } + \beta \underbrace{ \mathrm{KL}\big[ q_\phi( z_t \mid h_t , x_t ) \,\|\, p_\phi( z_t \mid h_t ) \big] }_{ \text{KL loss} } \Big] . ( 2 ) We jointly minimize the loss function with respect to the vector \phi that contains all parameters of the world model , using the Adam optimizer ( Kingma and Ba , 2014 ) . We scale the KL loss by \beta = 0.1 for Atari and by \beta = 1.0 for continuous control ( Higgins et al. , 2016 ) .
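The joint objective in equation ( 2 ) can be assembled from the individual predictor distributions . The snippet below is a minimal illustrative sketch using PyTorch distribution objects ; it averages over batch and time for simplicity , omits the KL balancing discussed next , and assumes the caller has already built the posterior and prior distributions over z_t .

import torch.distributions as td

def world_model_loss(image_dist, reward_dist, discount_dist, posterior, prior,
                     images, rewards, discounts, beta=0.1):
    # Negative log-likelihoods of the three predictors plus the scaled KL term (Eq. 2).
    recon = -image_dist.log_prob(images).mean()
    reward = -reward_dist.log_prob(rewards).mean()
    discount = -discount_dist.log_prob(discounts).mean()
    kl = td.kl_divergence(posterior, prior).mean()
    return recon + reward + discount + beta * kl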
KL balancing The world model loss function in Equation 2 is the ELBO or variational free energy of a hidden Markov model that is conditioned on the action sequence . The world model can thus be interpreted as a sequential VAE , where the representation model is the approximate posterior and the transition predictor is the temporal prior . In the ELBO objective , the KL loss serves two purposes : it trains the prior toward the representations , and it regularizes the representations toward the prior . However , learning the transition function is difficult and we want to avoid regularizing the representations toward a poorly trained prior . To solve this problem , we minimize the KL loss faster with respect to the prior than the representations by using different learning rates , α = 0.8 for the prior and 1 − α for the approximate posterior . We implement this technique as shown in Algorithm 2 and refer to it as KL balancing . KL balancing encourages learning an accurate prior over increasing posterior entropy , so that the prior better approximates the aggregate posterior . KL balancing is different from and orthogonal to beta-VAEs ( Higgins et al. , 2016 ) . | The authors introduce DreamerV2, a modification of the influential Dreamer RL agent (hereafter refered to as DreamerV1). The primary changes from DreamerV1 are a discrete latent space and a modified loss function (and with it, a modified optimization scheme). As in DreamerV1, the agent trains a world model with environment experience, and the policy is learned by "imagining" within the learned latent space using the world model to simulate transitions and rewards. They demonstrate superior performance over a variety of successful benchmarks that use similar compute (1 GPU, 1 environment -- e.g. MuZero, which requires vastly more, is not considered) on Atari. Further, they analyze several ways of aggregating Atari scores, and (while their algorithm performs best of those tried in each aggregation), they recommend one aggregation method (along with several other choices made for benchmarking) going forward. | SP:e5b6ac071882028ba6191098c718861340728918 |
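The KL balancing procedure described above refers to an Algorithm 2 that is not reproduced in this excerpt . A minimal sketch of one way to implement it with stop-gradients is shown below , assuming PyTorch categorical distributions built from the posterior and prior logits ; only the mixing weight alpha = 0.8 is taken from the text , the rest is an assumption .

import torch.distributions as td

def balanced_kl(post_logits, prior_logits, alpha=0.8):
    # Train the prior faster (weight alpha) than the posterior (weight 1 - alpha)
    # by mixing two KL terms, each with gradients stopped on one side.
    post = td.Categorical(logits=post_logits)
    prior = td.Categorical(logits=prior_logits)
    post_sg = td.Categorical(logits=post_logits.detach())
    prior_sg = td.Categorical(logits=prior_logits.detach())
    value = alpha * td.kl_divergence(post_sg, prior) \
          + (1.0 - alpha) * td.kl_divergence(post, prior_sg)
    return value.mean()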
Mastering Atari with Discrete World Models | 1 INTRODUCTION . To successfully operate in unknown environments , reinforcement learning agents need to learn about their environments over time . World models are an explicit way to represent an agent ’ s knowledge about its environment . Compared to model-free reinforcement learning that learns through trial and error , world models facilitate generalization and can predict the outcomes of potential actions to enable planning ( Sutton , 1991 ) . Capturing general aspects of the environment , world models have been shown to be effective for transfer to novel tasks ( Byravan et al. , 2019 ) , directed exploration ( Sekar et al. , 2020 ) , and generalization from offline datasets ( Yu et al. , 2020 ) . When the inputs are high-dimensional images , latent dynamics models predict ahead in an abstract latent space ( Watter et al. , 2015 ; Ha and Schmidhuber , 2018 ; Hafner et al. , 2018 ; Zhang et al. , 2019 ) . Predicting compact representations instead of images has been hypothesized to reduce accumulating errors , and their small memory footprint enables thousands of parallel predictions on a single GPU ( Hafner et al. , 2018 ; 2019 ) . Leveraging this approach , the recent Dreamer agent ( Hafner et al. , 2019 ) has solved a wide range of continuous control tasks from image inputs . Despite their intriguing properties , world models have so far not been accurate enough to compete with the state-of-the-art model-free algorithms on the most competitive benchmarks . The well-established Atari benchmark ( Bellemare et al. , 2013 ) has historically required model-free algorithms to achieve human-level performance , such as DQN ( Mnih et al. , 2015 ) , A3C ( Mnih et al. , 2016 ) , or Rainbow ( Hessel et al. , 2018 ) . Several attempts at learning accurate world models of Atari games have been made , without achieving competitive performance ( Oh et al. , 2015 ; Chiappa et al. , 2017 ; Kaiser et al. , 2019 ) . On the other hand , the recently proposed MuZero agent ( Schrittwieser et al. , 2019 ) shows that planning can achieve impressive performance on board games and deterministic Atari games , given extensive engineering effort and a vast computational budget . However , its implementation is not available to the public , and it would require over 2 months of computation to train even one agent on a GPU , rendering it impractical for most research groups . In this paper , we introduce DreamerV2 , the first reinforcement learning agent that achieves human-level performance on the Atari benchmark by learning behaviors purely within a separately trained world model , as shown in Figure 1 . Learning successful behaviors purely within the world model demonstrates that the world model learns to accurately represent the environment . To achieve this , we apply small modifications to the Dreamer agent ( Hafner et al. , 2019 ) , such as using discrete latents and balancing terms within the KL loss . Using a single GPU and a single environment instance , DreamerV2 outperforms the top single-GPU Atari agents Rainbow ( Hessel et al. , 2018 ) and IQN ( Dabney et al. , 2018 ) , which rest upon years of model-free reinforcement learning research ( Van Hasselt et al. , 2015 ; Schaul et al. , 2015 ; Wang et al. , 2016 ; Bellemare et al. , 2017 ; Fortunato et al. , 2017 ) . Moreover , aspects of these algorithms are complementary to our world model and could be integrated into the Dreamer framework in the future .
To rigorously compare the algorithms , we report scores normalized by both a human gamer ( Mnih et al. , 2015 ) and the human world record ( Toromanoff et al. , 2019 ) , and make a suggestion for reporting scores going forward . 2 DREAMERV2 . We present DreamerV2 , an evolution of the Dreamer agent ( Hafner et al. , 2019 ) . We refer to the original Dreamer agent as DreamerV1 throughout this paper . This section describes the complete DreamerV2 algorithm , consisting of the three typical components of a model-based agent ( Sutton , 1991 ) . We learn the world model from a dataset of past experience , learn an actor and a critic from imagined sequences of compact model states , and execute the actor in the environment to grow the experience dataset . In Appendix C , we include a list of changes that we applied to DreamerV1 and which of them we found to increase empirical performance . 2.1 WORLD MODEL LEARNING . World models summarize an agent ’ s experience into a predictive model that can be used in place of the environment to learn behaviors . When inputs are high-dimensional images , it is beneficial to learn compact state representations of the inputs and to predict ahead in this learned latent space ( Watter et al. , 2015 ; Karl et al. , 2016 ; Ha and Schmidhuber , 2018 ) . These models are called latent dynamics models . Predicting ahead in latent space not only facilitates long-term predictions , it also allows efficiently predicting thousands of compact state sequences in parallel in a single batch , without having to generate images . DreamerV2 builds upon the world model that was introduced by PlaNet ( Hafner et al. , 2018 ) and used in DreamerV1 , by replacing its Gaussian latents with categorical variables . Experience dataset The world model is trained from the agent ’ s growing dataset of past experience that contains sequences of images x_{1:T} , actions a_{1:T} , rewards r_{1:T} , and discount factors \gamma_{1:T} . The discount factors equal a fixed hyperparameter \gamma = 0.999 for time steps within an episode and are set to zero for terminal time steps . For training , we use batches of B = 50 sequences of fixed length L = 50 that are sampled randomly within the stored episodes . To observe enough episode ends during training , we sample the start index of each training sequence uniformly within the episode and then clip it so as not to exceed the episode length minus the training sequence length . Model components The world model consists of an image encoder , a Recurrent State-Space Model ( RSSM ; Hafner et al. , 2018 ) to learn the dynamics , and predictors for the image , reward , and discount factor . The world model is summarized in Figure 2 . The RSSM uses a sequence of deterministic recurrent states h_t , from which it computes two distributions over stochastic states at each step . The posterior state z_t incorporates information about the current image x_t , while the prior state \hat{z}_t aims to predict the posterior without access to the current image . The concatenation of the deterministic and stochastic states forms the compact model state . From the posterior model state , we reconstruct the current image x_t and predict the reward r_t and discount factor \gamma_t . The model components are :
RSSM recurrent model : h_t = f_\phi( h_{t-1} , z_{t-1} , a_{t-1} )
Representation model : z_t \sim q_\phi( z_t \mid h_t , x_t )
Transition predictor : \hat{z}_t \sim p_\phi( \hat{z}_t \mid h_t )
Image predictor : \hat{x}_t \sim p_\phi( \hat{x}_t \mid h_t , z_t )
Reward predictor : \hat{r}_t \sim p_\phi( \hat{r}_t \mid h_t , z_t )
Discount predictor : \hat{\gamma}_t \sim p_\phi( \hat{\gamma}_t \mid h_t , z_t ) . ( 1 )
All components are implemented as neural networks , and \phi denotes their combined parameter vector . The transition predictor guesses the next model state only from the current model state and the action , without using the next image , so that we can later learn behaviors by predicting sequences of model states without having to observe or generate images . The discount predictor lets us estimate the probability of an episode ending when learning behaviors from model predictions . Neural networks The representation model is implemented as a Convolutional Neural Network ( CNN ; LeCun et al. , 1989 ) followed by a Multi-Layer Perceptron ( MLP ) that receives the image embedding and the deterministic recurrent state . The RSSM uses a Gated Recurrent Unit ( GRU ; Cho et al. , 2014 ) to compute the deterministic recurrent states . The model state is the concatenation of the deterministic GRU state and a sample of the stochastic state . The image predictor is a transposed CNN , and the transition , reward , and discount predictors are MLPs . We down-scale the 84 × 84 grayscale images to 64 × 64 pixels so that we can apply the convolutional architecture of DreamerV1 .
Algorithm 1 : Straight-Through Gradients with Automatic Differentiation
sample = one_hot ( draw ( logits ) )            # sample has no gradient
probs = softmax ( logits )                      # want gradient of this
sample = sample + probs - stop_grad ( probs )   # has gradient of probs
We use the ELU activation function for all components of the model ( Clevert et al. , 2015 ) . The world model uses a total of 20M trainable parameters . Distributions The image predictor outputs the mean of a diagonal Gaussian likelihood with unit variance , the reward predictor outputs a univariate Gaussian with unit variance , and the discount predictor outputs a Bernoulli likelihood . In prior work , the latent variable in the model state was a diagonal Gaussian that used reparameterization gradients during backpropagation ( Kingma and Welling , 2013 ; Rezende et al. , 2014 ) . In DreamerV2 , we instead use a vector of several categorical variables and optimize them using straight-through gradients ( Bengio et al. , 2013 ) , which are easy to implement using automatic differentiation as shown in Algorithm 1 . We discuss possible benefits of categorical over Gaussian latents in the experiments section . Loss function All components of the world model are optimized jointly . The distributions produced by the image predictor , reward predictor , discount predictor , and transition predictor are trained to maximize the log-likelihood of their corresponding targets . The representation model is trained to produce model states that facilitate these prediction tasks , through the expectation below . Moreover , it is regularized to produce model states with high entropy , such that the model becomes robust to many different model states during training . The loss function for learning the world model is : L( \phi ) \doteq E_{ q_\phi( z_{1:T} \mid a_{1:T} , x_{1:T} ) } \Big[ \sum_{t=1}^{T} \underbrace{ - \ln p_\phi( x_t \mid h_t , z_t ) }_{ \text{image log loss} } \underbrace{ - \ln p_\phi( r_t \mid h_t , z_t ) }_{ \text{reward log loss} } \underbrace{ - \ln p_\phi( \gamma_t \mid h_t , z_t ) }_{ \text{discount log loss} } + \beta \underbrace{ \mathrm{KL}\big[ q_\phi( z_t \mid h_t , x_t ) \,\|\, p_\phi( z_t \mid h_t ) \big] }_{ \text{KL loss} } \Big] . ( 2 ) We jointly minimize the loss function with respect to the vector \phi that contains all parameters of the world model , using the Adam optimizer ( Kingma and Ba , 2014 ) . We scale the KL loss by \beta = 0.1 for Atari and by \beta = 1.0 for continuous control ( Higgins et al. , 2016 ) .
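Algorithm 1 above gives framework-agnostic pseudocode for the straight-through estimator . Below is a minimal concrete sketch of drawing the vector of categorical latents this way in PyTorch ; the ( batch , stoch , classes ) layout of the logits is an assumption for illustration .

import torch
import torch.nn.functional as F

def sample_straight_through(logits):
    # logits: (B, stoch, classes). Draw one-hot samples that carry softmax gradients.
    probs = torch.softmax(logits, dim=-1)
    index = torch.distributions.Categorical(probs=probs).sample()        # (B, stoch)
    one_hot = F.one_hot(index, num_classes=logits.shape[-1]).float()     # no gradient
    return one_hot + probs - probs.detach()                              # straight-through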
KL balancing The world model loss function in Equation 2 is the ELBO or variational free energy of a hidden Markov model that is conditioned on the action sequence . The world model can thus be interpreted as a sequential VAE , where the representation model is the approximate posterior and the transition predictor is the temporal prior . In the ELBO objective , the KL loss serves two purposes : it trains the prior toward the representations , and it regularizes the representations toward the prior . However , learning the transition function is difficult and we want to avoid regularizing the representations toward a poorly trained prior . To solve this problem , we minimize the KL loss faster with respect to the prior than the representations by using different learning rates , α = 0.8 for the prior and 1 − α for the approximate posterior . We implement this technique as shown in Algorithm 2 and refer to it as KL balancing . KL balancing encourages learning an accurate prior over increasing posterior entropy , so that the prior better approximates the aggregate posterior . KL balancing is different from and orthogonal to beta-VAEs ( Higgins et al. , 2016 ) . | The authors build on the Dreamer architecture, that is able to learn models of an environment, to build DreamerV2, which learns a model of an environment in latent space. The authors then train their agent in this latent space. DreamerV2 was evaluated on the Atari learning environment and results showed that it was comparable to Rainbow and better, under certain metrics. | SP:e5b6ac071882028ba6191098c718861340728918 |
Learning to Dynamically Select Between Reward Shaping Signals | 1 INTRODUCTION . Although numerous successes have been reported in reinforcement learning ( RL ) , it still suffers from several drawbacks that prevent it from performing to expectation in many real-life situations . One critical limitation is the sample complexity . In order to arrive at an acceptable solution , RL requires an enormous amount of experience ( i.e. , data ) before useful behaviors are learned . Reward shaping is one approach that seeks to address this problem , providing additional feedback in the form of shaping rewards to allow an RL agent to learn faster . Moreover , shaping rewards that follow a potential form preserve the guarantee that optimal solutions will be found despite the altered feedback ( Ng et al. , 1999 ) . However , until recently , most reward shaping signals and functions have been hand-engineered . This task is notoriously difficult , as even slight incorrectness can lead to local optima that do not solve the present problem ( Randløv & Alstrøm , 1998 ) . Automatic reward shaping eliminates the difficulty of shaping reward signal and function design by learning the shaping reward function that in turn enables optimal learning of the policy . Automatic reward shaping in itself is an extremely difficult problem to solve . In order to simplify the problem , we break down the idea into two sub-problems : ( 1 ) learning shaping reward signals , and ( 2 ) learning how to exploit shaping reward signals to provide an appropriate shaping reward at each state of the learning process . This paper focuses on the latter task , i.e. , ( 2 ) , which we refer to as " automatic reward adaptation " . Problem Definition : Given a set of shaping reward signals \vec{\phi} = ( \phi_1 , \ldots , \phi_n ) , learn to adapt these signals automatically to produce a single shaping reward F( s , a , s' ) \in R , computed from \vec{\phi} , for each state s , action a , next state s' tuple in the learning process . The full reward for any transition in the RL problem is R( s , a , s' ) = r( s , a , s' ) + F( s , a , s' ) , where r is the original reward of the RL problem before shaping . Our proposed approach to automatic reward adaptation is to learn to dynamically select the right shaping reward signal \phi_i from \vec{\phi} at each transition encountered in learning . In addition , our method learns using minimal infrastructure and with value-based gradients . We avoid the use of models and additional approximate value functions ( which previous approaches to automatic reward shaping typically rely on ) and perform updates using already-present values as feedback , with minimal additional computation . The proposed ideas have been verified through experiments in a variety of environments using different shaping reward paradigms . The basis of our shaping signal selection approach is rooted in how humans seem to react in realistic decision-making situations . Given a set of basic reward signals , say " comfort " and " self-preservation " , humans have the uncanny ability to identify when and how much one should listen to any given signal depending on the current situation . For example , in a lane keeping task , if we were near the edge of a lane with no cars around us , we would simply listen to the " comfort " reward signal telling us to move closer to the center of the lane .
However , if there was another car moving into the same lane at the same time , we would instead follow the ” self-preservation ” reward signal dictating that we stay as reasonably far away from other cars as possible . Both signals lead to correct performance but are applicable in entirely different situations , provide different information , and , most importantly , induce different behaviors . While we recognize that explicit selection risks not using available information in other shaping reward signals , we argue that it provides a guaranteed improvement over only the environment reward . In this sense , selection does not hinder the RL agent in learning even though it is incomplete relative to optimal reward shaping , which is extremely difficult to design . This work parallels the area of multi-objective RL ( MORL ) , which investigates how to perform RL in the presence of multiple rewards ( Roijers et al. , 2013 ) . However , there are a couple differences between our idea and previous work in automatic reward shaping and MORL . First , much research in automatic reward shaping focuses on the first sub-problem ( 1 ) described above , i.e. , learning the parameters of some shaping reward signal ( usually only one ) . However , realistic problems often involve a number of ( possibly conflicting ) goals or signals that can not be trivially summarized as a single signal or function . Our work focuses on the second sub-problem ( 2 ) and investigates if there are better ways to access provided shaping reward signals for more effective reward shaping . Second , while MORL also handles environments with multiple feedback signals , it aims to solve a different problem overall . MORL attempts to learn solutions that best optimize the multiple rewards presented to it , but our idea attempts to best exploit multiple rewards in a way that solves the single objective present in the problem . Furthermore , MORL typically learns a linear combination of signals as the final reward shaping function . While effective in many cases , a learned fixed combination does not always suit each individual state in learning , especially if the environment is dynamic . That being said , we do not discount the potential of combination in automatic reward adaptation.1 Our goal is simply to consider another approach with promising flexibility . 2 RELATED WORK . The fundamental theory behind reward shaping regarding its optimality-preserving guarantees was documented in Ng et al . ( 1999 ) . Since then , the guarantees of potential-based reward shaping have been expanded to time-varying potential options by Harutyunyan et al . ( 2015 ) , which more naturally reflects realistic problems and the concept of automatic reward shaping . One of the initial works in this area demonstrated impressive results but the shaping reward function was heavily engineered ( Laud & DeJong , 2002 ) . More recent research has made progress towards greater autonomy . The mechanisms driving automatic reward shaping typically fall into a few categories , with one of the originals involving the use of abstractions to learn values on a simpler version of the problem before using these values as shaping rewards ( Marthi , 2007 ) . Other projects have followed up this idea using both model-free and model-based RL methods ( Grześ & Kudenko , 2010 ) as well as with the estimated modeling of the reward function itself ( Marom & Rosman , 2018 ) . 
One particular work directly bootstraps the model-based learning in R-max into a shaping reward function ( Asmuth et al. , 2008 ) . Yet another builds a graph of the environment before using subgoals as a means of defining shaping rewards , which adjust as the graph is updated ( Marashi et al. , 2012 ) . Credit assignment is another form of automatic reward shaping . It injects information into previous experiences such that learning is enhanced when replay occurs . Song & Jin ( 2011 ) implemented this by identifying critical states and using these landmarks as sub-rewards to make learning easier . De Villiers & Sabatta ( 2020 ) directly augmented the replay buffer with propagated rewards to make the reward surface more dense and accelerate learning . Zheng et al . ( 2018 ) integrated the learning of an intrinsic motivation signal along with the agent ’ s value and policy learning . While representative of automatic reward shaping , they only consider the single intrinsic motivation feedback signal when performing reward shaping . Adjacent to our focus is the area of multi-objective RL ( MORL ) . Brys et al . ( 2014 ) explored some connections between multi-objective frameworks and reward shaping , which Brys et al . ( 2017 ) later expanded on by directly characterizing shaping reward signals as multiple objectives and applying MORL to successfully solve such problems . Van Seijen et al . ( 2017 ) implemented a form of automatic reward adaptation by learning a separate value function for each reward signal and using a value estimated from each reward signal ’ s associated value function to update the policy . Fu et al . ( 2019 ) addressed the presence of multiple goals ( in the form of rewards ) by explicitly calculating the best weighting for each reward in a linear combination after some amount of experience . Tajmajer ( 2018 ) similarly used a linear combination to combine multiple rewards but simultaneously learned a set of decision values that acted as weights for each reward in the final feedback computation . As mentioned previously , our work differs from these methods primarily in that our method does not use a linear combination , uses less learning infrastructure , and aims to control reward shaping ( not the agent itself in the pursuit of multiple goals ) . Instead of learning the weights for rewards in solving a problem with multiple objectives , we dynamically select between shaping reward signals to use in learning at any given point . While inspired by concepts in MORL , this selection technique is , to our knowledge , new . 3 BACKGROUND . 3.1 REINFORCEMENT LEARNING . RL is a form of machine learning that specializes in learning to solve problems that involve unknown environments and require sequences of decisions . Such problems are often characterized as Markov Decision Processes ( MDPs ) , described by a tuple M = ( S , A , τ , R , γ ) . S represents the set of states , A is the set of possible actions , τ is the transition probability function τ : S × A × S → [ 0 , 1 ] , and R is the reward function R : S × A × S → R . γ is a parameter that describes the discount factor . MDPs operate generally through the following repeating set of steps : a state is observed , an action is taken , which transitions the agent into another state while a reward is given by the environment ; the transition and reward are then used to update the policy and the value function ( if used ) .
The goal of any given agent attempting to solve an MDP is to generate a policy through interactions with the environment that maps states to actions , π : S → A . This policy is optimized towards the objective of maximizing the expected sum of discounted rewards G = Eπ [ ∑ γr | S = s , A = a ] , or return . Q-learning is a well-established algorithm that follows the Q-value , or action-value , form of this objective by attempting to find the action-value function Q ( s , a ) = Eπ [ G | s , a ] ( Kröse , 1995 ) . Updates take the following form : Q ( s , a ) = Q ( s , a ) + α ( r + γmax a Q ( s′ , a ) −Q ( s , a ) ) ( 1 ) with α as the learning rate ( Sutton & Barto , 2018 ) . Typically , some form of encouraged exploration is implemented , most often by specifying a probability ( often decaying over time ) with which the agent must select a random action instead of the greedy one . | The paper presents an approach to select the best reward shaping potential signal out of multiple available shaping potentials. The main idea seems to be to select the shaping signal that minimizes the inverse of the difference of potentials between the next state and the current state. The experiments show the proposed approach works better than other baselines. | SP:5247942ad6db19ef5124aa562fd1d4186358c779 |
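The Q-learning update in equation (1) of the background section above is short enough to state as code. The sketch below assumes a Gym-style environment interface (env.reset() returning a state, env.step(a) returning (next_state, reward, done, info)); that interface is an assumption for illustration, not something specified in the text.

```python
import random
from collections import defaultdict

def q_learning(env, n_actions, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning with epsilon-greedy exploration (equation (1) above).

    Assumes a Gym-style environment: env.reset() -> state,
    env.step(a) -> (next_state, reward, done, info).
    """
    Q = defaultdict(lambda: [0.0] * n_actions)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s_next, r, done, _ = env.step(a)
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            td_target = r + (0.0 if done else gamma * max(Q[s_next]))
            Q[s][a] += alpha * (td_target - Q[s][a])
            s = s_next
    return Q
```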
Learning to Dynamically Select Between Reward Shaping Signals | 1 INTRODUCTION . Although numerous successes have been reported in reinforcement learning ( RL ) , it still suffers from several drawbacks that prevent it from performing to expectation in many real-life situations . One critical limitation is the sample complexity . In order to arrive at an acceptable solution , RL requires an enormous amount of experience ( i.e. , data ) before useful behaviors are learned . Reward shaping is one approach that seeks to address this problem , providing additional feedback in the form of shaping rewards to allow an RL agent to learn faster . Moreover , shaping rewards that follow a potential form preserve guarantees that optimal solutions will be found despite the altered feedback ( Ng et al. , 1999 ) . However , until recently , most reward shaping signals and functions have been hand-engineered . This task is notoriously difficult , as even slight incorrectness can lead to local optima that do not solve the present problem ( Randløv & Alstrøm , 1998 ) . Automatic reward shaping eliminates the difficulty of shaping reward signal and function design by learning the shaping reward function that in turn enables optimal learning of the policy . Automatic reward shaping in itself is an extremely difficult problem to solve . In order to simplify the problem , we break down the idea into two sub-problems : ( 1 ) learning shaping reward signals , and ( 2 ) learning how to exploit shaping reward signals to provide an appropriate shaping reward at each state of the learning process . This paper focuses on the latter task , i.e. , ( 2 ) , which we refer to as ” automatic reward adaptation ” . Problem Definition : Given a set of shaping reward signals ~φ = φ1 , . . . , φn , learn to adapt these signals automatically to produce a single shaping reward F ( s , a , s′ ) : ~φ→ R for each state s , action a , next state s′ tuple in the learning process . The full reward for any transition in the RL problem is R ( s , a , s′ ) = r ( s , a , s′ ) + F ( s , a , s′ ) , where r is the original reward of the RL problem before shaping . Our proposed approach to automatic reward adaptation is to learn to dynamically select the right shaping reward signal φi from ~φ at each transition encountered in learning . In addition , our method learns using minimal infrastructure and with value-based gradients . We avoid the use of models and additional approximate value functions ( which previous approaches to automatic reward shaping typically rely on ) and perform updates using already-present values as feedback and with minimal additional computation . The proposed ideas have been verified through experiments in a variety of environments using different shaping reward paradigms . The basis of our shaping signal selection approach is rooted in how humans seem to react in realistic decision-making situations . Given a set of basic reward signals , say ” comfort ” and ” selfpreservation ” , humans have the uncanny ability to identify when and how much one should listen to any given signal depending on the current situation . For example , in a lane keeping task , if we were near the edge of a lane with no cars around us , we would simply listen to the ” comfort ” reward signal telling us to move closer to the center of the lane . 
However , if there was another car moving into the same lane at the same time , we would instead follow the ” self-preservation ” reward signal dictating that we stay as reasonably far away from other cars as possible . Both signals lead to correct performance but are applicable in entirely different situations , provide different information , and , most importantly , induce different behaviors . While we recognize that explicit selection risks not using available information in other shaping reward signals , we argue that it provides a guaranteed improvement over only the environment reward . In this sense , selection does not hinder the RL agent in learning even though it is incomplete relative to optimal reward shaping , which is extremely difficult to design . This work parallels the area of multi-objective RL ( MORL ) , which investigates how to perform RL in the presence of multiple rewards ( Roijers et al. , 2013 ) . However , there are a couple differences between our idea and previous work in automatic reward shaping and MORL . First , much research in automatic reward shaping focuses on the first sub-problem ( 1 ) described above , i.e. , learning the parameters of some shaping reward signal ( usually only one ) . However , realistic problems often involve a number of ( possibly conflicting ) goals or signals that can not be trivially summarized as a single signal or function . Our work focuses on the second sub-problem ( 2 ) and investigates if there are better ways to access provided shaping reward signals for more effective reward shaping . Second , while MORL also handles environments with multiple feedback signals , it aims to solve a different problem overall . MORL attempts to learn solutions that best optimize the multiple rewards presented to it , but our idea attempts to best exploit multiple rewards in a way that solves the single objective present in the problem . Furthermore , MORL typically learns a linear combination of signals as the final reward shaping function . While effective in many cases , a learned fixed combination does not always suit each individual state in learning , especially if the environment is dynamic . That being said , we do not discount the potential of combination in automatic reward adaptation.1 Our goal is simply to consider another approach with promising flexibility . 2 RELATED WORK . The fundamental theory behind reward shaping regarding its optimality-preserving guarantees was documented in Ng et al . ( 1999 ) . Since then , the guarantees of potential-based reward shaping have been expanded to time-varying potential options by Harutyunyan et al . ( 2015 ) , which more naturally reflects realistic problems and the concept of automatic reward shaping . One of the initial works in this area demonstrated impressive results but the shaping reward function was heavily engineered ( Laud & DeJong , 2002 ) . More recent research has made progress towards greater autonomy . The mechanisms driving automatic reward shaping typically fall into a few categories , with one of the originals involving the use of abstractions to learn values on a simpler version of the problem before using these values as shaping rewards ( Marthi , 2007 ) . Other projects have followed up this idea using both model-free and model-based RL methods ( Grześ & Kudenko , 2010 ) as well as with the estimated modeling of the reward function itself ( Marom & Rosman , 2018 ) . 
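As a concrete illustration of the selection idea motivated above, the sketch below produces a single shaping reward F(s, a, s') by following exactly one of the available potential-based signals per transition and adding it to the environment reward, matching the problem definition R = r + F. The selection rule shown is a deliberate placeholder — the paper learns this choice — and all names are illustrative assumptions.

```python
GAMMA = 0.99

def shaping_from_potential(phi, s, s_next, gamma=GAMMA):
    # Potential-based shaping term for a single signal phi.
    return gamma * phi(s_next) - phi(s)

def select_index(potentials, s, a, s_next):
    # Placeholder selection rule; the paper *learns* which signal to follow
    # at each transition. Here we simply pick the signal with the largest
    # shaping magnitude, purely for illustration.
    terms = [abs(shaping_from_potential(phi, s, s_next)) for phi in potentials]
    return max(range(len(terms)), key=lambda i: terms[i])

def adapted_reward(r, potentials, s, a, s_next):
    # Full reward R(s,a,s') = r(s,a,s') + F(s,a,s'), with F taken from
    # exactly one of the available shaping signals.
    i = select_index(potentials, s, a, s_next)
    return r + shaping_from_potential(potentials[i], s, s_next)
```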
One particular work directly bootstraps the model-based learning in R-max into a shaping reward function ( Asmuth et al. , 2008 ) . Yet another builds a graph of the environment before using subgoals as a means of defining shaping rewards which adjust as the graph is updated ( Marashi et al. , 2012 ) . Credit assignment is another form of automatic reward shaping . It injects information into previous experiences such that learning is enhanced when replay occurs . Song & Jin ( 2011 ) implemented this by identifying critical states and using these landmarks as sub-rewards to make learning easier . De Villiers & 1We will study flexible combination of individual shaping signals in our future work . Sabatta ( 2020 ) directly augmented the replay buffer with propagated rewards to make the reward surface more dense and accelerate learning . Zheng et al . ( 2018 ) integrated learning of an intrinsic motivation signal along with the agent ’ s value and policy learning . While representative of automatic reward shaping , they only consider the single intrinsic motivation feedback signal when performing reward shaping . Adjacent to our focus is the area of multi-objective RL ( MORL ) . Brys et al . ( 2014 ) explored some connections between multi-objective frameworks and reward shaping which Brys et al . ( 2017 ) later expanded on by directly characterizing shaping reward signals as multiple objectives and applying MORL to successfully solve such problems . Van Seijen et al . ( 2017 ) implemented a form of automatic reward adaptation by learning a separate value function for each reward signal and using a value estimated from each reward signal ’ s associated value to update the policy . Fu et al . ( 2019 ) addressed the presence of multiple goals ( in the form of rewards ) by explicitly calculating the best weighting for each reward in a linear combination after some amount of experience . Tajmajer ( 2018 ) similarly used a linear combination to combine multiple rewards but simultaneously learned a set of decision values that acted as weights for each reward in the final feedback computation . As mentioned previously , our work differs from these methods primarily in that our method does not use linear combination , uses less learning infrastructure , and aims to control reward shaping ( not the agent itself in the pursuit of multiple goals ) . Instead of learning the weights for rewards in solving a problem with multiple objectives , we dynamically select between shaping reward signals to use in learning at any given point . While inspired by concepts in MORL , this selection technique is , to our knowledge , new . 3 BACKGROUND . 3.1 REINFORCEMENT LEARNING . RL is a form of machine learning that specializes in learning to solve problems that involve unknown environments and require sequences of decisions . Such problems are often characterized as Markov Decision Processes ( MDPs ) , and described by a tuple M = ( S , A , τ , R , γ ) . S represents the set of states , A is the set of possible actions , τ is the transition probability function τ : S x A → S , and R is the reward function R : S x A x S → R. γ is a parameter that describes the discount factor . MDPs operate generally through the following repeating set of steps : a state is observed , an action is taken which transitions the agent into another state while a reward is given by the environment , then the transition and reward are used to update the policy and the value function ( if used ) . 
The goal of any given agent attempting to solve an MDP is to generate a policy through interactions with the environment that maps states to actions , π : S → A . This policy is optimized towards the objective of maximizing the expected sum of discounted rewards G = Eπ [ ∑ γr | S = s , A = a ] , or return . Q-learning is a well-established algorithm that follows the Q-value , or action-value , form of this objective by attempting to find the action-value function Q ( s , a ) = Eπ [ G | s , a ] ( Kröse , 1995 ) . Updates take the following form : Q ( s , a ) = Q ( s , a ) + α ( r + γmax a Q ( s′ , a ) −Q ( s , a ) ) ( 1 ) with α as the learning rate ( Sutton & Barto , 2018 ) . Typically , some form of encouraged exploration is implemented , most often by specifying a probability ( often decaying over time ) with which the agent must select a random action instead of the greedy one . | The idea of potential-based reward shaping (PBRS) is to improve the performance of learning agents by incorporating additional domain knowledge into their reward function (making it dense), while maintaining the same asymptotic convergence guarantees. However, one aspect of PBRS, that is introduced in this work, is how to select among multiple shaping signals in an adaptive manner that is also dependent on the current context, in a way that can overall improve the performance of the agent. A method is proposed here that learns to select among two or more shaping functions based on experience, represented as the difference between the TD error (or surprise) and the shaped reward signal, the conjecture being that signals that are closer to the TD error can lead to the best improvement in the agent's performance. The signal with the smallest difference is used as a self-supervised label to train a classifier that selects the signal. Experiments show that the method is at par with other similar approaches while being simpler to implement. | SP:5247942ad6db19ef5124aa562fd1d4186358c779 |
Fantastic Four: Differentiable and Efficient Bounds on Singular Values of Convolution Layers | 1 INTRODUCTION . Bounding singular values of different layers of a neural network is a way to control the complexity of the model and has been used in different problems including robustness , generalization , optimization , generative modeling , etc . In particular , the spectral norm ( the maximum singular value ) of a layer bounds the factor by which the norm of the signal increases or decreases during both forward and backward propagation within that layer . If all singular values are all close to one , then the gradients neither explode nor vanish ( Hochreiter , 1991 ; Hochreiter et al. , 2001 ; Klambauer et al. , 2017 ; Xiao et al. , 2018 ) . Spectral norm regularizations/bounds have been used in improving the generalization ( Bartlett et al. , 2017 ; Long & Sedghi , 2020 ) , in training deep generative models ( Arjovsky et al. , 2017 ; Gulrajani et al. , 2017 ; Tolstikhin et al. , 2018 ; Miyato et al. , 2018 ; Hoogeboom et al. , 2020 ) and in robustifying models against adversarial attacks ( Singla & Feizi , 2020 ; Szegedy et al. , 2014 ; Peck et al. , 2017 ; Zhang et al. , 2018 ; Anil et al. , 2018 ; Hein & Andriushchenko , 2017 ; Cisse et al. , 2017 ) . These applications have resulted in multiple works to regularize neural networks by penalizing the spectral norm of the network layers ( Drucker & Le Cun , 1992 ; Yoshida & Miyato , 2017 ; Miyato et al. , 2018 ; 2017 ; Sedghi et al. , 2019 ; Singla & Feizi , 2020 ) . For a fully connected layer with weights W and bias b , the lipschitz constant is given by the spectral norm of the weight matrix i.e , ∥W∥2 , which can be computed efficiently using the power iteration method ( Golub & Van Loan , 1996 ) . In particular , if the matrix W is of size p × q , the computational complexity of power iteration ( assuming convergence in constant number of steps ) is O ( pq ) . Convolution layers ( Lecun et al. , 1998 ) are one of the key components of modern neural networks , particularly in computer vision ( Krizhevsky et al. , 2012 ) . Consider a convolution filter L of size cout × cin × h ×w where cout , cin , h and w denote the number of output channels , input channels , height and width of the filter respectively ; and a square input sample of size cin ×n×n where n is its height and width . A naive representation of the Jacobian of this layer will result in a matrix of size n2cout × n 2cin . For a typical convolution layer with the filter size 64 × 3 × 7 × 7 and an ImageNet sized input 3 × 224 × 224 ( Krizhevsky et al. , 2012 ) , the corresponding jacobian matrix has a very large size : 802816 × 150528 . This makes an explicit computation of the jacobian infeasible . Ryu et al . ( 2019 ) provide a way to compute the spectral norm of convolution layers using convolution and transposed convolution operations in power iteration , thereby avoiding this explicit computation . This leads to an improved running time especially when the number of input/output channels is small ( Table 1 ) . However , in addition to the running time , there is an additional difficulty in the approach proposed in Ryu et al . ( 2019 ) ( and other existing approaches described later ) regarding the computation of the spectral norm gradient ( often used as a regularization during the training ) . The gradient of the largest singular value with respect to the jacobian can be naively computed by taking the outer product of corresponding singular vectors . 
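The power-iteration routine referred to above for a dense p × q weight matrix (O(pq) per iteration) can be sketched as follows; this is a generic implementation, not code from the cited works.

```python
import numpy as np

def spectral_norm(W, n_iters=50, eps=1e-12):
    """Largest singular value of a p x q matrix via power iteration.

    Each iteration costs O(pq); a handful of iterations usually suffices
    when the leading singular value is well separated.
    """
    p, q = W.shape
    v = np.random.randn(q)
    v /= np.linalg.norm(v) + eps
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u) + eps
        v = W.T @ u
        v /= np.linalg.norm(v) + eps
    # u and v approximate the leading singular vectors, so u^T W v ~ sigma_max.
    return float(u @ (W @ v))
```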
However , due to the special structure of the convolution operation , the jacobian will be a sparse matrix with repeated elements ( see Appendix Section D for details ) . The naive computation of the gradient will result in non-zero gradient values at elements that should be in fact zeros throughout training and also will assign different gradient values at elements that should always be identical . These issues make the gradient computation of the spectral norm with respect to the convolution filter weights using the technique of Ryu et al . ( 2019 ) difficult . Recently , Sedghi et al . ( 2019 ) provided a principled approach for exactly computing the singular values of convolution layers . They construct n2 matrices each of size cout × cin by taking the Fourier transform of the convolution filter ( details in Appendix Section B ) . The set of singular values of the jacobian equals the union of singular values of these n2 matrices . However , this method can have high computational complexity since it requires SVD of n2 matrices . Although this method can be adapted to compute the spectral norm of n2 matrices using power iteration ( in parallel with a GPU implementation ) instead of full SVD , the intrinsic computational complexity ( discussed in Table 2 ) can make it difficult to use this approach for very deep networks and large input sizes especially when computational resources are limited . Moreover , computing the gradient of the spectral norm using this method is not straightforward since each of these n2 matrices contain complex numbers . Thus , Sedghi et al . ( 2019 ) suggests to clip the singular values if they are above a certain threshold to bound the spectral norm of the layer . In order to reduce the training overhead , they clip the singular values only after every 100 iterations . The resulting method reduces the training overhead but is still costly for large input sizes and very deep networks . We report the running time of this method in Table 1 and its training time for one epoch ( using 1 GPU implementation ) in Table 4c . Because of the aforementioned issues , efficient methods to control the spectral norm of convolution layers have resorted to heuristics ( Yoshida & Miyato , 2017 ; Miyato et al. , 2018 ; Gouk et al. , 2018 ) . Typically , these methods reshape the convolution filter of dimensions cout × cin × h ×w to construct a matrix of dimensions cout × hwcin , and use the spectral norm of this matrix as an estimate of the spectral norm of the convolution layer . To regularize during training , they use the outer product of the corresponding singular vectors as the gradient of the largest singular value with respect to the reshaped matrix . Since the weights do not change significantly during each training step , they use only one iteration of power method during each step to update the singular values and vectors ( using the singular vectors computed in the previous step ) . These methods result in negligible overhead during the training . However , due to lack of theoretical justifications ( which we resolve in this work ) , they are not guaranteed to work for all different shapes and weights of the convolution filter . Previous studies have observed under estimation of the spectral norm using these heuristics ( Jiang et al. , 2019 ) . On one hand , there are computationally efficient but heuristic ways of computing and bounding the spectral norm of convolutional layers ( Miyato et al. , 2017 ; 2018 ) . 
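The construction of Sedghi et al. (2019) described above — n² matrices of size c_out × c_in obtained from the Fourier transform of the filter — can be sketched directly. The snippet assumes a stride-1 circular (wrap-around) convolution on an n × n input, the setting in which the equivalence is exact.

```python
import numpy as np

def conv_singular_values(filt, n):
    """Singular values of the Jacobian of a stride-1 circular convolution.

    filt: array of shape (c_out, c_in, h, w); n: input height/width (n >= h, w).
    Returns all n * n * min(c_out, c_in) singular values (Sedghi et al., 2019).
    """
    c_out, c_in, h, w = filt.shape
    padded = np.zeros((c_out, c_in, n, n), dtype=complex)
    padded[:, :, :h, :w] = filt
    # 2-D FFT over the spatial dimensions.
    transformed = np.fft.fft2(padded, axes=(-2, -1))
    # Each spatial frequency (u, v) contributes a c_out x c_in matrix.
    svals = []
    for u in range(n):
        for v in range(n):
            svals.append(np.linalg.svd(transformed[:, :, u, v], compute_uv=False))
    return np.concatenate(svals)

# The spectral norm of the layer is then conv_singular_values(filt, n).max().
```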
On the other hand , the exact computation of the spectral norm of convolutional layers proposed by Sedghi et al . ( 2019 ) ; Ryu et al . ( 2019 ) can be expensive for commonly used architectures especially with large inputs such as ImageNet samples . Moreover , the difficulty in computing the gradient of the spectral norm with respect to the jacobian under these methods make their use as regularization during the training process challenging . In this paper , we resolve these issues by deriving a differentiable and efficient upper bound on the spectral norm of convolutional layers . Our bound is provable and not based on heuristics . Our computational complexity is similar to that of heuristics ( Miyato et al. , 2017 ; 2018 ) allowing our bound to be used as a regularizer for efficiently training deep convolutional networks . In this way , our proposed approach combines the benefits of the speed of the heuristics and the theoretical rigor of Sedghi et al . ( 2019 ) . Table 2 summarizes the differences between previous works and our approach . In Table 1 , we empirically observe that our bound can be computed in a time significantly faster than Sedghi et al . ( 2019 ) ; Ryu et al . ( 2019 ) , while providing a guaranteed upper bound on the spectral norm . Moreover , we empirically observe that our upper bound and the exact value are close to each other ( Section 3.1 ) . Below , we briefly explain our main result . Consider a convolution filter L of dimensions cout × cin × h ×w and input of size cin × n × n. The corresponding jacobian matrix J is of size n2cout × n2cin . We show that the largest singular value of the jacobian ( i.e . ∥J∥2 ) is bounded as : ∥J∥2 ≤ √ hwmin ( ∥R∥2 , ∥S∥2 , ∥T∥2 , ∥U∥2 ) , where R , S , T and U are matrices of sizes hcout×wcin , wcout×hcin , cout×hwcin and hwcout×cin respectively , and can be computed by appropriately reshaping the filter L ( details in Section 3 ) . This upper bound is independent of the input width and height ( n ) . Formal results are stated in Theorem 1 and proved in the appendix . Remarkably , ∥T∥2 is the heuristic suggested by Miyato et al . ( 2018 ) . To the best of our knowledge , this is the first work that derives a provable bound on the spectral norm of convolution filter as a constant factor ( dependant on filter sizes , but not filter weights ) times the heuristic of Miyato et al . ( 2018 ) . In Tables 1 and 3 , we show that the other 3 bounds ( using ∥R∥2 , ∥S∥2 , ∥U∥2 ) can be significantly smaller than √ hw∥T∥2 for different convolution filters . Thus , we take the minimum of these 4 quantities to bound the spectral norm of a convolution filter . In Section 4 , we show that our bound can be used to improve the generalization and robustness properties of neural networks . Specifically , we show that using our bound as a regularizer during training , we can achieve improvement in accuracy on par with exact method ( Sedghi et al. , 2019 ) while being significantly faster to train ( Table 4 ) . We also achieve significantly higher robustness certificates against adversarial attacks than CNN-Cert ( Boopathy et al. , 2018 ) on a single layer CNN ( Table 5 ) . These results demonstrate potentials for practical uses of our results . Code is available at the github repository : https : //github.com/singlasahil14/fantastic-four . 2 NOTATION . For a vector v , we use vj to denote the element in the jth position of the vector . We use Aj , ∶ and A∶ , k to denote the jth row and kth column of the matrix A respectively . 
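The main bound above only requires forming four reshapes of the filter and taking the minimum of their spectral norms. The sketch below produces matrices of the stated sizes (hc_out × wc_in, wc_out × hc_in, c_out × hwc_in, hwc_out × c_in) using one plausible axis ordering; the paper's exact reshaping is given in its Section 3, so the orderings here should be read as assumptions.

```python
import numpy as np

def spec(M):
    # Largest singular value of a matrix.
    return np.linalg.svd(M, compute_uv=False)[0]

def conv_spectral_bound(filt):
    """Upper bound on the spectral norm of a conv layer's Jacobian.

    filt has shape (c_out, c_in, h, w). Returns
    sqrt(h*w) * min(||R||_2, ||S||_2, ||T||_2, ||U||_2),
    with R, S, T, U of the sizes stated in the text (the axis ordering
    used to build them is an assumption, not the paper's exact construction).
    """
    c_out, c_in, h, w = filt.shape
    R = filt.transpose(2, 0, 3, 1).reshape(h * c_out, w * c_in)   # (h*c_out, w*c_in)
    S = filt.transpose(3, 0, 2, 1).reshape(w * c_out, h * c_in)   # (w*c_out, h*c_in)
    T = filt.reshape(c_out, h * w * c_in)                         # Miyato et al. reshape
    U = filt.transpose(2, 3, 0, 1).reshape(h * w * c_out, c_in)   # (h*w*c_out, c_in)
    return np.sqrt(h * w) * min(spec(R), spec(S), spec(T), spec(U))
```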
We assume both Aj , ∶ , A∶ , k to be column vectors ( thus Aj , ∶ is the transpose of jth row of A ) . Aj , k denotes the element in jth row and kth column of A . The same rules can be directly extended to higher order tensors . For a matrix A ∈ Rq×r and a tensor B ∈ Rp×q×r , vec ( A ) denotes the vector constructed by stacking the rows of A and vec ( B ) denotes the vector constructed by stacking the vectors vec ( Bj , ∶ , ∶ ) , j ∈ [ p−1 ] : vec ( A ) T = [ AT0 , ∶ , A T 1 , ∶ , ⋯ , A T q−1 , ∶ ] vec ( B ) T = [ BT0 , ∶ , ∶ , B T 1 , ∶ , ∶ , ⋯ , B T p−1 , ∶ , ∶ ] We use the following notation for a convolutional neural network . L denotes the number of layers and φ is the activation function . For an input x , we use z ( I ) ( x ) ∈RNI and a ( I ) ( x ) ∈RNI to denote the raw ( before applying φ ) and activated ( after applying φ ) neurons in the Ith hidden layer respectively . a ( 0 ) denotes the input image x . To simplify notation and when no confusion arises , we make the dependency of z ( I ) and a ( I ) to x implicit . φ ′ ( z ( I ) ) and φ ′′ ( z ( I ) ) denotes the elementwise first and second derivatives of φ at z ( 1 ) . W ( I ) denotes the weights for the Ith layer i.e W ( I ) will be a tensor for a convolution layer and a matrix for a fully connected layer . J ( I ) denotes the jacobian matrix of vec ( z ( 1 ) ) with respect to the input x. θ denotes the neural network parameters . fθ ( x ) denotes the softmax probabilities output by the network for an input x . For an input x and label y , the cross entropy loss is denoted by ` ( fθ ( x ) , y ) . | This paper provides an method for computing an upper bound for the spectral norm of the linear transformation induced by a convolutional layer. An upper bound was first introduced as a heuristic by Miyato et al, but they did not prove any bounds. The authors use the exact computation of singular values of a convolutional layer in Sedghi, Gupta and Long to prove that the Miyato heuristic is indeed an upper bound. They further generalize Miyato's method to find 3 additional heuristics, all of which are proved to be upper bounds, and then show empirically that the minimum of these bounds gives a much tighter bound, often very close to the exact value. The bounds are significantly faster to compute than exact spectral norms, both in complexity and in practice. | SP:5c782fa6fa0245b510c2aa33ea661b2f8cf09062 |
Fantastic Four: Differentiable and Efficient Bounds on Singular Values of Convolution Layers | 1 INTRODUCTION . Bounding singular values of different layers of a neural network is a way to control the complexity of the model and has been used in different problems including robustness , generalization , optimization , generative modeling , etc . In particular , the spectral norm ( the maximum singular value ) of a layer bounds the factor by which the norm of the signal increases or decreases during both forward and backward propagation within that layer . If all singular values are all close to one , then the gradients neither explode nor vanish ( Hochreiter , 1991 ; Hochreiter et al. , 2001 ; Klambauer et al. , 2017 ; Xiao et al. , 2018 ) . Spectral norm regularizations/bounds have been used in improving the generalization ( Bartlett et al. , 2017 ; Long & Sedghi , 2020 ) , in training deep generative models ( Arjovsky et al. , 2017 ; Gulrajani et al. , 2017 ; Tolstikhin et al. , 2018 ; Miyato et al. , 2018 ; Hoogeboom et al. , 2020 ) and in robustifying models against adversarial attacks ( Singla & Feizi , 2020 ; Szegedy et al. , 2014 ; Peck et al. , 2017 ; Zhang et al. , 2018 ; Anil et al. , 2018 ; Hein & Andriushchenko , 2017 ; Cisse et al. , 2017 ) . These applications have resulted in multiple works to regularize neural networks by penalizing the spectral norm of the network layers ( Drucker & Le Cun , 1992 ; Yoshida & Miyato , 2017 ; Miyato et al. , 2018 ; 2017 ; Sedghi et al. , 2019 ; Singla & Feizi , 2020 ) . For a fully connected layer with weights W and bias b , the lipschitz constant is given by the spectral norm of the weight matrix i.e , ∥W∥2 , which can be computed efficiently using the power iteration method ( Golub & Van Loan , 1996 ) . In particular , if the matrix W is of size p × q , the computational complexity of power iteration ( assuming convergence in constant number of steps ) is O ( pq ) . Convolution layers ( Lecun et al. , 1998 ) are one of the key components of modern neural networks , particularly in computer vision ( Krizhevsky et al. , 2012 ) . Consider a convolution filter L of size cout × cin × h ×w where cout , cin , h and w denote the number of output channels , input channels , height and width of the filter respectively ; and a square input sample of size cin ×n×n where n is its height and width . A naive representation of the Jacobian of this layer will result in a matrix of size n2cout × n 2cin . For a typical convolution layer with the filter size 64 × 3 × 7 × 7 and an ImageNet sized input 3 × 224 × 224 ( Krizhevsky et al. , 2012 ) , the corresponding jacobian matrix has a very large size : 802816 × 150528 . This makes an explicit computation of the jacobian infeasible . Ryu et al . ( 2019 ) provide a way to compute the spectral norm of convolution layers using convolution and transposed convolution operations in power iteration , thereby avoiding this explicit computation . This leads to an improved running time especially when the number of input/output channels is small ( Table 1 ) . However , in addition to the running time , there is an additional difficulty in the approach proposed in Ryu et al . ( 2019 ) ( and other existing approaches described later ) regarding the computation of the spectral norm gradient ( often used as a regularization during the training ) . The gradient of the largest singular value with respect to the jacobian can be naively computed by taking the outer product of corresponding singular vectors . 
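A sketch of the convolution-based power iteration attributed above to Ryu et al. (2019): rather than materialising the n²c_out × n²c_in Jacobian, the layer and its adjoint are applied directly. It assumes stride 1 and zero padding pad, for which conv_transpose2d with the same padding acts as the adjoint of conv2d; this is an illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def conv_spectral_norm(weight, in_shape, pad=1, n_iters=50):
    """Approximate spectral norm of a stride-1 conv layer via power iteration.

    weight: (c_out, c_in, h, w) filter; in_shape: (c_in, n, n) input size.
    conv_transpose2d with the same padding plays the role of J^T here.
    """
    x = torch.randn(1, *in_shape)
    x = x / x.norm()
    with torch.no_grad():
        for _ in range(n_iters):
            y = F.conv2d(x, weight, padding=pad)          # J v
            y = y / (y.norm() + 1e-12)
            x = F.conv_transpose2d(y, weight, padding=pad)  # J^T u
            x = x / (x.norm() + 1e-12)
        sigma = F.conv2d(x, weight, padding=pad).norm()
    return sigma.item()
```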
However , due to the special structure of the convolution operation , the jacobian will be a sparse matrix with repeated elements ( see Appendix Section D for details ) . The naive computation of the gradient will result in non-zero gradient values at elements that should be in fact zeros throughout training and also will assign different gradient values at elements that should always be identical . These issues make the gradient computation of the spectral norm with respect to the convolution filter weights using the technique of Ryu et al . ( 2019 ) difficult . Recently , Sedghi et al . ( 2019 ) provided a principled approach for exactly computing the singular values of convolution layers . They construct n2 matrices each of size cout × cin by taking the Fourier transform of the convolution filter ( details in Appendix Section B ) . The set of singular values of the jacobian equals the union of singular values of these n2 matrices . However , this method can have high computational complexity since it requires SVD of n2 matrices . Although this method can be adapted to compute the spectral norm of n2 matrices using power iteration ( in parallel with a GPU implementation ) instead of full SVD , the intrinsic computational complexity ( discussed in Table 2 ) can make it difficult to use this approach for very deep networks and large input sizes especially when computational resources are limited . Moreover , computing the gradient of the spectral norm using this method is not straightforward since each of these n2 matrices contain complex numbers . Thus , Sedghi et al . ( 2019 ) suggests to clip the singular values if they are above a certain threshold to bound the spectral norm of the layer . In order to reduce the training overhead , they clip the singular values only after every 100 iterations . The resulting method reduces the training overhead but is still costly for large input sizes and very deep networks . We report the running time of this method in Table 1 and its training time for one epoch ( using 1 GPU implementation ) in Table 4c . Because of the aforementioned issues , efficient methods to control the spectral norm of convolution layers have resorted to heuristics ( Yoshida & Miyato , 2017 ; Miyato et al. , 2018 ; Gouk et al. , 2018 ) . Typically , these methods reshape the convolution filter of dimensions cout × cin × h ×w to construct a matrix of dimensions cout × hwcin , and use the spectral norm of this matrix as an estimate of the spectral norm of the convolution layer . To regularize during training , they use the outer product of the corresponding singular vectors as the gradient of the largest singular value with respect to the reshaped matrix . Since the weights do not change significantly during each training step , they use only one iteration of power method during each step to update the singular values and vectors ( using the singular vectors computed in the previous step ) . These methods result in negligible overhead during the training . However , due to lack of theoretical justifications ( which we resolve in this work ) , they are not guaranteed to work for all different shapes and weights of the convolution filter . Previous studies have observed under estimation of the spectral norm using these heuristics ( Jiang et al. , 2019 ) . On one hand , there are computationally efficient but heuristic ways of computing and bounding the spectral norm of convolutional layers ( Miyato et al. , 2017 ; 2018 ) . 
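One way the differentiable bound advocated in this paper can be used during training is as a simple additive penalty on every convolution filter. The sketch below uses only the Miyato-style reshape term scaled by √(hw) (which, per the result above, is itself a valid upper bound) for brevity; the tighter bound would take the minimum over all four reshapes, and the training-loop names are hypothetical.

```python
import torch

def conv_bound(weight):
    # Differentiable upper bound on the layer's spectral norm.
    # Only the Miyato-style reshape is used here for brevity; the tighter
    # bound takes the min over the four filter reshapes.
    c_out, c_in, h, w = weight.shape
    T = weight.reshape(c_out, -1)
    return (h * w) ** 0.5 * torch.linalg.svdvals(T)[0]

def regularized_loss(model, loss_fn, x, y, lam=1e-3):
    # Task loss plus a spectral-norm penalty on every convolution filter.
    loss = loss_fn(model(x), y)
    penalty = sum(conv_bound(m.weight)
                  for m in model.modules()
                  if isinstance(m, torch.nn.Conv2d))
    return loss + lam * penalty
```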
On the other hand , the exact computation of the spectral norm of convolutional layers proposed by Sedghi et al . ( 2019 ) ; Ryu et al . ( 2019 ) can be expensive for commonly used architectures especially with large inputs such as ImageNet samples . Moreover , the difficulty in computing the gradient of the spectral norm with respect to the jacobian under these methods make their use as regularization during the training process challenging . In this paper , we resolve these issues by deriving a differentiable and efficient upper bound on the spectral norm of convolutional layers . Our bound is provable and not based on heuristics . Our computational complexity is similar to that of heuristics ( Miyato et al. , 2017 ; 2018 ) allowing our bound to be used as a regularizer for efficiently training deep convolutional networks . In this way , our proposed approach combines the benefits of the speed of the heuristics and the theoretical rigor of Sedghi et al . ( 2019 ) . Table 2 summarizes the differences between previous works and our approach . In Table 1 , we empirically observe that our bound can be computed in a time significantly faster than Sedghi et al . ( 2019 ) ; Ryu et al . ( 2019 ) , while providing a guaranteed upper bound on the spectral norm . Moreover , we empirically observe that our upper bound and the exact value are close to each other ( Section 3.1 ) . Below , we briefly explain our main result . Consider a convolution filter L of dimensions cout × cin × h ×w and input of size cin × n × n. The corresponding jacobian matrix J is of size n2cout × n2cin . We show that the largest singular value of the jacobian ( i.e . ∥J∥2 ) is bounded as : ∥J∥2 ≤ √ hwmin ( ∥R∥2 , ∥S∥2 , ∥T∥2 , ∥U∥2 ) , where R , S , T and U are matrices of sizes hcout×wcin , wcout×hcin , cout×hwcin and hwcout×cin respectively , and can be computed by appropriately reshaping the filter L ( details in Section 3 ) . This upper bound is independent of the input width and height ( n ) . Formal results are stated in Theorem 1 and proved in the appendix . Remarkably , ∥T∥2 is the heuristic suggested by Miyato et al . ( 2018 ) . To the best of our knowledge , this is the first work that derives a provable bound on the spectral norm of convolution filter as a constant factor ( dependant on filter sizes , but not filter weights ) times the heuristic of Miyato et al . ( 2018 ) . In Tables 1 and 3 , we show that the other 3 bounds ( using ∥R∥2 , ∥S∥2 , ∥U∥2 ) can be significantly smaller than √ hw∥T∥2 for different convolution filters . Thus , we take the minimum of these 4 quantities to bound the spectral norm of a convolution filter . In Section 4 , we show that our bound can be used to improve the generalization and robustness properties of neural networks . Specifically , we show that using our bound as a regularizer during training , we can achieve improvement in accuracy on par with exact method ( Sedghi et al. , 2019 ) while being significantly faster to train ( Table 4 ) . We also achieve significantly higher robustness certificates against adversarial attacks than CNN-Cert ( Boopathy et al. , 2018 ) on a single layer CNN ( Table 5 ) . These results demonstrate potentials for practical uses of our results . Code is available at the github repository : https : //github.com/singlasahil14/fantastic-four . 2 NOTATION . For a vector v , we use vj to denote the element in the jth position of the vector . We use Aj , ∶ and A∶ , k to denote the jth row and kth column of the matrix A respectively . 
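A reason such per-layer bounds matter for robustness, as noted above, is that with 1-Lipschitz activations (e.g. ReLU) the Lipschitz constant of a feed-forward network is at most the product of its layers' spectral norms, which can then be turned into certified robustness radii. The composition below is a generic sketch; layer_bound stands for any per-layer bound, such as the one sketched earlier.

```python
import torch

def network_lipschitz_bound(model, layer_bound):
    """Product of per-layer spectral-norm bounds.

    For a feed-forward network with 1-Lipschitz activations, this product
    upper-bounds the Lipschitz constant of the whole network.
    layer_bound(weight) is any per-layer bound, e.g. a conv bound, while
    fully connected layers use their exact spectral norm.
    """
    bound = 1.0
    for m in model.modules():
        if isinstance(m, torch.nn.Conv2d):
            bound = bound * layer_bound(m.weight)
        elif isinstance(m, torch.nn.Linear):
            bound = bound * torch.linalg.svdvals(m.weight)[0]
    return bound
```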
We assume both Aj , ∶ , A∶ , k to be column vectors ( thus Aj , ∶ is the transpose of jth row of A ) . Aj , k denotes the element in jth row and kth column of A . The same rules can be directly extended to higher order tensors . For a matrix A ∈ Rq×r and a tensor B ∈ Rp×q×r , vec ( A ) denotes the vector constructed by stacking the rows of A and vec ( B ) denotes the vector constructed by stacking the vectors vec ( Bj , ∶ , ∶ ) , j ∈ [ p−1 ] : vec ( A ) T = [ AT0 , ∶ , A T 1 , ∶ , ⋯ , A T q−1 , ∶ ] vec ( B ) T = [ BT0 , ∶ , ∶ , B T 1 , ∶ , ∶ , ⋯ , B T p−1 , ∶ , ∶ ] We use the following notation for a convolutional neural network . L denotes the number of layers and φ is the activation function . For an input x , we use z ( I ) ( x ) ∈RNI and a ( I ) ( x ) ∈RNI to denote the raw ( before applying φ ) and activated ( after applying φ ) neurons in the Ith hidden layer respectively . a ( 0 ) denotes the input image x . To simplify notation and when no confusion arises , we make the dependency of z ( I ) and a ( I ) to x implicit . φ ′ ( z ( I ) ) and φ ′′ ( z ( I ) ) denotes the elementwise first and second derivatives of φ at z ( 1 ) . W ( I ) denotes the weights for the Ith layer i.e W ( I ) will be a tensor for a convolution layer and a matrix for a fully connected layer . J ( I ) denotes the jacobian matrix of vec ( z ( 1 ) ) with respect to the input x. θ denotes the neural network parameters . fθ ( x ) denotes the softmax probabilities output by the network for an input x . For an input x and label y , the cross entropy loss is denoted by ` ( fθ ( x ) , y ) . | This paper propose to study the Lipschitz constant of convolutional layers and to give an easy to compute and differentiable upper bound. The upper bound is composed of 4 different bounds, based on tensor unfolding of the Jacobian, and taking the min of these 4 values. This upper bound is then used to train networks with spectral norm regularization and compared with network trained with singular value clipping from `Sedghi et al. (2019)`. The proposed bound gives similar performances with much cheaper computational time. | SP:5c782fa6fa0245b510c2aa33ea661b2f8cf09062 |
Distributionally Robust Learning for Unsupervised Domain Adaptation | 1 INTRODUCTION . In many real-world applications , the target domain for deployment of a machine-learning ( ML ) model can significantly differ from the source training domain . Furthermore , labels in the target domain are often more expensive to obtain compared to the source domain . An example is synthetic training where the source domain has complete supervision while the target domain of real images may not be labeled . Unsupervised domain adaptation ( UDA ) aims to maximize performance on the target domain , and it utilizes both the labeled source data and the unlabeled target data . A popular framework for UDA involves obtaining proxy labels in the target domain through selftraining ( Zou et al. , 2019 ) . Self-training starts with a classifier trained on the labeled source data . It then iteratively obtains pseudo-labels in the target domain using predictions from the current ML model . However , this process is brittle , since wrong pseudo-labels in the target domain can lead to catastrophic failure in early iterations ( Kumar et al. , 2020 ) . To avoid this , self training needs to be conservative and select only pseudo-labels with sufficiently high confidence level . This entails accurate knowledge of the confidence levels . Accurate confidence estimation is a challenge for current deep learning models . Deep learning models tend to produce over-confident and misleading probabilities , even when predicting on the same distribution ( Guo et al. , 2017a ; Gal & Ghahramani , 2016 ) . Some attempts to remedy this issue include temperature scaling ( Platt et al. , 1999 ) , Monte-Carlo sampling ( Gal & Ghahramani , 2016 ) and Bayesian inference ( Blundell et al. , 2015 ; Riquelme et al. , 2018 ) . However , Snoek et al . ( 2019 ) has shown that the uncertainty estimation from these models can not be trusted under domain shifts . In this paper , we instead consider the distributionally robust learning ( DRL ) framework ( Liu & Ziebart , 2014 ; 2017 ) which provides a principled approach for uncertainty quantification under domain shifts . DRL can be formulated as a two-player adversarial risk minimization game , as depicted in Figure 1 ( a ) . Recall that the standard framework of empirical risk minimization ( ERM ) directly learns a predictor P̂ ( Y |X ) from training data . In contrast , DRL also includes an adversaryQ ( Y |X ) that is allowed to perturb the labels , subject to certain feature-matching constraints to ensure datacompatibility . Formally , the minimax game for DRL is : min P̂ ( Y |X ) max Q ( Y |X ) lossPt ( X ) ( P̂ ( Y |X ) , Q ( Y |X ) ) , ( 1 ) where the adversary Q ( Y |X ) is constrained to match the evaluation of a set of features Φ ( x , y ) to that of the source distribution ( see Section 2 for details ) . Note that the loss in ( 1 ) is evaluated under the target input distribution Pt ( X ) , and the predictor does not have direct access to the source data { Xs , Ys } . Instead , the predictor optimizes the target loss by playing a game with an adversary constrained by source data . A special case of UDA is the covariate shift setting , where the label-generating distribution P ( Y |X ) is assumed to be the same in both source and target domains . Under this assumption , with log-loss and a linear predictor parameterized by θ and features Φ ( x , y ) , ( 1 ) reduces to : P̂ ( y|x ) ∝ exp ( Ps ( x ) Pt ( x ) θ · Φ ( x , y ) ) . 
( 2 ) Intuitively , the density ratio Ps ( x ) /Pt ( x ) prevents the model from being overconfident on target inputs far away from the source domain . Thus , the DRL framework is a principled approach for conservative confidence estimation . Previous works have shown that DRL is highly effective in safety-critical applications such as safe exploration in control systems ( Liu et al. , 2020 ) and safe trajectory planning ( Nakka et al. , 2020 ) . However , these works only consider estimating the density ratio in low dimensions ( e.g . control inputs ) using standard kernel density estimator ( KDE ) and extending it to high-dimensional inputs such as images remains an open challenge . Moreover , it is not clear if the covariate-shift assumption holds for common high-dimensional settings such as images — which we investigate in this paper . In this paper , we propose a novel deep-learning method based on the DRL framework for accurate uncertainties that scales to modern domain-adaptation tasks in computer vision . Summary of Contributions : 1 . We develop differentiable density-ratio estimation as part of the DRL framework to enable efficient end-to-end training . See Figure 1 ( b ) . 2 . We employ DRL ’ s confidence estimation in the self-training framework for domain adaptation and term it as distributionally robust self-training ( DRST ) . See Figure 2 . 3 . We further combine it with automated synthetic to real generalization ( ASG ) framework of ( Chen et al. , 2020b ) to improve generalization in the real target domain when the source domain consists of synthetic images . 4 . We demonstrate that DRST generates more calibrated probabilities . DRST-ASG achieves competitive accuracy on the VisDA2017 dataset ( Peng et al. , 2017 ) with 1 % improvement over the baseline class-regularized self-training ( CRST ) using the standard soft-max confidence measure . 5 . We analyze the reason for the effectiveness of DRST through a careful ablation study . One challenge for training DRL is that the training loss can not be directly evaluated under the UDA framework . However , we show that the gradients of the target loss can indeed be evaluated . By deriving gradients for both neural networks and proposing a joint training algorithm ( Alg . 1 ) , we show the network can be trained efficiently . We also directly incorporate class regularization in the minimax game under our DRL framework . This is a principled approach in contrast to standard label smoothing incorporated on top of a given learning method . In our ablation studies , we observe that the covariate-shift assumption progressively holds to a greater extent as the iterations in self-training proceed . This is also correlated with the greater ability to capture shape features through self training , as seen in the Grad-CAM visualization ( Selvaraju et al. , 2017 ) . 2 PROPOSED FORMULATION AND ALGORITHMS . In this section , we first introduce the class regularized DRL framework ( 2.1 ) and then propose differentiable density ratio estimation to enable end-to-end learning of DRL using neural networks . We provide training details , especially the gradient computation for training both networks ( 2.2 ) . This is unique for our setting since the actual training loss on target can not be evaluated due to lack of target labels . Finally , we propose our self-training algorithm DRST in 2.3 . 2.1 DISTRIBUTIONALLY ROBUST LEARNING WITH CLASS REGULARIZATION . 
We are interested in robustly minimizing the classification loss on the target domain with confidence of adversary ’ s prediction regularized . We use a weighted logloss term to penalize high confidence in the adversary ’ s label prediction as the regularization . We make the same covariate shift assumption as in ( Liu & Ziebart , 2014 ) that only the marginal input distribution changes and P ( y|x ) is shared between source and target : Ps ( x ) 6= Pt ( x ) and Ps ( y|x ) = Pt ( y|x ) . We aim to solve the following : min P̂ ( Y |X ) max Q ( Y |X ) ∈Ξ loglossPt ( X ) ( P̂ ( Y |X ) , Q ( Y |X ) ) − rEPt ( x ) Q ( y|x ) [ Y log P̂ ( Y |X ) ] , ( 3 ) where Y is a one-hot encoding of the class and r ∈ [ 0 , 1 ] is a hyper-parameter that controls the level of regularization . In this formulation , the estimator player P̂ ( Y |X ) first chooses a conditional label distribution to minimize the regularized logloss on the target domain and then an adversarial playerQ ( Y |X ) chooses a conditional label distribution from the set ( Ξ ) to maximize the regularized logloss . The constraint set Ξ defines how much flexibility we want to give to the adversary . Usually , we design feature functions on both X and Y and restrict the adversary to match statistics of the expectation of the features . We have the following lemma : Lemma 1 . If we choose feature map Φ ( X , Y ) as the statistics to constrainQ ( Y |X ) , then equation 3 can be reduced to a reguralized maximum entropy problem with the estimator constrained : max P̂ ( Y |X ) EPt ( x ) P̂ ( y|x ) [ −log P̂ ( Y |X ) ] − rEPt ( x ) P̂ ( y|x ) [ Y log P̂ ( Y |X ) ] ( 4 ) such that : P̂ ( Y |X ) ∈∆ and |EPs ( x ) P̂ ( y|x ) [ Φ ( X , Y ) ] − EPs ( x , y ) [ Φ ( X , Y ) ] | ≤ λ , where ∆ defines the conditional probability simplex that P̂ ( y|x ) must reside within , Φ is a vectorvalued feature function that is evaluated on input x , and EPs ( x , y ) [ Φ ( X , Y ) ] is a vector of the expected feature values that corresponds with the feature function . λ is the slack term of constraints . The proof of this lemma involves strong duality of the convex-concave function , such that the min and max player can switch the order . We refer more details to the appendix . The following theorem states the solution of the problem : Theorem 1 . The parametric solution of ( 4 ) for P̂ ( y|x ) takes the form : P̂ ( y|x ) ∝ exp Ps ( x ) Pt ( x ) θ · Φ ( x , y ) + ry ry + 1 , ( 5 ) where the parameter θ can be optimized by maximizing the log-likelihood on the target distribution . The gradients take the form : ∇θEPt ( x ) P ( y|x ) [ − log P̂θ ( Y |X ) ] = EP̃s ( x ) P̂ ( y|x ) [ Φ ( X , Y ) ] − c̃ , ( 6 ) where c̃ , EP̃s ( x ) P̃ ( y|x ) [ Φ ( X , Y ) ] , as the empirical evaluation of the feature expectations . Here P̃s ( x ) and P̃ ( y|x ) are the empirical distribution . In principle , P ( y|x ) is the ground truth conditional label distribution shared between source and target domains . We call EPt ( x ) P ( y|x ) [ − log P̂θ ( Y |X ) ] the expected target loss in the paper . Even though it is not available in practice , we can approximate the gradients ( 6 ) using the source data in training . The norm of the approximated gradient converges to the true gradient in the rate of O ( 1/m ) , where m is the amount of source data . The proof involves application of the Lagrangian multiplier , setting the derivative of each specific P̂ ( y|x ) to 0 and utilizing the KKT condition . We refer details to the appendix . 
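To make the parametric form and its gradient concrete, the sketch below implements the unregularised covariate-shift case (equation (2), i.e. r = 0): the class scores are tempered by the density ratio Ps(x)/Pt(x), and the gradient of the expected target loss is the difference between the model's and the empirical feature expectations on source data (equation (6)). The feature maps and density-ratio values are assumed to be supplied; how they are obtained is the subject of the rest of the section.

```python
import numpy as np

def drl_predict(theta, feats, ratio):
    """Robust prediction P_hat(y|x) over classes (equation (2), r = 0).

    feats: array (K, d) holding Phi(x, y) for each of the K classes;
    ratio: scalar estimate of P_s(x) / P_t(x) for this input.
    Inputs far from the source (small ratio) get near-uniform predictions.
    """
    logits = ratio * (feats @ theta)          # shape (K,)
    logits -= logits.max()                    # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def target_loss_gradient(theta, src_feats, src_labels, src_ratios):
    """Gradient of the expected target log-loss (equation (6)).

    Model feature expectation under P_hat minus the empirical source
    feature expectation, averaged over source samples.
    src_feats: (m, K, d); src_labels: (m,); src_ratios: (m,).
    """
    m, K, d = src_feats.shape
    grad = np.zeros(d)
    for i in range(m):
        p = drl_predict(theta, src_feats[i], src_ratios[i])
        grad += p @ src_feats[i]              # E_{P_hat(y|x)}[Phi(x, Y)]
        grad -= src_feats[i, src_labels[i]]   # empirical Phi(x, y)
    return grad / m
```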
Algorithm 1 End-to-end Training for DRL Input : Source data , Target data , DNN φ ) , DNN τ , SGD optimizerOpt1 andOpt2 , learning rate γ1 and γ2 , epoch number T . Initialization : φ , τ ← random initialization , epoch← 0 While epoch < T For each mini-batch Compute ( 10 ) back propagate Back propagate Ld Optimizer Opt1 ( γ1 ) updates β Compute p̂ following ( 5 ) . Compute gradients using ( 9 ) Back-propagate through∇φLc w = w − γ2 · ∇wLc b = b− γ2 · ∇bLc Optimizer Opt2 ( γ2 ) updates α epoch← epoch +1 Output : Trained α , β , w , b We use this form to illustrate the property of representation-level conservativeness and the class-level regularization of our formulation . Representation-level conservativeness : The prediction has higher certainty for inputs closer to the source domain , when magnitude of Ps ( x ) /Pt ( x ) is large . On the contrary , if the inputs are farther away from the source , which means Ps ( x ) /Pt ( x ) is small , the prediction is uncertain . Class-level regularization : Hyper-parameter r adjusts the smoothness of theQ ( Y |X ) ’ s label prediction in ( 3 ) . It translates to the ry terms in the parametric form . In training , we compute the gradients using source labels where y is the one-hot encoding of the class . In testing , we can set y to be all one vector to obtain smoothed confidence . In machine learning methods using density ratios , such as transfer learning ( Pan & Yang , 2009 ) , or off-policy evaluation ( Dudı́k et al. , 2011 ) , a plugin estimator for the density ratio Ps ( x ) /Pt ( x ) is used . However , density ratio estimation ( Sugiyama et al. , 2012 ) , especially in the high-dimensional data , is rather different . It is also not the case that more accurate density ratio estimation would lead to the better downstream task performance . We have a synthetic example shown in Appendix E. To scale up the method for modern domain adaptation tasks , we ask the question : can we train the density ratio estimation and the learning tasks that use the ratios together such that they share the common goal–the target domain predictive performance ? | I find the paper to be well motivated. Self-labeling has proven to be a useful approach for unsupervised domain adaptation. And since wrong pseudo-labels in the target domain result catastrophic failure in early iterations, it makes sense to calibrate the production of pseudo-labels through the use of uncertainty estimation. This is done through the framework of distributionally robust learning. | SP:677142c2fc75609c7728334a2adeebf0b4620453 |
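The self-training framework that DRST builds on — train on source, pseudo-label the target, and keep only labels above a confidence threshold — can be summarised by the generic skeleton below. This is not the paper's Algorithm 1; in DRST the confidence would come from the distributionally robust predictor rather than the raw softmax used here, and model.predict_proba is an assumed interface.

```python
import numpy as np

def self_training_round(model, target_inputs, threshold=0.9):
    """One round of confidence-thresholded pseudo-labelling.

    Keep only target pseudo-labels whose confidence exceeds a threshold,
    as in the self-training framework described in the introduction.
    model.predict_proba is an assumed interface returning (m, K) probabilities.
    """
    probs = model.predict_proba(target_inputs)      # (m, K)
    conf = probs.max(axis=1)
    pseudo = probs.argmax(axis=1)
    keep = conf >= threshold
    return target_inputs[keep], pseudo[keep]

# A full run alternates: fit on source plus the kept pseudo-labelled target
# data, re-generate pseudo-labels, and typically grow the selected fraction
# over rounds.
```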
Distributionally Robust Learning for Unsupervised Domain Adaptation | 1 INTRODUCTION . In many real-world applications , the target domain for deployment of a machine-learning ( ML ) model can significantly differ from the source training domain . Furthermore , labels in the target domain are often more expensive to obtain compared to the source domain . An example is synthetic training where the source domain has complete supervision while the target domain of real images may not be labeled . Unsupervised domain adaptation ( UDA ) aims to maximize performance on the target domain , and it utilizes both the labeled source data and the unlabeled target data . A popular framework for UDA involves obtaining proxy labels in the target domain through selftraining ( Zou et al. , 2019 ) . Self-training starts with a classifier trained on the labeled source data . It then iteratively obtains pseudo-labels in the target domain using predictions from the current ML model . However , this process is brittle , since wrong pseudo-labels in the target domain can lead to catastrophic failure in early iterations ( Kumar et al. , 2020 ) . To avoid this , self training needs to be conservative and select only pseudo-labels with sufficiently high confidence level . This entails accurate knowledge of the confidence levels . Accurate confidence estimation is a challenge for current deep learning models . Deep learning models tend to produce over-confident and misleading probabilities , even when predicting on the same distribution ( Guo et al. , 2017a ; Gal & Ghahramani , 2016 ) . Some attempts to remedy this issue include temperature scaling ( Platt et al. , 1999 ) , Monte-Carlo sampling ( Gal & Ghahramani , 2016 ) and Bayesian inference ( Blundell et al. , 2015 ; Riquelme et al. , 2018 ) . However , Snoek et al . ( 2019 ) has shown that the uncertainty estimation from these models can not be trusted under domain shifts . In this paper , we instead consider the distributionally robust learning ( DRL ) framework ( Liu & Ziebart , 2014 ; 2017 ) which provides a principled approach for uncertainty quantification under domain shifts . DRL can be formulated as a two-player adversarial risk minimization game , as depicted in Figure 1 ( a ) . Recall that the standard framework of empirical risk minimization ( ERM ) directly learns a predictor P̂ ( Y |X ) from training data . In contrast , DRL also includes an adversaryQ ( Y |X ) that is allowed to perturb the labels , subject to certain feature-matching constraints to ensure datacompatibility . Formally , the minimax game for DRL is : min P̂ ( Y |X ) max Q ( Y |X ) lossPt ( X ) ( P̂ ( Y |X ) , Q ( Y |X ) ) , ( 1 ) where the adversary Q ( Y |X ) is constrained to match the evaluation of a set of features Φ ( x , y ) to that of the source distribution ( see Section 2 for details ) . Note that the loss in ( 1 ) is evaluated under the target input distribution Pt ( X ) , and the predictor does not have direct access to the source data { Xs , Ys } . Instead , the predictor optimizes the target loss by playing a game with an adversary constrained by source data . A special case of UDA is the covariate shift setting , where the label-generating distribution P ( Y |X ) is assumed to be the same in both source and target domains . Under this assumption , with log-loss and a linear predictor parameterized by θ and features Φ ( x , y ) , ( 1 ) reduces to : P̂ ( y|x ) ∝ exp ( Ps ( x ) Pt ( x ) θ · Φ ( x , y ) ) . 
( 2 ) Intuitively , the density ratio Ps ( x ) /Pt ( x ) prevents the model from being overconfident on target inputs far away from the source domain . Thus , the DRL framework is a principled approach for conservative confidence estimation . Previous works have shown that DRL is highly effective in safety-critical applications such as safe exploration in control systems ( Liu et al. , 2020 ) and safe trajectory planning ( Nakka et al. , 2020 ) . However , these works only consider estimating the density ratio in low dimensions ( e.g . control inputs ) using standard kernel density estimator ( KDE ) and extending it to high-dimensional inputs such as images remains an open challenge . Moreover , it is not clear if the covariate-shift assumption holds for common high-dimensional settings such as images — which we investigate in this paper . In this paper , we propose a novel deep-learning method based on the DRL framework for accurate uncertainties that scales to modern domain-adaptation tasks in computer vision . Summary of Contributions : 1 . We develop differentiable density-ratio estimation as part of the DRL framework to enable efficient end-to-end training . See Figure 1 ( b ) . 2 . We employ DRL ’ s confidence estimation in the self-training framework for domain adaptation and term it as distributionally robust self-training ( DRST ) . See Figure 2 . 3 . We further combine it with automated synthetic to real generalization ( ASG ) framework of ( Chen et al. , 2020b ) to improve generalization in the real target domain when the source domain consists of synthetic images . 4 . We demonstrate that DRST generates more calibrated probabilities . DRST-ASG achieves competitive accuracy on the VisDA2017 dataset ( Peng et al. , 2017 ) with 1 % improvement over the baseline class-regularized self-training ( CRST ) using the standard soft-max confidence measure . 5 . We analyze the reason for the effectiveness of DRST through a careful ablation study . One challenge for training DRL is that the training loss can not be directly evaluated under the UDA framework . However , we show that the gradients of the target loss can indeed be evaluated . By deriving gradients for both neural networks and proposing a joint training algorithm ( Alg . 1 ) , we show the network can be trained efficiently . We also directly incorporate class regularization in the minimax game under our DRL framework . This is a principled approach in contrast to standard label smoothing incorporated on top of a given learning method . In our ablation studies , we observe that the covariate-shift assumption progressively holds to a greater extent as the iterations in self-training proceed . This is also correlated with the greater ability to capture shape features through self training , as seen in the Grad-CAM visualization ( Selvaraju et al. , 2017 ) . 2 PROPOSED FORMULATION AND ALGORITHMS . In this section , we first introduce the class regularized DRL framework ( 2.1 ) and then propose differentiable density ratio estimation to enable end-to-end learning of DRL using neural networks . We provide training details , especially the gradient computation for training both networks ( 2.2 ) . This is unique for our setting since the actual training loss on target can not be evaluated due to lack of target labels . Finally , we propose our self-training algorithm DRST in 2.3 . 2.1 DISTRIBUTIONALLY ROBUST LEARNING WITH CLASS REGULARIZATION . 
2.1 DISTRIBUTIONALLY ROBUST LEARNING WITH CLASS REGULARIZATION. We are interested in robustly minimizing the classification loss on the target domain while regularizing the confidence of the adversary's prediction. As the regularizer, we use a weighted log-loss term that penalizes high confidence in the adversary's label prediction. We make the same covariate-shift assumption as in (Liu & Ziebart, 2014): only the marginal input distribution changes and P(y|x) is shared between source and target, i.e., Ps(x) ≠ Pt(x) and Ps(y|x) = Pt(y|x). We aim to solve the following:

min_{P̂(Y|X)} max_{Q(Y|X)∈Ξ} logloss_{Pt(X)}( P̂(Y|X), Q(Y|X) ) − r E_{Pt(x) Q(y|x)}[ Y log P̂(Y|X) ],   (3)

where Y is a one-hot encoding of the class and r ∈ [0, 1] is a hyper-parameter that controls the level of regularization. In this formulation, the estimator player P̂(Y|X) first chooses a conditional label distribution to minimize the regularized log-loss on the target domain, and then an adversarial player Q(Y|X) chooses a conditional label distribution from the set Ξ to maximize the regularized log-loss. The constraint set Ξ defines how much flexibility we want to give to the adversary. Usually, we design feature functions on both X and Y and restrict the adversary to match the expectation of these features. We have the following lemma:

Lemma 1. If we choose the feature map Φ(X, Y) as the statistics constraining Q(Y|X), then equation 3 can be reduced to a regularized maximum entropy problem with the estimator constrained:

max_{P̂(Y|X)} E_{Pt(x) P̂(y|x)}[ −log P̂(Y|X) ] − r E_{Pt(x) P̂(y|x)}[ Y log P̂(Y|X) ]   (4)
such that: P̂(Y|X) ∈ Δ and | E_{Ps(x) P̂(y|x)}[ Φ(X, Y) ] − E_{Ps(x,y)}[ Φ(X, Y) ] | ≤ λ,

where Δ is the conditional probability simplex in which P̂(y|x) must reside, Φ is a vector-valued feature function evaluated on input x, and E_{Ps(x,y)}[ Φ(X, Y) ] is the vector of expected feature values corresponding to the feature function. λ is the slack term of the constraints. The proof of this lemma uses strong duality of the convex-concave function, so that the min and max players can switch order. We refer to the appendix for more details. The following theorem states the solution of the problem:

Theorem 1. The parametric solution of (4) for P̂(y|x) takes the form:

P̂(y|x) ∝ exp( ( (Ps(x)/Pt(x)) θ · Φ(x, y) + ry ) / ( ry + 1 ) ),   (5)

where the parameter θ can be optimized by maximizing the log-likelihood on the target distribution. The gradients take the form:

∇θ E_{Pt(x) P(y|x)}[ −log P̂θ(Y|X) ] = E_{P̃s(x) P̂(y|x)}[ Φ(X, Y) ] − c̃,   (6)

where c̃ ≜ E_{P̃s(x) P̃(y|x)}[ Φ(X, Y) ] is the empirical evaluation of the feature expectations. Here P̃s(x) and P̃(y|x) are the empirical distributions.

In principle, P(y|x) is the ground-truth conditional label distribution shared between the source and target domains. We call E_{Pt(x) P(y|x)}[ −log P̂θ(Y|X) ] the expected target loss in this paper. Even though it is not available in practice, we can approximate the gradients (6) using the source data in training. The approximated gradient converges to the true gradient (in norm) at a rate of O(1/m), where m is the amount of source data. The proof involves applying Lagrangian multipliers, setting the derivative with respect to each specific P̂(y|x) to 0, and utilizing the KKT conditions. We refer to the appendix for details.
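Gradient (6) has the familiar moment-matching form of maximum-entropy models: the model's expected features under P̂(y|x) on source inputs, minus the empirical feature expectation c̃. The sketch below evaluates it for a mini-batch; it omits the class-regularization terms of (5) for brevity, and the NumPy implementation, names, and shapes are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def target_loss_grad(phi, theta, ratio, y_idx):
    """Moment-matching gradient in the spirit of Eq. (6).

    phi:    [B, K, d]  features Phi(x, y) for B source examples, K classes
    theta:  [d]        current predictor parameters
    ratio:  [B]        density ratios Ps(x)/Pt(x)
    y_idx:  [B]        ground-truth source labels
    Returns a [d] gradient: model feature expectation minus the empirical one.
    """
    logits = ratio[:, None] * np.einsum("bkd,d->bk", phi, theta)
    p_hat = softmax(logits)                                     # P_hat(y|x)
    model_expect = np.einsum("bk,bkd->d", p_hat, phi) / len(y_idx)
    empirical = phi[np.arange(len(y_idx)), y_idx].mean(axis=0)  # c_tilde
    return model_expect - empirical
```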
Algorithm 1 End-to-end Training for DRL
Input: source data, target data, DNN φ, DNN τ, SGD optimizers Opt1 and Opt2, learning rates γ1 and γ2, number of epochs T.
Initialization: φ, τ ← random initialization; epoch ← 0.
While epoch < T:
  For each mini-batch:
    Compute Ld following (10) and back-propagate Ld.
    Optimizer Opt1 (learning rate γ1) updates β.
    Compute p̂ following (5).
    Compute gradients using (9) and back-propagate through ∇φ Lc.
    w ← w − γ2 · ∇w Lc
    b ← b − γ2 · ∇b Lc
    Optimizer Opt2 (learning rate γ2) updates α.
  epoch ← epoch + 1
Output: trained α, β, w, b.

We use this form to illustrate the representation-level conservativeness and the class-level regularization of our formulation. Representation-level conservativeness: the prediction has higher certainty for inputs closer to the source domain, where the magnitude of Ps(x)/Pt(x) is large. Conversely, if the inputs are farther away from the source, meaning Ps(x)/Pt(x) is small, the prediction is uncertain. Class-level regularization: the hyper-parameter r adjusts the smoothness of Q(Y|X)'s label prediction in (3). It translates into the ry terms in the parametric form. In training, we compute the gradients using source labels, where y is the one-hot encoding of the class. In testing, we can set y to the all-ones vector to obtain smoothed confidence.

In machine learning methods that use density ratios, such as transfer learning (Pan & Yang, 2009) or off-policy evaluation (Dudík et al., 2011), a plug-in estimator for the density ratio Ps(x)/Pt(x) is used. However, density ratio estimation (Sugiyama et al., 2012), especially for high-dimensional data, is rather difficult. It is also not the case that a more accurate density ratio estimate necessarily leads to better downstream task performance; we show a synthetic example in Appendix E. To scale the method up to modern domain-adaptation tasks, we ask: can we train the density-ratio estimator and the learning task that uses the ratios together, so that they share a common goal, namely predictive performance on the target domain?
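A minimal PyTorch-style rendering of the joint training idea behind Algorithm 1 is sketched below: one network produces the density ratio, the other the class potentials, and each is updated with its own optimizer within every mini-batch. The module and loss names (`ratio_net`, `classifier`, `ratio_loss`, `drl_loss`) are placeholders introduced for illustration; the actual losses correspond to the paper's equations (10) and (9), which are not reproduced here.

```python
import torch

def train_drl(ratio_net, classifier, src_loader, tgt_loader,
              ratio_loss, drl_loss, epochs=10, lr1=1e-3, lr2=1e-3):
    """Sketch of the alternating updates in Algorithm 1 (not the paper's code)."""
    opt1 = torch.optim.SGD(ratio_net.parameters(), lr=lr1)
    opt2 = torch.optim.SGD(classifier.parameters(), lr=lr2)
    for _ in range(epochs):
        for (x_s, y_s), (x_t, _) in zip(src_loader, tgt_loader):
            # Step 1: update the density-ratio network (loss L_d, Eq. (10)).
            l_d = ratio_loss(ratio_net(x_s), ratio_net(x_t))
            opt1.zero_grad(); l_d.backward(); opt1.step()
            # Step 2: update the classifier with the DRL loss (Eq. (9)),
            # holding the (detached) ratio estimates fixed.
            with torch.no_grad():
                r_s = ratio_net(x_s)
            l_c = drl_loss(classifier(x_s), y_s, r_s)
            opt2.zero_grad(); l_c.backward(); opt2.step()
    return ratio_net, classifier
```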
Matrix Shuffle-Exchange Networks for Hard 2D Tasks | 1 INTRODUCTION . Data often comes in a form of two-dimensional matrices . Neural networks are often used for processing such data usually involving convolution as the primary processing method . But convolutions are local , capable of analyzing only neighbouring positions in the data matrix . That is good for images since the neighbouring pixels are closely related , but not sufficient for data having more distant relationships . In this paper , we consider the problem how to efficiently process 2D data in a way that allows both local and long-range relationship modelling and propose a new neural architecture , called Matrix Shuffle-Exchange network , to this end . The complexity of the proposed architecture is O ( n2 log n ) for processing n × n data matrix , which is significantly lower than O ( n4 ) if one would use the attention ( Bahdanau et al. , 2014 ; Vaswani et al. , 2017 ) in its pure form . The architecture is derived from the Neural Shuffle-Exchange networks ( Freivalds et al. , 2019 ; Draguns et al. , 2020 ) by lifting their architecture from 1D to 2D . We validate our model on tasks with differently structured 2D input/output data . It can handle complex data inter-dependencies present in algorithmic tasks on matrices such as transposition , rotation , arithmetic operations and matrix multiplication . Our model reaches the perfect accuracy on test instances of the same size it was trained on and generalizes on much larger instances . In contrast , a convolutional baseline can be trained only on small instances and does not generalize . The generalization capability is an important measure for algorithmic tasks to say that the model has learned an algorithm , not only fitted the training data . Our model can be used for processing graphs by representing a graph with its adjacency matrix . It has a significant advantage over graph neural networks ( GNN ) in case of dense graphs having additional data associated with graph edges ( for example , edge length ) since GNNs typically attach data to vertices , not edges . We demonstrate that the proposed model can infer important local and non-local graph concepts by evaluating it on component labelling , triangle finding and transitivity tasks . It reaches the perfect accuracy on test instances of the same size it was trained on and generalizes on larger instances while the GNN baseline struggles to find these concepts even on small graphs . The model can perform complex logical reasoning required to solve Sudoku puzzles . It achieves 100 % correct solutions on easy puzzles and 96.6 % on hard puzzles which is on par with the state-of-the-art deep learning model which was specifically tailored for logical reasoning tasks . 2 RELATED WORK . Convolutional Neural Networks ( CNN ) are the primary tools for processing data with a 2D gridlike topology . For instance , VGG ( Simonyan & Zisserman , 2014 ) and ResNet ( He et al. , 2016a ) enable high-accuracy image classification . Convolution is an inherently local operation that limits CNN use for long-range dependency modelling . The problem can be mitigated by using dilated ( atrous ) convolutions that have an expanded receptive field . Such an approach works well on image segmentation ( Yu & Koltun , 2015 ) but is not suitable for algorithmic tasks where generalization on larger inputs is crucial . The attention mechanism ( Bahdanau et al. , 2014 ; Vaswani et al. 
, 2017 ) is a widespread way of solving the long-range dependency problem in sequence tasks . Unfortunately , its application to 2D data is limited due to it ’ s high O ( n4 ) time complexity for n× n input matrix . Various sparse attention mechanisms have been proposed to deal with the quadratic complexity of dense attention by attending only to a small predetermined subset of locations ( Child et al. , 2019 ; Beltagy et al. , 2020 ; Zaheer et al. , 2020 ) . Reformer ( Kitaev et al. , 2020 ) uses locality-sensitive hashing to approximate attention in time O ( n log n ) . Linformer ( Wang et al. , 2020 ) uses a linear complexity approximation to the original attention by creating a low-rank factorization of the attention matrix . Sparse attention achieves great results on language modelling tasks , yet their application to complex data , where attending to entire input is required , is limited . Graph Convolutional Neural Networks ( Micheli , 2009 ; Atwood & Towsley , 2016 ) generalizes the convolution operation from the grid to graph data . They have emerged as powerful tools for processing graphs with complex relations ( see Wu et al . ( 2019 ) for great reference ) . Such networks have successfully been applied to image segmentation ( Gong et al. , 2019 ) , program reasoning ( Allamanis et al. , 2018 ) , and combinatorial optimization ( Li et al. , 2018 ) tasks . Nonetheless , Xu et al . ( 2018 ) has shown that Graph Convolutional Networks may not distinguish some simple graph structures . To alleviate this problem they introduce Graph Isomorphism Network , that is as powerful as Weisfeiler-Lehman graph isomorphism test and is the most expressive among Graph Neural Network models . Neural algorithm synthesis and induction is a widely explored topic for 1D sequence problems ( Abolafia et al. , 2020 ; Freivalds et al. , 2019 ; Freivalds & Liepins , 2018 ; Kaiser & Sutskever , 2015 ; Draguns et al. , 2020 ) but for 2D data , only a few works exist . Shin et al . ( 2018 ) has proposed Karel program synthesis from the input-output image pairs and execution trace . Differentiable Neural Computer ( Graves et al. , 2016 ) , which employs external memory , has been applied to the SGRDLU puzzle game , shortest path finding and traversal tasks on small synthetic graphs . Several neural network architectures have been developed for learning to play board games ( Silver et al. , 2018 ) , including chess ( David et al. , 2016 ) and go ( Silver et al. , 2016 ; 2017 ) , often using complex architectures or reinforcement learning . 3 1D SHUFFLE-EXCHANGE NETWORKS . Here we review the Neural Shuffle-Exchange ( NSE ) network for sequence-to-sequence processing , recently introduced by Freivalds et al . ( 2019 ) and revised by Draguns et al . ( 2020 ) . This architecture offers an efficient alternative to the attention mechanism and allows modelling of long-range dependencies in sequences of length n in O ( n log n ) time . NSE network is a neural adaption of well-known Shuffle-Exchange and Beneš interconnections networks , that allows linking any two devices using a logarithmic number of switching layers ( see Dally & Towles ( 2004 ) for an excellent introduction ) . The NSE network works for sequences of length n = 2k , where k ∈ Z+ , and it consists of alternating Switch and Shuffle layers . Although all the levels are of the same structure , a network formed of 2k− 2 Switch and Shuffle layers can learn a broad class of functions , including arbitrary permutation of elements . 
Such a network is called a Beneš block . A deeper and more expressive network may be obtained by stacking several Beneš blocks ; for most tasks , two blocks are enough . The first k− 1 Switch and Shuffle layers of the Beneš block form Shuffle-Exchange block , the rest of the k − 1 Switch and Inverse Shuffle layers form its mirror counterpart . In the Beneš block , only Switch layers have learnable parameters and weight sharing is employed between layers of the same Shuffle-Exchange block . A single Shuffle-Exchange block is sufficient to guarantee that any input is connected to any output through the network , i.e . the network has ‘ receptive field ’ spanning the whole sequence . In the Switch layer , elements of the input sequence are divided into adjacent non-overlapping pairs , and the Switch Unit is applied to each pair . The Residual Switch Unit ( RSU ) proposed by Draguns et al . ( 2020 ) works the best . RSU has two inputs [ i1 , i2 ] , two outputs [ o1 , o2 ] , and two linear transformations on the feature dimension . After the first transformation , root mean square layer normalization ( RMSNorm ) ( Zhang & Sennrich , 2019 ) and Gaussian Error Linear Unit ( GELU ) ( Hendrycks & Gimpel , 2016 ) follow . By default , the hidden representation has two times more feature maps than the input . The second linear transformation is applied after GELU and its output is scaled by a scalar h. Additionally , the output of the unit is connected to its input using a residual connection that is scaled by a learnable parameter s. RSU is defined as follows : i = [ i1 , i2 ] g = GELU ( RMSNorm ( Zi ) ) c = Wg + b [ o1 , o2 ] = σ ( s ) i + h c In the above equations , Z , W are weight matrices of size 2m × 4m and 4m × 2m , respectively , where m is the number of feature maps ; s is a vector of size 2m and b is a bias vector – all of those are learnable parameters ; denotes element-wise vector multiplication and σ is the sigmoid function . s is initialized as σ−1 ( r ) and h is initialized as √ 1− r2 ∗ 0.25 , where r = 0.9 . The Shuffle layer performs the perfect shuffle permutation of the sequence elements . In this permutation , the destination address of an element is obtained by cyclic bit rotation of its source address . The rotation to the right is used for the Shuffle layer and to the left for Inverse Shuffle layer . Shuffle layers do not have learnable parameters . 4 THE MATRIX SHUFFLE-EXCHANGE NETWORK . The naive way to employ NSE on 2D data would be by transforming an input matrix into a sequence using raster scan flattening ( e.g. , tf.reshape ) , but such a solution has two main drawbacks . Firstly , NSE requires more than 8 seconds for 1M element sequence ( which matches flattened 1024x1024 matrix ) in inference mode ( Draguns et al. , 2020 ) , making its use unpractical on large matrices . Secondly , raster scan flattening doesn ’ t preserve element locality if the inputs can be of different sizes , limiting the networks ability to generalize on large inputs – an essential requirement for algorithmic tasks . We propose Matrix Shuffle-Exchange ( Matrix-SE ) network – an adoption of NSE for 2D data , that is significantly faster , generalizes on large matrices and retains the property of receptive field spanning the whole input matrix ( see Appendix B for more details ) . The model is suitable for processing n×n input array , where n = 2k for k ∈ Z+ and each element is vector of m feature maps . Inputs that don ’ t fulfil this requirement has to be padded to the closest 2k × 2k array . 
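Before moving to the 2D construction, the 1D building blocks reviewed above are compact enough to sketch directly. The module below follows the stated RSU equations (two linear maps with RMSNorm and GELU in between, plus a scaled residual) and the bit-rotation view of the Shuffle layer; the class structure, the pair-batched input layout, and treating h as a constant are our own reading of the definitions, not the authors' code.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """Root-mean-square layer normalization (Zhang & Sennrich, 2019)."""
    def __init__(self, d, eps=1e-8):
        super().__init__()
        self.g = nn.Parameter(torch.ones(d))
        self.eps = eps
    def forward(self, x):
        return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps) * self.g

class RSU(nn.Module):
    """Residual Switch Unit on element pairs: [o1, o2] = sigmoid(s) * i + h * c."""
    def __init__(self, m, r=0.9):
        super().__init__()
        self.Z = nn.Linear(2 * m, 4 * m, bias=False)   # first linear map
        self.W = nn.Linear(4 * m, 2 * m)               # second linear map (bias b)
        self.norm = RMSNorm(4 * m)
        self.s = nn.Parameter(torch.full((2 * m,), math.log(r / (1 - r))))  # sigma^-1(r)
        self.h = math.sqrt(1 - r ** 2) * 0.25
    def forward(self, i):                              # i: [batch, n_pairs, 2m]
        c = self.W(F.gelu(self.norm(self.Z(i))))
        return torch.sigmoid(self.s) * i + self.h * c

def perfect_shuffle(x, inverse=False):
    """Shuffle layer: destination index = cyclic bit rotation of the source index."""
    n = x.shape[1]
    k = n.bit_length() - 1
    idx = torch.arange(n)
    if not inverse:                                    # rotate index bits right
        src = (idx >> 1) | ((idx & 1) << (k - 1))
    else:                                              # rotate index bits left
        src = ((idx << 1) & (n - 1)) | (idx >> (k - 1))
    return x[:, src]
```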
The Matrix-SE network works by rearranging the input matrix into a 1D sequence , then applying several interleaved Quaternary Switch and Quaternary Shuffle layers and then converting data back to 2D , see Fig . 1 . Rearranging of n× n input to n2 sequence has to be done in such way that enables generalization on larger matrices . For this purpose , we read out values according to the Z-Order curve , as it has recursive structure and it preserves element locality regardless of the input size.1 The resulting ordering is the same as one would get from a depth-first traversal of a quadtree . We call this transformation a Z-Order flatten . The left matrix in Fig . 2 shows matrix elements indexed according to Z-Order curve . The Z-Order unflatten transforms n2 output sequence of the last Quaternary Switch layer to n× n representation according to Z-Order curve indexing . The middle part involving Quaternary Switch ( QSwitch ) and Quaternary Shuffle ( QShuffle ) layers is structured as one or more Beneš blocks the same way as in the NSE network . The QSwitch and QShuffle layers differ from Switch and Shuffle layers in NSE , by performing the computations in groups of four instead of two . Such principle ( using 4-to-4 switches instead of 2-to-2 ) has been proved for Shuffle-Exchange computer networks ( Dally & Towles , 2004 ) . That reduces network 1https : //en.wikipedia.org/wiki/Z-order_curve depth 2x times preserving the theoretical properties of NSE . Each Beneš block consists of interleaved k−1 QSwitch and QShuffle layers that form Shuffle-Exchange block , followed by k−1 QSwitch and Inverse QShuffle layers that form mirror counterpart of the first Shuffle-Exchange block . In contrast with NSE , we finalize Beneš block with one more QSwitch layer , making it more alike to Beneš computer networks . Last QSwitch layer serves as an output for the Beneš block and improves model expressiveness . Weight sharing is employed between QSwitch layers of same Shuffle-Exchange block . The last QSwitch layer of the Beneš block does not participate in weight sharing . If the network has more than one Beneš block , each block receives a distinct set of weights . In the QSwitch layer , we divide adjacent elements of the sequence ( corresponds to flattened 2D input ) into non-overlapping 4 element tuples . That is implemented by reshaping the input sequence into a 4 times shorter sequence where 4 elements are concatenated along the feature dimension as [ i1 , i2 , i3 , i4 ] . Then Quaternary Switch Unit ( QSU ) is applied to each tuple . QSU is based on RSU but has 4 inputs [ i1 , i2 , i3 , i4 ] and 4 outputs [ o1 , o2 , o3 , o4 ] . The rest of the Unit structure is left unchanged from RSU . The QSU is defined as follows : i = [ i1 , i2 , i3 , i4 ] g = GELU ( RMSNorm ( Zi ) ) c = Wg + b [ o1 , o2 , o3 , o4 ] = σ ( s ) i + h c In the above equations , Z , W are weight matrices of size 4m× 8m and 8m× 4m , respectively ; s is a vector of size 4m and b is a bias vector – all of those are learnable parameters ; h is a scalar value ; denotes element-wise vector multiplication and σ is the sigmoid function . We initialize s and h same as in RSU . Weights sharing is employed between all QSUs of the same QSwitch layer . The QShuffle layer rearranges the elements according to the cyclic digit rotation permutation . 
This permutation for a sequence S is defined as S [ x ] = S [ qrotate ( x , k ) ] , where qrotate ( x , k ) applies cyclic shift by one digit to x which is treated as a quaternary ( base 4 ) number and k is its length in quaternary digits.2 For the first k − 1 QShuffle layers of the Beneš block , we apply rotation to the right – to the left for the remaining k − 1 layers . | The paper proposes a network architecture called Matrix Shuffle-Exchange (Matrix-SE) that can learn many logical reasoning tasks on 2D data and graph. It has complexity O(n^2 log n) for 2D input of size n x n, which is much smaller than the complexity of naive attention applied to 2D data (O(n^4)). The proposed architecture is an adaptation of the Neural Shuffle-Exchange network architecture (Freivalds et al., 2019), moving from 1D to 2D data. This adaptation is done by using a Z-order iteration of the 2D input, then performing radix-4 shuffle and radix-4 exchange, instead of radix-2. This model is shown to be able to solve several hard tasks on 2D data, such as inferring algorithms on binary matrices (transpose, rotation, bitwise XOR, matrix squaring), graph operations (component labeling, triangle finding, transitivity), and solving Sudoku puzzles. The experiments show impressive results and the model's ability to generalize to test inputs of larger sizes than those in the training set. | SP:05e0d4b6ccaac1bdf0ffa78bad02722d4dfd4659 |
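The Z-Order flattening and the QShuffle permutation described above both reduce to index bookkeeping: Z-order interleaves the bits of the row and column coordinates, and qrotate cyclically rotates the base-4 digits of a position index. The pure-Python sketch below illustrates one plausible reading of both mappings; the exact digit order and rotation direction used in the paper's figures may differ.

```python
def z_order_indices(n):
    """Row-major -> Z-order permutation for an n x n grid (n = 2^k):
    the Z-position of cell (r, c) interleaves the bits of r and c."""
    def interleave(r, c):
        z = 0
        for b in range(n.bit_length()):
            z |= ((r >> b) & 1) << (2 * b + 1)   # row bits at odd positions
            z |= ((c >> b) & 1) << (2 * b)       # column bits at even positions
        return z
    return sorted(range(n * n), key=lambda i: interleave(i // n, i % n))

def qrotate(x, k, right=True):
    """Cyclically rotate the k quaternary (base-4) digits of x by one digit."""
    if right:
        return (x >> 2) | ((x & 3) << (2 * (k - 1)))
    return ((x << 2) & ((1 << (2 * k)) - 1)) | (x >> (2 * (k - 1)))

def qshuffle(seq, right=True):
    """QShuffle layer: S'[x] = S[qrotate(x, k)] over a length-4^k sequence."""
    k = (len(seq).bit_length() - 1) // 2
    return [seq[qrotate(x, k, right)] for x in range(len(seq))]

flat = z_order_indices(4)                 # Z-order flatten of a 4 x 4 grid
print(flat)                               # [0, 1, 4, 5, 2, 3, 6, 7, ...]
print(qshuffle(list(range(16))))          # one QShuffle of a 16-element sequence
```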
Matrix Shuffle-Exchange Networks for Hard 2D Tasks | (paper text identical to the entry above) | This work proposes the Neural Shuffle-Exchange Network to capture both local and global dependencies for 2D data. The idea extends the 1D Neural Shuffle-Exchange Network to its 2D application. The proposed method first converts 2D data to 1D following the Z-order, then applies several Quaternary Switch and Quaternary Shuffle layers, and finally converts the data back to 2D space. The experimental results show that the proposed method can obtain better performance with reasonable computational cost. | SP:05e0d4b6ccaac1bdf0ffa78bad02722d4dfd4659
What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space | 1 INTRODUCTION . Deep neural networks ( DNNs ) are a family of powerful models that have demonstrated superior learning capabilities in a wide range of applications such as image classification , object detection and natural language processing . However , DNNs are often applied as a black box with limited understanding of what the model has learned from the data . Existing understandings about DNNs have mostly been developed in the deep representation space or using the attention map . DNNs are known to be able to learn high quality representations ( Donahue et al. , 2014 ) , and the representations are well associated with the attention map of the model on the inputs ( Zhou et al. , 2016 ; Selvaraju et al. , 2016 ) . It has also been found that DNNs trained on high resolution images like ImageNet are biased towards texture ( Geirhos et al. , 2019 ) . While these works have significantly contributed to the understanding of DNNs , a method that can intuitively visualize what DNNs learn for each class in the input space ( rather than the deep representation space ) is still missing . Recently , the above understandings have been challenged by the vulnerabilities of DNNs to backdoor ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ) and adversarial attacks ( Gu et al. , 2017 ; Chen et al. , 2017 ) . The backdoor vulnerability is believed to be caused by the preference of learning high frequency patterns ( Chen et al. , 2017 ; Liu et al. , 2020 ; Wang et al. , 2020 ) . Nevertheless , no existing method is able to reliably reveal the backdoor patterns , even though it has been well learned into the backdoored model . Adversarial attacks can easily fool state-of-the-art DNNs by either sample-wise ( Goodfellow et al. , 2016 ) or universal ( Moosavi-Dezfooli et al. , 2017 ) adversarial perturbations . One recent explanation for the adversarial vulnerability is that , besides robust features , DNNs also learn useful ( to the prediction ) yet non-robust features which are sensitive to small perturbations ( Ilyas et al. , 2019 ) . Adversarial training , one state-of-the-art adversarial defense method , has been shown can train DNNs to learn sample-wise robust features ( Madry et al. , 2018 ; Ilyas et al. , 2019 ) . However , it is still not clear if adversarially trained DNNs can learn a robust pattern for each class . In this paper , we focus on image classification tasks and propose a visualization method that can reveal the pattern learned by DNNs for each class in the input space . Different from sample-wise visualization methods like attention maps , we aim to reveal the knowledge ( or pattern ) learned by DNNs for each class . Moreover , we reveal these patterns in the input space rather than the deep representation space . This is because input space patterns are arguably much easier to interpret . Furthermore , we are interested in a visualization method that can provide new insights into the backdoor and adversarial vulnerabilities of DNNs , both of which are input space vulnerabilities ( Szegedy et al. , 2014 ; Ma et al. , 2018 ) . Given a target class , a canvas image , and a subset of images from the nontarget classes , our method searches for a single pattern ( a set of pixels ) from the canvas image that is highly predictive of the target class . In other words , when the pattern is attached to images from any other ( i.e . nontarget ) classes , the model will consistently predict them as the target class . 
Figure 1 illustrates a few examples of the class-wise patterns revealed by our method for DNNs trained on natural ( clean ) CIFAR-10 ( Krizhevsky , 2009 ) and ImageNet ( Deng et al. , 2009 ) datasets . In summary , our main contributions are : 1 ) We propose a visualization method to reveal the classwise patterns learned by DNNs in the input space , and show the difference to attention maps and universal adversarial perturbations . 2 ) With the proposed visualization method , we show that DNNs trained on natural datasets can learn a consistent and predictive pattern for each class , and the pattern contains abstract shapes along with some texture . This sheds new lights on the current texture bias understanding of DNNs . 3 ) When applied on backdoored DNNs , our method can reveal the trigger patterns learned by the model from the poisoned dataset . Our method can serve as an effective tool to assist the detection of backdoored models . 4 ) The existence of class-wise predictive patterns in the input space indicates that even DNNs trained on clean data can have backdoors , and the class-wise patterns identified by our method can be readily applied to “ backdoor ” attack the model . 5 ) By examining the patterns learned by DNNs trained in the adversarially setting , we find that adversarially trained models learn more simplified shape patterns . 2 RELATED WORK . General Understandings of DNNs . DNNs are known to learn more complex and higher quality representations than traditional models . Features learned at intermediate layers of AlexNet have been found to contain both simple patterns like lines and corners and high level shapes ( Donahue et al. , 2014 ) . These features have been found crucial for the superior performance of DNNs ( He et al. , 2015 ) . The exceptional representation learning capability of DNNs has also been found related to structures of the networks like depth and width ( Safran & Shamir , 2017 ; Telgarsky , 2016 ) . One recent work found that ImageNet-trained DNNs are biased towards texture features ( Geirhos et al. , 2019 ) . Attention maps have also been used to develop better understandings of the decisions made by DNNs on a given input ( Simonyan et al. , 2014 ; Springenberg et al. , 2015 ; Zeiler & Fergus , 2014 ; Gan et al. , 2015 ) . The Grad-CAM technique proposed by Selvaraju et al . ( 2016 ) utilizes input gradients to produce intuitive attention maps . Whilst these works mostly focus on deep representations or sample-wise attention , an understanding and visualization of what DNNs learn for each class in the input space is still missing from the current literature . Understanding Vulnerabilities of DNNs . Recent works have found that DNNs are vulnerable to backdoor and adversarial attacks . A backdoor attack implants a backdoor trigger into a victim model by injecting the trigger into a small proportion of training data ( Gu et al. , 2017 ; Liu et al. , 2018 ) . The model trained on poisoned dataset will learn a noticeable correlation between the trigger and a target label . A backdoored model behaves normally on clean test data , yet consistently predict a target ( incorrect ) label whenever the trigger appears in a test example ( Zhao et al. , 2020 ; Yao et al. , 2019 ; Liu et al. , 2020 ) . This is believed to be caused by the fact that DNNs tend to learn more high frequency ( e.g . backdoor ) patterns ( Chen et al. , 2017 ; Liu et al. , 2020 ; Wang et al. , 2020 ) . 
However, it is still unclear whether DNNs can learn such patterns from natural (clean) data. Moreover, despite a few attempts (Wang et al., 2019; Qiao et al., 2019), the trigger pattern still cannot be reliably revealed, even though it has been well learned by the backdoored model. DNNs can also be easily fooled by small, imperceptible adversarial perturbations into making incorrect predictions (Szegedy et al., 2014; Goodfellow et al., 2016). Adversarial perturbations can be either sample-wise (Madry et al., 2018) or universal (Moosavi-Dezfooli et al., 2017). This has been found to be caused by learning useful (to prediction) but nonrobust (to adversarial perturbation) features (Ilyas et al., 2019). Meanwhile, adversarial training has been shown to learn more robust features and deliver effective defenses (Madry et al., 2018). However, existing understandings of adversarial training are established based on sample-wise attention (Ilyas et al., 2019). It is still unclear, from the class-wise perspective, what robust or nonrobust input patterns look like. In this paper, we propose a method to reveal the patterns (e.g., backdoor or adversarially robust/nonrobust) learned by DNNs for each class.

3 PROPOSED VISUALIZATION METHOD. In this section, we first define the input-space class-wise pattern searching problem, then introduce our proposed searching method.

Motivation and Intuition. We focus on image classification with deep neural networks. We denote the training and test datasets as Dtrain and Dtest, respectively. Given a DNN model f trained on a K-class Dtrain and a target class y ∈ {1, · · · , K}, our goal is to find an input-space pattern, i.e., a small set of pixels, that is extremely predictive of the target class. A highly predictive pattern of a class can largely capture the knowledge the model learned for the class. In a backdoor attack, a predictive (i.e., backdoor trigger) pattern learned by the model can even control the model's prediction. Intuitively, a predictive pattern of a target class should be able to make the model consistently predict the target class whenever it is attached to images from any other (i.e., nontarget) class.

Class-wise Pattern Searching. For a target class y, our method searches for a predictive pattern py from a canvas image xc, based on a small test subset Dn of images from the nontarget classes (i.e., Dn ⊂ Dtest). The canvas image xc is the image from which the pattern (a set of pixels) is extracted. The search is done via an optimization process based on a mixed input between the canvas image xc and an image xn ∈ Dn. The mixed input x̃ is defined as follows:

x̃ = m ∗ xc + (1 − m) ∗ xn,   (1)

where m is a mask that has the same size as either xc or xn, and mij ≥ 0. The mixed input image is labeled as the target class y regardless of its original class. This mixing strategy is reminiscent of the mixup (Zhang et al., 2018) data augmentation algorithm. However, we do not mix the class labels, and our purpose is pattern optimization rather than data augmentation. During the searching process, the mask is iteratively updated to minimize the following loss:

L = − log fy(x̃) + (α/n) ‖m‖1,   (2)

where fy is the network's probability output with respect to the target class y, ‖·‖1 is the L1 norm, α is a parameter that balances the two loss terms, and n is the size of the input image as well as the mask.
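Equations (1) and (2) translate directly into a differentiable objective over the mask. The snippet below computes the mixed input and the regularized loss for one (canvas, non-target image) pair; it is a schematic reading of the two equations, with the model f assumed to output logits and all names and shapes chosen for illustration rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def pattern_loss(f, mask, x_canvas, x_nontarget, target_class, alpha):
    """Mix canvas and non-target image through the mask (Eq. (1)) and
    penalize both the target-class log-loss and the mask's L1 norm (Eq. (2))."""
    m = mask.clamp(0, 1)
    x_mix = m * x_canvas + (1 - m) * x_nontarget           # Eq. (1)
    logits = f(x_mix.unsqueeze(0))
    ce = F.cross_entropy(logits, torch.tensor([target_class]))
    sparsity = alpha * m.abs().sum() / m.numel()           # (alpha / n) * ||m||_1
    return ce + sparsity                                   # Eq. (2)
```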
The first loss term is the commonly used cross entropy loss . The second term increases the sparsity of mask as we are interested in simple patterns with a small number of highly predictive pixels . During the search process , we pair the canvas image xc randomly with images from Dn , and iteratively update the mask m using standard Stochastic Gradient Decent ( SGD ) while keeping the model parameters unchanged . At each iteration , the mask m will also be clipped into [ 0 , 1 ] . Once a mask is learned , we further clip the values in the mask that are smaller than γ to zero , larger than γ to one . We denote this clipped mask by mγ . We then extract the pattern from the canvas image by py =mγ ∗ xc . The γ parameter can be flexibly determined in different applications . A large γ may lead to less predictive pattern while a small γ will produce more of a sample-wise pattern that overfits to the canvas image . The above search method is repeatedly applied to N canvas images to generate N patterns for each class . We then select the pattern that has the lowest loss value as the final pattern of the class . This additional step is to find the most predictive pattern by exploring different canvases . The complete procedure of our method is described in Algorithm 1 in Appendix A. Canvas Sampling . We propose four different sampling strategies for the selection of the N canvas images : positive sampling , negative sampling , random sampling and white canvas . Positive sampling selects the top-N confident images from the target class according to the logits output of model f . Negative sampling selects the top-N most confidently misclassified images from any nontarget class into the target class . The random sampling randomly chooses N images from the target class y . The white canvas simply uses an image with all white pixels as the canvas . Both positive and the negative sampling aim to find the most well-learned examples by the model , but from different perspectives : well-learned correctly ( e.g . positive ) vs. well-learned incorrectly ( e.g . negative ) . The white canvas is interesting since the pattern found from the white canvas will have the texture “ removed ” , which is useful for scenarios where only the shape features are of interest . The patterns found based on different canvases are compared in Figure 4 . After applying our method on each class , we can obtain a set of class-wise patterns : P = { p1 , · · · , pK } . This set of predictive patterns can revel the knowledge learned by model f for each class from a unique perspective . Why is it Class-wise ? At first sight , one might wonder if the discovered pattern could be samplewise , rather than class-wise , given the use of the canvas sample . Note that , however , even though we are using a single sample as a canvas , the pattern found by the optimization algorithm is dependent on how the model has learnt the entire class , in terms of its loss . This is particularly evident in the case of the all white canvas , which bears no relation to any input sample . Hence our designation of the pattern as being “ class-wise ” . While our method can find consistent and predictive classwise patterns in the experiments , it might still be extendable . For example , using multiple positive canvas images at the same time , using noise rather than the non-target images , or using universal adversarial perturbation ( UAP ) ( Moosavi-Dezfooli et al. , 2017 ) but in a more controlled manner . 
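Putting the search procedure together (SGD on the mask with per-step clipping to [0, 1], hard thresholding at γ, and selection of the best canvas by final loss, as described above), a compact sketch looks as follows. It relies on the hypothetical `pattern_loss` helper from the previous snippet; the optimizer choice, step counts, and mask initialization are illustrative assumptions, and in practice the canvas list would come from one of the four sampling strategies described above.

```python
import torch

def search_class_pattern(f, canvases, nontarget_images, target_class,
                         alpha=1e-2, gamma=0.3, steps=500, lr=0.1):
    """Return the class-wise pattern p_y = m_gamma * x_c with the lowest loss."""
    best = None
    for x_c in canvases:                                   # N candidate canvases
        mask = torch.zeros_like(x_c, requires_grad=True)
        opt = torch.optim.SGD([mask], lr=lr)
        for step in range(steps):
            x_n = nontarget_images[step % len(nontarget_images)]
            loss = pattern_loss(f, mask, x_c, x_n, target_class, alpha)
            opt.zero_grad(); loss.backward(); opt.step()
            with torch.no_grad():
                mask.clamp_(0, 1)                          # keep the mask in [0, 1]
        m_gamma = (mask.detach() >= gamma).float()         # hard threshold at gamma
        pattern = m_gamma * x_c
        if best is None or loss.item() < best[0]:
            best = (loss.item(), pattern)
    return best[1]
```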
We leave further explorations of these methods as our future work . Difference to Universal Adversarial Perturbation . UAP can also be applied to craft class-wise adversarial patterns that can make the model predict an adversarial target class . In this view , both UAP and our method find predictive patterns to the target class . However , the two methods work in different ways . By fooling the network , UAP explores the unlearned space ( low-probability “ pockets ” ) of the network ( Szegedy et al. , 2014 ; Ma et al. , 2018 ) . In contrast , our method is a searching ( rather than perturbing ) method that does not rely on adversarial perturbations . Thus , it has to find the optimal pixel locations in the input space that are well-learned by the model for the pattern to be predictive of the class . In Section 4.2 and Appendix E , we have experiments showing the difference of the patterns found by class-wise UAP and our method . | This paper proposes a visualization method to reveal the class-specific discriminative patterns of DNNs in the input space. When added to images from another class, such patterns can lead the DNN to classify the images into the pattern's class. From the experimental results, the authors conjecture that images trained on natural data can have backdoors. It also claims that the method reveals the trigger patterns of backdoor attacks, that adversarially trained models learn simplified shape patterns, but an intentionally-perturbed robust dataset improves model robustness by sacrificing its ability to represent shapes. | SP:244bbe4a2152e42292a91a9a05290205ee8deb5f |
What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space | 1 INTRODUCTION . Deep neural networks ( DNNs ) are a family of powerful models that have demonstrated superior learning capabilities in a wide range of applications such as image classification , object detection and natural language processing . However , DNNs are often applied as a black box with limited understanding of what the model has learned from the data . Existing understandings about DNNs have mostly been developed in the deep representation space or using the attention map . DNNs are known to be able to learn high quality representations ( Donahue et al. , 2014 ) , and the representations are well associated with the attention map of the model on the inputs ( Zhou et al. , 2016 ; Selvaraju et al. , 2016 ) . It has also been found that DNNs trained on high resolution images like ImageNet are biased towards texture ( Geirhos et al. , 2019 ) . While these works have significantly contributed to the understanding of DNNs , a method that can intuitively visualize what DNNs learn for each class in the input space ( rather than the deep representation space ) is still missing . Recently , the above understandings have been challenged by the vulnerabilities of DNNs to backdoor ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ) and adversarial attacks ( Gu et al. , 2017 ; Chen et al. , 2017 ) . The backdoor vulnerability is believed to be caused by the preference of learning high frequency patterns ( Chen et al. , 2017 ; Liu et al. , 2020 ; Wang et al. , 2020 ) . Nevertheless , no existing method is able to reliably reveal the backdoor patterns , even though it has been well learned into the backdoored model . Adversarial attacks can easily fool state-of-the-art DNNs by either sample-wise ( Goodfellow et al. , 2016 ) or universal ( Moosavi-Dezfooli et al. , 2017 ) adversarial perturbations . One recent explanation for the adversarial vulnerability is that , besides robust features , DNNs also learn useful ( to the prediction ) yet non-robust features which are sensitive to small perturbations ( Ilyas et al. , 2019 ) . Adversarial training , one state-of-the-art adversarial defense method , has been shown can train DNNs to learn sample-wise robust features ( Madry et al. , 2018 ; Ilyas et al. , 2019 ) . However , it is still not clear if adversarially trained DNNs can learn a robust pattern for each class . In this paper , we focus on image classification tasks and propose a visualization method that can reveal the pattern learned by DNNs for each class in the input space . Different from sample-wise visualization methods like attention maps , we aim to reveal the knowledge ( or pattern ) learned by DNNs for each class . Moreover , we reveal these patterns in the input space rather than the deep representation space . This is because input space patterns are arguably much easier to interpret . Furthermore , we are interested in a visualization method that can provide new insights into the backdoor and adversarial vulnerabilities of DNNs , both of which are input space vulnerabilities ( Szegedy et al. , 2014 ; Ma et al. , 2018 ) . Given a target class , a canvas image , and a subset of images from the nontarget classes , our method searches for a single pattern ( a set of pixels ) from the canvas image that is highly predictive of the target class . In other words , when the pattern is attached to images from any other ( i.e . nontarget ) classes , the model will consistently predict them as the target class . 
Figure 1 illustrates a few examples of the class-wise patterns revealed by our method for DNNs trained on natural ( clean ) CIFAR-10 ( Krizhevsky , 2009 ) and ImageNet ( Deng et al. , 2009 ) datasets . In summary , our main contributions are : 1 ) We propose a visualization method to reveal the classwise patterns learned by DNNs in the input space , and show the difference to attention maps and universal adversarial perturbations . 2 ) With the proposed visualization method , we show that DNNs trained on natural datasets can learn a consistent and predictive pattern for each class , and the pattern contains abstract shapes along with some texture . This sheds new lights on the current texture bias understanding of DNNs . 3 ) When applied on backdoored DNNs , our method can reveal the trigger patterns learned by the model from the poisoned dataset . Our method can serve as an effective tool to assist the detection of backdoored models . 4 ) The existence of class-wise predictive patterns in the input space indicates that even DNNs trained on clean data can have backdoors , and the class-wise patterns identified by our method can be readily applied to “ backdoor ” attack the model . 5 ) By examining the patterns learned by DNNs trained in the adversarially setting , we find that adversarially trained models learn more simplified shape patterns . 2 RELATED WORK . General Understandings of DNNs . DNNs are known to learn more complex and higher quality representations than traditional models . Features learned at intermediate layers of AlexNet have been found to contain both simple patterns like lines and corners and high level shapes ( Donahue et al. , 2014 ) . These features have been found crucial for the superior performance of DNNs ( He et al. , 2015 ) . The exceptional representation learning capability of DNNs has also been found related to structures of the networks like depth and width ( Safran & Shamir , 2017 ; Telgarsky , 2016 ) . One recent work found that ImageNet-trained DNNs are biased towards texture features ( Geirhos et al. , 2019 ) . Attention maps have also been used to develop better understandings of the decisions made by DNNs on a given input ( Simonyan et al. , 2014 ; Springenberg et al. , 2015 ; Zeiler & Fergus , 2014 ; Gan et al. , 2015 ) . The Grad-CAM technique proposed by Selvaraju et al . ( 2016 ) utilizes input gradients to produce intuitive attention maps . Whilst these works mostly focus on deep representations or sample-wise attention , an understanding and visualization of what DNNs learn for each class in the input space is still missing from the current literature . Understanding Vulnerabilities of DNNs . Recent works have found that DNNs are vulnerable to backdoor and adversarial attacks . A backdoor attack implants a backdoor trigger into a victim model by injecting the trigger into a small proportion of training data ( Gu et al. , 2017 ; Liu et al. , 2018 ) . The model trained on poisoned dataset will learn a noticeable correlation between the trigger and a target label . A backdoored model behaves normally on clean test data , yet consistently predict a target ( incorrect ) label whenever the trigger appears in a test example ( Zhao et al. , 2020 ; Yao et al. , 2019 ; Liu et al. , 2020 ) . This is believed to be caused by the fact that DNNs tend to learn more high frequency ( e.g . backdoor ) patterns ( Chen et al. , 2017 ; Liu et al. , 2020 ; Wang et al. , 2020 ) . 
However , it is still unclear whether DNNs can learn such patterns from natural ( clean ) data . Moreover , despite a few attempts ( Wang et al. , 2019 ; Qiao et al. , 2019 ) , the trigger pattern still can not be reliably revealed , even though it has been well learned by the backdoored model . DNNs can also be easily fooled by small , imperceptible adversarial perturbations into making incorrect predictions ( Szegedy et al. , 2014 ; Goodfellow et al. , 2016 ) . Adversarial perturbations can be either sample-wise ( Madry et al. , 2018 ) or universal ( Moosavi-Dezfooli et al. , 2017 ) . This has been found to be caused by learning useful ( to prediction ) but nonrobust ( to adversarial perturbation ) features ( Ilyas et al. , 2019 ) . Meanwhile , adversarial training has been shown to learn more robust features and deliver effective defenses ( Madry et al. , 2018 ) . However , existing understandings of adversarial training are established based on sample-wise attention ( Ilyas et al. , 2019 ) . It still unclear , from the class-wise perspective , what robust or nonrobust input patterns look like . In this paper , we will propose a method to reveal the patterns ( e.g . backdoor or adversarially robust/nonrobust ) learned by DNNs for each class . 3 PROPOSED VISUALIZATION METHOD . In this section , we first define the input space class-wise pattern searching problem , then introduce our proposed searching method . Motivation and Intuition . We focus on image classification with deep neural networks . We denote the training and test dataset as Dtrain and Dtest , respectively . Given a DNN model f trained on a K-class Dtrain and a target class y ∈ { 1 , · · · , K } , our goal is to find an input space pattern , i.e , a small set of pixels , that are extremely predictive of the target class . A highly predictive pattern of a class can largely capture the knowledge the model learned for the class . In backdoor attack , a predictive ( i.e . backdoor trigger ) pattern learned by the model can even control the model ’ s prediction . Intuitively , a predictive pattern of a target class should be able to make the model consistently predict the target class whenever it is attached to images from any other ( e.g . nontarget ) classes . Class-wise Pattern Searching . For a target class y , our method searches for a predictive pattern py from a canvas image xc , based on a small test subset Dn of images from the nontarget classes ( i.e . Dn ⊂ Dtest ) . The canvas image xc is the image where the pattern ( a set of pixels ) is extracted . The search is done via an optimization process based on a mixed input between the canvas image xc and an image xn ∈ Dn . The mixed input x̃ is defined as follows : x̃ =m ∗ xc + ( 1−m ) ∗ xn , ( 1 ) where m is a mask that has the same size as either xc or xn , and mij ≥ 0 . The mixed input image is labeled as the target class y regardless of its original class . This mixing strategy is reminiscent of the mixup ( Zhang et al. , 2018 ) data augmentation algorithm . However , we do not mix the class labels and our purpose is for pattern optimization rather than data augmentation . During the searching process , the mask is iteratively updated to minimize the following loss : L = − log fy ( x̃ ) + α 1 n ‖m‖1 , ( 2 ) where , fy is network ’ s probability output with respect to target class y , ‖·‖1 is the L1 norm , α is a parameter that balances the two loss terms , and n is the size of the input image as well as the mask . 
The first loss term is the commonly used cross entropy loss . The second term increases the sparsity of the mask as we are interested in simple patterns with a small number of highly predictive pixels . During the search process , we pair the canvas image xc randomly with images from Dn , and iteratively update the mask m using standard Stochastic Gradient Descent ( SGD ) while keeping the model parameters unchanged . At each iteration , the mask m will also be clipped to [ 0 , 1 ] . Once a mask is learned , we further clip the values in the mask that are smaller than γ to zero , and those larger than γ to one . We denote this clipped mask by mγ . We then extract the pattern from the canvas image by py = mγ ∗ xc . The γ parameter can be flexibly determined in different applications . A large γ may lead to a less predictive pattern while a small γ will produce more of a sample-wise pattern that overfits to the canvas image . The above search method is repeatedly applied to N canvas images to generate N patterns for each class . We then select the pattern that has the lowest loss value as the final pattern of the class . This additional step is to find the most predictive pattern by exploring different canvases . The complete procedure of our method is described in Algorithm 1 in Appendix A. Canvas Sampling . We propose four different sampling strategies for the selection of the N canvas images : positive sampling , negative sampling , random sampling and white canvas . Positive sampling selects the top-N confident images from the target class according to the logits output of model f . Negative sampling selects the top-N most confidently misclassified images from any nontarget class into the target class . Random sampling randomly chooses N images from the target class y . The white canvas simply uses an image with all white pixels as the canvas . Both positive and negative sampling aim to find the examples best learned by the model , but from different perspectives : well-learned correctly ( e.g . positive ) vs. well-learned incorrectly ( e.g . negative ) . The white canvas is interesting since the pattern found from the white canvas will have the texture “ removed ” , which is useful for scenarios where only the shape features are of interest . The patterns found based on different canvases are compared in Figure 4 . After applying our method on each class , we can obtain a set of class-wise patterns : P = { p1 , · · · , pK } . This set of predictive patterns can reveal the knowledge learned by model f for each class from a unique perspective . Why is it Class-wise ? At first sight , one might wonder if the discovered pattern could be sample-wise , rather than class-wise , given the use of the canvas sample . Note that , however , even though we are using a single sample as a canvas , the pattern found by the optimization algorithm is dependent on how the model has learnt the entire class , in terms of its loss . This is particularly evident in the case of the all white canvas , which bears no relation to any input sample . Hence our designation of the pattern as being “ class-wise ” . While our method can find consistent and predictive class-wise patterns in the experiments , it might still be extendable . For example , one could use multiple positive canvas images at the same time , use noise rather than the non-target images , or use universal adversarial perturbation ( UAP ) ( Moosavi-Dezfooli et al. , 2017 ) in a more controlled manner .
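To make the search procedure concrete, the following is a minimal, hypothetical PyTorch-style sketch of the mask optimization in Eqs. (1)–(2) for a single canvas image; the model f, the canvas x_c, the non-target loader, the mask initialization, and all hyperparameters shown here are placeholders rather than the authors' exact settings, and the outer loop over N canvases is omitted.

```python
import torch
import torch.nn.functional as F

def search_class_pattern(model, x_c, nontarget_loader, target_y,
                         alpha=1e-3, gamma=0.5, lr=0.1, steps=500):
    """Search a class-wise pattern on a single canvas x_c of shape [C, H, W]."""
    model.eval()
    m = torch.zeros_like(x_c, requires_grad=True)      # mask of the same size as the canvas
    opt = torch.optim.SGD([m], lr=lr)
    n = m.numel()
    for _ in range(steps):
        x_n, _ = next(iter(nontarget_loader))           # batch of non-target images [B, C, H, W]
        x_mix = m * x_c + (1.0 - m) * x_n                # Eq. (1): mixed input
        logits = model(x_mix)
        target = torch.full((x_n.size(0),), target_y, dtype=torch.long)
        loss = F.cross_entropy(logits, target) + alpha * m.abs().sum() / n   # Eq. (2)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            m.clamp_(0.0, 1.0)                           # keep the mask in [0, 1]
    m_gamma = (m.detach() >= gamma).float()              # threshold the learned mask
    return m_gamma * x_c                                 # pattern p_y = m_gamma * x_c
```

Running this routine over several candidate canvases and keeping the lowest-loss pattern reproduces the selection step described above.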
We leave further explorations of these methods as our future work . Difference to Universal Adversarial Perturbation . UAP can also be applied to craft class-wise adversarial patterns that can make the model predict an adversarial target class . In this view , both UAP and our method find predictive patterns for the target class . However , the two methods work in different ways . By fooling the network , UAP explores the unlearned space ( low-probability “ pockets ” ) of the network ( Szegedy et al. , 2014 ; Ma et al. , 2018 ) . In contrast , our method is a searching ( rather than perturbing ) method that does not rely on adversarial perturbations . Thus , it has to find the optimal pixel locations in the input space that are well-learned by the model for the pattern to be predictive of the class . In Section 4.2 and Appendix E , we present experiments showing the difference between the patterns found by class-wise UAP and our method . | The paper proposes a simple method for visualizing the patterns learned by deep neural networks in the supervised classification setting. Informally, suppose you have an image x that is "representative" of the class y and let X be a set of images that belong to other classes. The authors propose an optimization problem that looks for a mask (i.e. set of pixels) along with values of those pixels such that when this pattern is added to any image in X, the model will predict the new image to have the label y. This optimization problem can be solved using iterative thresholding and one may control the level of sparsity as the authors studied. Despite its simplicity, it can reveal clear patterns, particularly on high resolution images, such as ImageNet. The authors, then, show how this method can be used to interpret neural networks, detect backdoor attacks during training, and verify robustness. | SP:244bbe4a2152e42292a91a9a05290205ee8deb5f
Optimism in Reinforcement Learning with Generalized Linear Function Approximation | ( H √ d3T ) whereH is the horizon , d is the dimensionality of the state-action features and T is the number of episodes . This is the first statistically and computationally efficient algorithm for reinforcement learning with generalized linear functions . 1 INTRODUCTION . We study episodic reinforcement learning problems with infinitely large state spaces , where the agent must use function approximation to generalize across states while simultaneously engaging in strategic exploration . Such problems form the core of modern empirical/deep-RL , but relatively little work focuses on exploration , and even fewer algorithms enjoy strong sample efficiency guarantees . On the theoretical side , classical sample efficiency results from the early 00s focus on “ tabular ” environments with small finite state spaces ( Kearns & Singh , 2002 ; Brafman & Tennenholtz , 2002 ; Strehl et al. , 2006 ) , but as these methods scale with the number of states , they do not address problems with infinite or large state spaces . While this classical work has inspired practically effective approaches for large state spaces ( Bellemare et al. , 2016 ; Osband et al. , 2016 ; Tang et al. , 2017 ) , these methods do not enjoy sample efficiency guarantees . More recent theoretical progress has produced provably sample efficient algorithms for complex environments where function approximation is required , but these algorithms are relatively impractical ( Krishnamurthy et al. , 2016 ; Jiang et al. , 2017 ) . In particular , these methods are computationally inefficient or rely crucially on strong dynamics assumptions ( Du et al. , 2019b ) . In this paper , with an eye toward practicality , we study a simple variation of Q-learning , where we approximate the optimal Q-function with a generalized linear model . The algorithm is appealingly simple : collect a trajectory by following the greedy policy corresponding to the current model , perform a dynamic programming back-up to update the model , and repeat . The key difference over traditional Q-learning-like algorithms is in the dynamic programming step . Here we ensure that the updated model is optimistic in the sense that it always overestimates the optimal Q-function . This optimism is essential for our guarantees . Optimism in the face of uncertainty is a well-understood and powerful algorithmic principle in shorthorizon ( e.g , . bandit ) problems , as well as in tabular reinforcement learning ( Azar et al. , 2017 ; Dann et al. , 2017 ; Jin et al. , 2018 ) . With linear function approximation , Yang & Wang ( 2019 ) and Jin et al . ( 2019 ) show that the optimism principle can also yield provably sample-efficient algorithms , when the environment dynamics satisfy certain linearity properties . Their assumptions are always satisfied in tabular problems , but are somewhat unnatural in settings where function approximation is required . Moreover as these assumptions are directly on the dynamics , it is unclear how their analysis can accommodate other forms of function approximation , including generalized linear models . In the present paper , we replace explicit dynamics assumptions with expressivity assumptions on the function approximator , and , by analyzing a similar algorithm to Jin et al . 
( 2019 ) , we show that the optimism principle succeeds under these strictly weaker assumptions.1 More importantly , the relaxed assumption facilitates moving beyond linear models , and we demonstrate this by providing the first practical and provably efficient RL algorithm with generalized linear function approximation . The paper is organized as follows : In Section 2 we formalize our setting , introduce the optimistic closure assumption , and discuss related assumptions in the literature . In Section 3 we study optimistic closure in detail and verify that it is strictly weaker than the recently proposed Linear MDP assumption . Our main algorithm and results are presented in Section 4 , with the main proof in Section A . We close with some final remarks and future directions in Section 5 . 2 PRELIMINARIES . We consider episodic reinforcement learning in a finite-horizon markov decision process ( MDP ) with possibly infinitely large state space S , finite action space A , initial distribution µ ∈ ∆ ( S ) , transition operator P : S × A → ∆ ( S ) , reward function R : S × A → ∆ ( [ 0 , 1 ] ) and horizon H . The agent interacts with the MDP in episodes and , in each episode , a trajectory ( s1 , a1 , r1 , s2 , a2 , r2 , . . . , sH , aH , rH ) is generated where s1 ∼ µ , for h > 1 we have sh ∼ P ( · | sh−1 , ah−1 ) , rh ∼ R ( sh , ah ) , and actions a1 : H are chosen by the agent . For normalization , we assume that ∑H h=1 rh ∈ [ 0 , 1 ] almost surely . A ( deterministic , nonstationary ) policy π = ( π1 , · · · , πH ) consists of H mappings πh : S → A , where πh ( sh ) denotes the action to be taken at time point h if at state sh ∈ S The value function for a policy π is a collection of functions ( V π1 , . . . , V π H ) where V π h : S → R is the expected future reward the policy collects if it starts in a particular state at time point h. Formally , V πh ( s ) , E [ H∑ h′=h rh′ | sh = s , ah : H ∼ π ] . The value for a policy π is simply V π , Es1∼µ [ V π1 ( s1 ) ] , and the optimal value is V ? , maxπ V π , where the maximization is over all nonstationary policies . The typical goal is to find an approximately optimal policy , and in this paper , we measure performance by the regret accumulated over T episodes , Reg ( T ) , TV ? − E [ T∑ t=1 H∑ h=1 rh , t ] . Here rh , t is the reward collected by the agent at time point h in the tth episode . We seek algorithms with regret that is sublinear in T , which demonstrates the agent ’ s ability to act near-optimally over the long run . 2.1 Q-VALUES AND FUNCTION APPROXIMATION . For any policy π , the state-action value function , or the Q-function is a sequence of mappings Qπ = ( Qπ1 , . . . , Q π H ) where Q π h : S ×A → R is defined as Qπh ( s , a ) , E [ H∑ h′=h rh′ | sh = s , ah = a , ah+1 : H ∼ π ] . The optimal Q-function is Q ? h , Q π ? h where π ? , argmaxπ V π is the optimal policy . In the value-based function approximation setting , we use a function class G to model Q ? . In this paper , we always take G to be a class of generalized linear models ( GLMs ) , defined as follows : Let d ∈ N be a dimensionality parameter and let Bd , { x ∈ Rd : ‖x‖2 ≤ 1 } be the ` 2 ball in Rd . 1This is also mentioned as a remark in Jin et al . ( 2019 ) . Definition 1 . For a known feature map φ : S × A → Bd and a known link function f : [ −1 , 1 ] 7→ [ −1 , 1 ] the class of generalized linear models is G , { ( s , a ) 7→ f ( 〈φ ( s , a ) , θ〉 ) : θ ∈ Bd } . As is standard in the literature ( Filippi et al. , 2010 ; Li et al. 
, 2017 ) , we assume the link function satisfies certain regularity conditions . Assumption 1. f ( · ) is either monotonically increasing or decreasing . Furthermore , there exist absolute constants 0 < κ < K < ∞ and M < ∞ such that κ ≤ |f ′ ( z ) | ≤ K and |f ′′ ( z ) | ≤M for all |z| ≤ 1 . For intuition , two example link functions are the identity map f ( z ) = z and the logistic map f ( z ) = 1/ ( 1 + e−z ) with bounded z . It is easy to verify that both of these maps satisfy Assumption 1 . 2.2 EXPRESSIVITY ASSUMPTIONS : REALIZABILITY AND OPTIMISTIC CLOSURE . To obtain sample complexity guarantees that scale polynomially with problem parameters in the function approximation setting , it is necessary to posit expressivity assumptions on the function class G ( Krishnamurthy et al. , 2016 ; Du et al. , 2019a ) . The weakest such condition is realizability , which posits that the optimal Q function is in G , or at least well-approximated by G. Realizability alone suffices for provably efficient algorithms in the “ contextual bandits ” setting where H = 1 ( Li et al. , 2017 ; Filippi et al. , 2010 ; Abbasi-Yadkori et al. , 2011 ) , but it does not seem to be sufficient whenH > 1 . Indeed in these settings it is common to make stronger expressivity assumptions ( Chen & Jiang , 2019 ; Yang & Wang , 2019 ; Jin et al. , 2019 ) . Following these works , our main assumption is a closure property of the Bellman update operator Th . This operator has type Th : ( S ×A → R ) → ( S ×A → R ) and is defined for all s ∈ S , a ∈ A as Th ( Q ) ( s , a ) , E [ rh + VQ ( sh+1 ) | sh = s , ah = a ] , VQ ( s ) , max a∈A Q ( s , a ) . The Bellman update operator for time point H is simply TH ( Q ) ( s , a ) , E [ rH | sH = s , aH = a ] , which is degenerate . To state the assumption , we must first define the enlarged function class Gup . For a d × d matrix A , A 0 denotes that A is positive semi-definite . For a positive semi-definite matrix A , ‖A‖op is the matrix operator norm , which is just the largest eigenvalue , and ‖x‖A , √ x > Ax is the matrix Mahalanobis seminorm . For a fixed constant Γ ∈ R+ that we will set to be polynomial in d and log ( T ) , define Gup , { ( s , a ) 7→ 1 ∧ f ( 〈φ ( s , a ) , θ〉 ) + γ ‖φ ( s , a ) ‖A : θ ∈ Bd , A 0 , ‖A‖op ≤ 1 } , Here we use a∧b , min { a , b } . The class Gup contains G in addition to all possible upper confidence bounds that arise from solving least squares regression problems using the class G. We now state our main expressivity assumption , which we call optimistic closure . Assumption 2 ( Optimistic closure ) . For any 1 ≤ h < H and g ∈ Gup , we have Th ( g ) ∈ G. In words , when we perform a Bellman backup on any upper confidence bound function for time point h + 1 , we obtain a generalized linear function at time h. While this property seems quite strong , we note that a similar notion is mentioned informally in Jin et al . ( 2019 ) and that related closure-type assumptions are common in the literature ( see Section 2.3 for detailed discussion ) . More importantly , we will prove in Section 3 that optimistic closure is actually strictly weaker than previous assumptions used in our RL setting where exploration is required . Before turning to these discussions , we mention two basic properties of optimistic closure . Fact 1 ( Optimistic closure and realizability ) . Optimistic closure implies that Q ? ∈ G ( realizability ) . Proof . We will solve for Q ? via dynamic programming , starting from time point H . 
In this case , the Bellman update operator is degenerate , and we start by observing that TH ( g ) ≡ Q ? H for all g. Consequently we have Q ? H ∈ G. Next , inductively we assume that we have Q ? h+1 ∈ G , which implies that Q ? h+1 ∈ Gup as we may take the same parameter θ and set A ≡ 0 . Then , by the standard Bellman fixed-point characterization , we know that Q ? h = Th ( Q ? h+1 ) , at which point Assumption 2 yields that Q ? h ∈ G. Fact 2 ( Optimistic closure in tabular settings ) . If S is finite and φ ( s , a ) = es , a is the standard-basis feature map , then under Assumption 1 we have optimistic closure . Proof . We simply verify that G contains all mappings from ( s , a ) 7→ [ 0 , 1 ] , at which point the result is immediate . To see why , observe that via Assumption 1 we know that f is invertible ( it is monotonic with derivative bounded from above and below ) . Then , note that any function ( s , a ) 7→ [ 0 , 1 ] can be written as a vector v ∈ [ 0 , 1 ] |S|×|A| . For such a vector v , if we define θs , a , f−1 ( vs , a ) we have that f ( 〈es , a , θ〉 ) = vs , a . Hence G contains all functions , so we trivially have optimistic closure . | This paper analyses an existing algorithm (LSVI-UCB) with generalized linear function approximation instead of conventional linear function approximation. Under this generalized linear setting, they propose a so-called “optimistic closure” assumption which is shown to be strictly weaker than the expressivity assumption in the conventional linear setting. The paper then proves that LSVI-UCB still enjoys sub-linear regret in the generalized linear setting with strictly weaker assumptions. The paper also derives a general error propagation through steps that do not require a closed-form expression of the empirical dynamic and reward functions as in the linear case; this could be applicable to general function approximations. | SP:b9477062862ad7bab901f295b471a254dcf78e1f |
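As a small numerical illustration of the enlarged class G_up defined in Section 2.2 above (a GLM prediction plus a Mahalanobis-seminorm bonus, truncated at 1), the sketch below evaluates such an optimistic value with a logistic link; the feature vector, parameter, bonus matrix, and bonus scale are placeholder inputs, and reading the truncation as applying to the whole sum is one interpretation of the definition.

```python
import numpy as np

def optimistic_value(phi, theta, A, gamma):
    """Evaluate an element of G_up at a single state-action feature vector phi."""
    link = lambda z: 1.0 / (1.0 + np.exp(-z))   # logistic link, one example satisfying Assumption 1
    base = link(phi @ theta)                    # f(<phi(s, a), theta>)
    bonus = gamma * np.sqrt(phi @ A @ phi)      # gamma * ||phi(s, a)||_A
    return min(1.0, base + bonus)               # 1 ∧ (f(<phi, theta>) + gamma * ||phi||_A)

# toy usage with ||phi||_2 <= 1, ||theta||_2 <= 1, A PSD with operator norm <= 1
rng = np.random.default_rng(0)
phi = rng.normal(size=4); phi /= np.linalg.norm(phi)
theta = rng.normal(size=4); theta /= np.linalg.norm(theta)
B = rng.normal(size=(4, 4)); A = B @ B.T / np.linalg.norm(B @ B.T, 2)
print(optimistic_value(phi, theta, A, gamma=0.1))
```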
Optimism in Reinforcement Learning with Generalized Linear Function Approximation | ( H √ d3T ) whereH is the horizon , d is the dimensionality of the state-action features and T is the number of episodes . This is the first statistically and computationally efficient algorithm for reinforcement learning with generalized linear functions . 1 INTRODUCTION . We study episodic reinforcement learning problems with infinitely large state spaces , where the agent must use function approximation to generalize across states while simultaneously engaging in strategic exploration . Such problems form the core of modern empirical/deep-RL , but relatively little work focuses on exploration , and even fewer algorithms enjoy strong sample efficiency guarantees . On the theoretical side , classical sample efficiency results from the early 00s focus on “ tabular ” environments with small finite state spaces ( Kearns & Singh , 2002 ; Brafman & Tennenholtz , 2002 ; Strehl et al. , 2006 ) , but as these methods scale with the number of states , they do not address problems with infinite or large state spaces . While this classical work has inspired practically effective approaches for large state spaces ( Bellemare et al. , 2016 ; Osband et al. , 2016 ; Tang et al. , 2017 ) , these methods do not enjoy sample efficiency guarantees . More recent theoretical progress has produced provably sample efficient algorithms for complex environments where function approximation is required , but these algorithms are relatively impractical ( Krishnamurthy et al. , 2016 ; Jiang et al. , 2017 ) . In particular , these methods are computationally inefficient or rely crucially on strong dynamics assumptions ( Du et al. , 2019b ) . In this paper , with an eye toward practicality , we study a simple variation of Q-learning , where we approximate the optimal Q-function with a generalized linear model . The algorithm is appealingly simple : collect a trajectory by following the greedy policy corresponding to the current model , perform a dynamic programming back-up to update the model , and repeat . The key difference over traditional Q-learning-like algorithms is in the dynamic programming step . Here we ensure that the updated model is optimistic in the sense that it always overestimates the optimal Q-function . This optimism is essential for our guarantees . Optimism in the face of uncertainty is a well-understood and powerful algorithmic principle in shorthorizon ( e.g , . bandit ) problems , as well as in tabular reinforcement learning ( Azar et al. , 2017 ; Dann et al. , 2017 ; Jin et al. , 2018 ) . With linear function approximation , Yang & Wang ( 2019 ) and Jin et al . ( 2019 ) show that the optimism principle can also yield provably sample-efficient algorithms , when the environment dynamics satisfy certain linearity properties . Their assumptions are always satisfied in tabular problems , but are somewhat unnatural in settings where function approximation is required . Moreover as these assumptions are directly on the dynamics , it is unclear how their analysis can accommodate other forms of function approximation , including generalized linear models . In the present paper , we replace explicit dynamics assumptions with expressivity assumptions on the function approximator , and , by analyzing a similar algorithm to Jin et al . 
( 2019 ) , we show that the optimism principle succeeds under these strictly weaker assumptions.1 More importantly , the relaxed assumption facilitates moving beyond linear models , and we demonstrate this by providing the first practical and provably efficient RL algorithm with generalized linear function approximation . The paper is organized as follows : In Section 2 we formalize our setting , introduce the optimistic closure assumption , and discuss related assumptions in the literature . In Section 3 we study optimistic closure in detail and verify that it is strictly weaker than the recently proposed Linear MDP assumption . Our main algorithm and results are presented in Section 4 , with the main proof in Section A . We close with some final remarks and future directions in Section 5 . 2 PRELIMINARIES . We consider episodic reinforcement learning in a finite-horizon markov decision process ( MDP ) with possibly infinitely large state space S , finite action space A , initial distribution µ ∈ ∆ ( S ) , transition operator P : S × A → ∆ ( S ) , reward function R : S × A → ∆ ( [ 0 , 1 ] ) and horizon H . The agent interacts with the MDP in episodes and , in each episode , a trajectory ( s1 , a1 , r1 , s2 , a2 , r2 , . . . , sH , aH , rH ) is generated where s1 ∼ µ , for h > 1 we have sh ∼ P ( · | sh−1 , ah−1 ) , rh ∼ R ( sh , ah ) , and actions a1 : H are chosen by the agent . For normalization , we assume that ∑H h=1 rh ∈ [ 0 , 1 ] almost surely . A ( deterministic , nonstationary ) policy π = ( π1 , · · · , πH ) consists of H mappings πh : S → A , where πh ( sh ) denotes the action to be taken at time point h if at state sh ∈ S The value function for a policy π is a collection of functions ( V π1 , . . . , V π H ) where V π h : S → R is the expected future reward the policy collects if it starts in a particular state at time point h. Formally , V πh ( s ) , E [ H∑ h′=h rh′ | sh = s , ah : H ∼ π ] . The value for a policy π is simply V π , Es1∼µ [ V π1 ( s1 ) ] , and the optimal value is V ? , maxπ V π , where the maximization is over all nonstationary policies . The typical goal is to find an approximately optimal policy , and in this paper , we measure performance by the regret accumulated over T episodes , Reg ( T ) , TV ? − E [ T∑ t=1 H∑ h=1 rh , t ] . Here rh , t is the reward collected by the agent at time point h in the tth episode . We seek algorithms with regret that is sublinear in T , which demonstrates the agent ’ s ability to act near-optimally over the long run . 2.1 Q-VALUES AND FUNCTION APPROXIMATION . For any policy π , the state-action value function , or the Q-function is a sequence of mappings Qπ = ( Qπ1 , . . . , Q π H ) where Q π h : S ×A → R is defined as Qπh ( s , a ) , E [ H∑ h′=h rh′ | sh = s , ah = a , ah+1 : H ∼ π ] . The optimal Q-function is Q ? h , Q π ? h where π ? , argmaxπ V π is the optimal policy . In the value-based function approximation setting , we use a function class G to model Q ? . In this paper , we always take G to be a class of generalized linear models ( GLMs ) , defined as follows : Let d ∈ N be a dimensionality parameter and let Bd , { x ∈ Rd : ‖x‖2 ≤ 1 } be the ` 2 ball in Rd . 1This is also mentioned as a remark in Jin et al . ( 2019 ) . Definition 1 . For a known feature map φ : S × A → Bd and a known link function f : [ −1 , 1 ] 7→ [ −1 , 1 ] the class of generalized linear models is G , { ( s , a ) 7→ f ( 〈φ ( s , a ) , θ〉 ) : θ ∈ Bd } . As is standard in the literature ( Filippi et al. , 2010 ; Li et al. 
, 2017 ) , we assume the link function satisfies certain regularity conditions . Assumption 1. f ( · ) is either monotonically increasing or decreasing . Furthermore , there exist absolute constants 0 < κ < K < ∞ and M < ∞ such that κ ≤ |f ′ ( z ) | ≤ K and |f ′′ ( z ) | ≤M for all |z| ≤ 1 . For intuition , two example link functions are the identity map f ( z ) = z and the logistic map f ( z ) = 1/ ( 1 + e−z ) with bounded z . It is easy to verify that both of these maps satisfy Assumption 1 . 2.2 EXPRESSIVITY ASSUMPTIONS : REALIZABILITY AND OPTIMISTIC CLOSURE . To obtain sample complexity guarantees that scale polynomially with problem parameters in the function approximation setting , it is necessary to posit expressivity assumptions on the function class G ( Krishnamurthy et al. , 2016 ; Du et al. , 2019a ) . The weakest such condition is realizability , which posits that the optimal Q function is in G , or at least well-approximated by G. Realizability alone suffices for provably efficient algorithms in the “ contextual bandits ” setting where H = 1 ( Li et al. , 2017 ; Filippi et al. , 2010 ; Abbasi-Yadkori et al. , 2011 ) , but it does not seem to be sufficient whenH > 1 . Indeed in these settings it is common to make stronger expressivity assumptions ( Chen & Jiang , 2019 ; Yang & Wang , 2019 ; Jin et al. , 2019 ) . Following these works , our main assumption is a closure property of the Bellman update operator Th . This operator has type Th : ( S ×A → R ) → ( S ×A → R ) and is defined for all s ∈ S , a ∈ A as Th ( Q ) ( s , a ) , E [ rh + VQ ( sh+1 ) | sh = s , ah = a ] , VQ ( s ) , max a∈A Q ( s , a ) . The Bellman update operator for time point H is simply TH ( Q ) ( s , a ) , E [ rH | sH = s , aH = a ] , which is degenerate . To state the assumption , we must first define the enlarged function class Gup . For a d × d matrix A , A 0 denotes that A is positive semi-definite . For a positive semi-definite matrix A , ‖A‖op is the matrix operator norm , which is just the largest eigenvalue , and ‖x‖A , √ x > Ax is the matrix Mahalanobis seminorm . For a fixed constant Γ ∈ R+ that we will set to be polynomial in d and log ( T ) , define Gup , { ( s , a ) 7→ 1 ∧ f ( 〈φ ( s , a ) , θ〉 ) + γ ‖φ ( s , a ) ‖A : θ ∈ Bd , A 0 , ‖A‖op ≤ 1 } , Here we use a∧b , min { a , b } . The class Gup contains G in addition to all possible upper confidence bounds that arise from solving least squares regression problems using the class G. We now state our main expressivity assumption , which we call optimistic closure . Assumption 2 ( Optimistic closure ) . For any 1 ≤ h < H and g ∈ Gup , we have Th ( g ) ∈ G. In words , when we perform a Bellman backup on any upper confidence bound function for time point h + 1 , we obtain a generalized linear function at time h. While this property seems quite strong , we note that a similar notion is mentioned informally in Jin et al . ( 2019 ) and that related closure-type assumptions are common in the literature ( see Section 2.3 for detailed discussion ) . More importantly , we will prove in Section 3 that optimistic closure is actually strictly weaker than previous assumptions used in our RL setting where exploration is required . Before turning to these discussions , we mention two basic properties of optimistic closure . Fact 1 ( Optimistic closure and realizability ) . Optimistic closure implies that Q ? ∈ G ( realizability ) . Proof . We will solve for Q ? via dynamic programming , starting from time point H . 
In this case , the Bellman update operator is degenerate , and we start by observing that TH ( g ) ≡ Q ? H for all g. Consequently we have Q ? H ∈ G. Next , inductively we assume that we have Q ? h+1 ∈ G , which implies that Q ? h+1 ∈ Gup as we may take the same parameter θ and set A ≡ 0 . Then , by the standard Bellman fixed-point characterization , we know that Q ? h = Th ( Q ? h+1 ) , at which point Assumption 2 yields that Q ? h ∈ G. Fact 2 ( Optimistic closure in tabular settings ) . If S is finite and φ ( s , a ) = es , a is the standard-basis feature map , then under Assumption 1 we have optimistic closure . Proof . We simply verify that G contains all mappings from ( s , a ) 7→ [ 0 , 1 ] , at which point the result is immediate . To see why , observe that via Assumption 1 we know that f is invertible ( it is monotonic with derivative bounded from above and below ) . Then , note that any function ( s , a ) 7→ [ 0 , 1 ] can be written as a vector v ∈ [ 0 , 1 ] |S|×|A| . For such a vector v , if we define θs , a , f−1 ( vs , a ) we have that f ( 〈es , a , θ〉 ) = vs , a . Hence G contains all functions , so we trivially have optimistic closure . | The authors studies an episodic MDP learning problem, where they propose to study an Optimistic Closure assumption which allows the Q function to be expressed as a generalized linear function plus a positive semi-definite quadratic form. They motivate the assumption by showing that the assumption allows the tabular MDP case to be modeled, and that the Optimistic Closure is in fact a strictly weaker assumption than the linear MDP assumption made in previous related works. The authors then proceed to the design and analysis of the LSVI-UCB algorithm, which involves estimating the the parameter of the GLM model by a ridge estimator and adding an optimistic exploration bonus to the Q function. The authors propose a regret bound for the algorithm. | SP:b9477062862ad7bab901f295b471a254dcf78e1f |
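Because the extraction above garbles some of the notation, the key objects of Section 2.2 can be restated in LaTeX; this is only a transcription of the surrounding text (writing the bonus constant as γ and reading the truncation as applying to the whole sum), not new material.

```latex
% Generalized linear model class (Definition 1)
\mathcal{G} = \bigl\{ (s,a) \mapsto f(\langle \phi(s,a), \theta \rangle) : \theta \in B_d \bigr\}

% Enlarged class of upper-confidence-bound functions
\mathcal{G}_{\mathrm{up}} = \bigl\{ (s,a) \mapsto 1 \wedge \bigl( f(\langle \phi(s,a), \theta \rangle)
    + \gamma \, \lVert \phi(s,a) \rVert_{A} \bigr) : \theta \in B_d,\ A \succeq 0,\ \lVert A \rVert_{\mathrm{op}} \le 1 \bigr\}

% Bellman update operator and optimistic closure (Assumption 2)
\mathcal{T}_h(Q)(s,a) = \mathbb{E}\bigl[ r_h + \max_{a'} Q(s_{h+1}, a') \mid s_h = s,\ a_h = a \bigr],
\qquad \mathcal{T}_h(g) \in \mathcal{G} \ \ \text{for all } 1 \le h < H \text{ and } g \in \mathcal{G}_{\mathrm{up}}.
```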
Learning Energy-Based Generative Models via Coarse-to-Fine Expanding and Sampling | 1 INTRODUCTION Recently , energy-based models ( EBMs ) ( Zhu et al. , 1998 ; LeCun et al. , 2006 ) parameterized by modern neural networks have drawn much attention from the deep learning communities . Successful applications with EBMs include generations of images ( Xie et al. , 2016 ; 2018b ; Du & Mordatch , 2019 ) , videos ( Xie et al. , 2017 ; 2019 ) , 3D volumetric shapes ( Xie et al. , 2018c ; 2020 ) , unordered point clouds ( Xie et al. , 2021a ) , texts ( Deng et al. , 2020 ) , molecules ( Ingraham et al. , 2018 ; Du et al. , 2019 ) , etc. , as well as image-to-image translation ( Xie et al. , 2021b ; c ) , out-of-distribution detection ( Liu et al. , 2020 ) and inverse optimal control ( Xu et al. , 2019 ) . EBMs are characterized by ( i ) Simplicity : The maximum likelihood learning of EBMs unifies representation and generation in a single model , and ( ii ) Explicitness : EBMs provide an explicit density distribution of data by training an energy function that assigns lower values to observed data and higher values to unobserved ones . However , it is still difficult to train an EBM to synthesize diverse and high-fidelity images . The maximum likelihood estimation ( MLE ) of EBMs requires the Markov chain Monte Carlo ( MCMC ) ( Liu , 2008 ; Barbu & Zhu , 2020 ) to sample from the model and then updates the model parameters according to the difference between those samples and the observed data . Such an “ analysis by synthesis ” ( Grenander et al. , 2007 ) learning scheme is challenging because the sampling step is neither efficient nor stable . In particular , when the energy function is multimodal due to the highly varied or high resolution training data , it is not easy for the MCMC chains to traverse the modes of the learned model . Fortunately , it is common knowledge that the manifold residing in a downsampled low-dimensional image space is smoother than that in the original high-dimensional counterpart . Thus , learning an EBM from low-dimensional data is much stabler and faster than learning from high-dimensional data in terms of convergence ( Odena et al. , 2017 ; Gao et al. , 2018 ) . Inspired by the above knowledge , we propose to train EBMs via a multistage coarse-to-fine expanding and sampling strategy ( CF-EBM ) . As shown in Figure 1 ( a ) , the approach starts with learning a coarselevel EBM on low resolution images and then smoothly transits to learn the finer-level EBM by adding new layers that take into account the higher resolution information as the learning progresses . The gradient-based short-run MCMC ( Nijkamp et al. , 2019 ) , e.g. , Langevin dynamics ( Neal et al. , 2011 ) , is used for sampling . From the modeling aspect , the coarse-level training can be useful for exploring the global structure of image , while the fine-level training will then gradually refine the image details . Recent works have demonstrated the advantages of this incremental learning ( Karras et al. , 2018 ; Wang et al. , 2018 ) . However , there have been no works focusing on the incremental learning of EBMs that incorporates bottom-up representation and top-down sampling in a single net . Besides , as shown in Figure 1 ( a ) , the top-down gradient information for synthesis flows from coarse-level layers towards fine-level layers . Thus , during the coarse-to-fine expanding , we can use the coarse-level synthesis to help the fine-level synthesis to stabilize the sampling . 
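The expansion mechanics are not spelled out in this excerpt, so the following is only a rough, hypothetical sketch of how a trained coarse-level energy function might be blended with a newly added finer-level block during the transition; the fade-in coefficient alpha, the module shapes, and the specific layers are assumptions loosely modeled on progressive growing, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlendedEnergy(nn.Module):
    """Energy on higher-resolution images while a new block is faded in."""
    def __init__(self, coarse_energy, new_block):
        super().__init__()
        self.coarse_energy = coarse_energy   # EBM already trained on low-resolution inputs
        self.new_block = new_block           # new layers; assumed to map high-res input to the coarse input shape

    def forward(self, x_hi, alpha):
        x_lo = F.avg_pool2d(x_hi, kernel_size=2)             # reuse the coarse model on a downsampled copy
        e_coarse = self.coarse_energy(x_lo)
        e_fine = self.coarse_energy(self.new_block(x_hi))    # route the high-res input through the new block
        return (1.0 - alpha) * e_coarse + alpha * e_fine     # smooth transition as alpha goes from 0 to 1

# toy instantiation: a 32x32 coarse energy expanded to 64x64 inputs
coarse = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.SiLU(),
                       nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
new_block = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.SiLU(), nn.AvgPool2d(2))
energy = BlendedEnergy(coarse, new_block)
print(energy(torch.randn(4, 3, 64, 64), alpha=0.3).shape)   # torch.Size([4, 1])
```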
Such a coarse-to-fine expanding and sampling scheme is useful for high-fidelity synthesis in several vision tasks . See Figure 1 ( b ) . Furthermore , we propose a one-sided energy-based unsupervised image-to-image translation method and scale it up to high resolution . The approach is immediately available with the FC-EBM by using its iterative Langevin dynamics without the need of the cycle consistency ( Zhu et al. , 2017 ) or geometry constraints ( Fu et al. , 2019 ) . Specifically , we learn an EBM of target domain with Langevin dynamics initialized by the examples from source domain . The resulting translator is the short-run MCMC . Compared with those prior works ( Zhu et al. , 2017 ; Huang et al. , 2018 ; Park et al. , 2020 ) that learn black-box encoder-decoder networks between domains , our method is much more interpretable in the sense that ours can be explained by a visualization method ( Simonyan et al. , 2014 ; Adebayo et al. , 2018 ) that uses gradients to visualize the most essential regions , i.e. , the generative saliency , when translating an image from the source domain to the target domain . See Figure 1 ( c ) . The contributions of our paper can be summarized as : • To the best of our knowledge , this is the first work that trains EBMs under the “ analysis by synthesis ” scheme via a multistage coarse-to-fine expanding and sampling strategy . Besides , we propose several essential techniques for improving EBM , e.g. , smooth activations . Particularly , our work is the first to train a pure EBM for synthesizing 512× 512 images . • We propose a novel energy-based unsupervised image-to-image translation approach , which is essentially different from all other existing GAN-based approaches . We demonstrate noticeable results in terms of both translation quality and efficiency of time and memory . • We conduct extensive experiments to validate our approach , including image generation , denoising , inpainting , out-of-distribution detection and unsupervised image translation . Strong results show that our method outperforms or is competitive with the prior art . The rest of the paper is organized as follows . Section 2 summarizes related works and how the proposed method is different from the prior art . Section 3 introduces the proposed methodology in detail . Section 4 presents extensive experiments to test our method . Section 5 concludes our paper and discuss some future research directions . 2 RELATED WORK . 2.1 ENERGY-BASED GENERATIVE MODELS . The main challenge to train EBMs via MLE lies in drawing fair samples from the model , especially when the energy function is parameterized by a highly non-linear ConvNet . The contrastive divergence ( CD ) ( Hinton , 2002 ; Tieleman , 2008 ) , with MCMC chains initialized from data distribution , is an efficient but biased way to train EBMs . Another direction is to adopt the idea of energy-based correction of a more tractable model to train EBMs . Noise contrastive estimation ( NCE ) ( Gutmann & Hyvärinen , 2010 ; Gao et al. , 2020 ) and introspective neural networks ( INNs ) ( Lazarow et al. , 2017 ; Jin et al. , 2017 ; Lee et al. , 2018b ) belong to this theme . Generative cooperative networks ( CoopNets ) ( Xie et al. , 2018b ; 2021d ; b ) train an EBM with a generator or a variational auto-encoder ( VAE ) ( Kingma & Welling , 2014 ) as amortized sampler by MCMC teaching ( Xie et al. , 2018a ) . Triangle divergence ( Han et al. , 2019 ) trains an EBM without MCMC by amortizing the MCMC via a VAE . 
However , these frameworks still struggle to scale up and model multimodal data . There have been several strategies to improve EBM training . Gao et al . ( 2018 ) adopt a multi-grid method that trains multiple EBMs at different grids simultaneously , where the EBM at a coarser grid is used to initialize image generation by the EBM at a finer grid . However , optimizing and sampling from multiple EBMs results in low efficiency in both time and memory . To stabilize the training , Nijkamp et al . ( 2019 ) ; Grathwohl et al . ( 2020 ) add Gaussian white noise to the observed data , resulting in noisy synthesized images . In contrast , our paper proposes to train a single EBM via a coarse-to-fine growing strategy , along with some improved techniques . With smooth parameter training and image sampling , our model can preserve EBM ’ s compatibility and synthesize high-fidelity images . 2.2 UNSUPERVISED IMAGE-TO-IMAGE TRANSLATION . Unsupervised image-to-image translation aims at learning mappings in both directions between two unpaired domains . Recent successes are all based on adversarial learning , e.g. , CycleGAN ( Zhu et al. , 2017 ) , UNIT ( Liu et al. , 2017 ) , MUNIT ( Huang et al. , 2018 ) , DRIT ( Lee et al. , 2018a ) and U-GATIT ( Kim et al. , 2020 ) . These methods typically train two GANs with two levels of learning objectives : ( i ) Distribution level : Two adversarial losses are used to capture the style discrepancy between the source and target domains ; ( ii ) Instance level : To tackle the difficulty of the unpaired setting , they adopt a cycle consistency loss for content preservation . This loss provides instance-level supervision to regularize the training of the two mappings by enforcing them to form a bijection between the two domains . Besides two-sided translation , there have also been efforts on one-sided unsupervised image translation , e.g. , DistanceGAN ( Benaim & Wolf , 2017 ) , GcGAN ( Fu et al. , 2019 ) and CUT ( Park et al. , 2020 ) , which apply geometric or contrastive constraints . We solve this problem from the perspective of EBMs , which is different from GAN-based methods . The proposed concise EBM solution only relies on its built-in objective , which is a form of distribution-level statistics matching , to accomplish one-sided image translation . It transfers the style and preserves the source content by MCMC without using the cycle-consistency loss . The model demonstrates better performance with less time and memory . Another distinction between our method and GAN-based methods is the natural interpretability of Langevin dynamics . It provides a gradient-based saliency map to visualize those key regions that make the two domains distinct , as illustrated in Figure 1 ( c ) . 3 METHOD . In this section , we first present the EBM learning framework , and then the proposed CF-EBM approach . After that , we generalize our model to unsupervised image-to-image translation . 3.1 MCMC-BASED MAXIMUM LIKELIHOOD LEARNING OF ENERGY-BASED MODEL . Let x ∈ RD be the observed example , e.g. , an image . An energy-based model is defined as follows : pθ ( x ) = ( 1 / Z ( θ ) ) exp ( −Eθ ( x ) ) , ( 1 ) where Eθ ( x ) : RD −→ R is the energy function defined by a bottom-up ConvNet parameterized by θ . Z ( θ ) = ∫ exp ( −Eθ ( x ) ) dx is the intractable normalizing constant or the partition function .
Given N observed examples { x1 , . . . , xN } ∼ pdata ( x ) , where pdata ( x ) denotes the unknown data distribution , the model can be trained by maximizing the log-likelihood L ( θ ) = ( 1/N ) ∑_ { i=1 } ^N log pθ ( xi ) ≈ Ex∼pdata ( x ) [ log pθ ( x ) ] . The derivative of the negative log-likelihood is given by −∇θ L ( θ ) = Ex∼pdata ( x ) [ ∇θ Eθ ( x ) ] − Ex̃∼pθ ( x ) [ ∇θ Eθ ( x̃ ) ] , ( 2 ) where the second expectation term under pθ ( x ) is intractable and can be approximated via MCMC . Given that , the EBM is updated by gradient descent . To sample x̃ ∼ pθ ( x ) via MCMC , we rely on gradient-based Langevin dynamics , which recursively computes the following step : x̃_ { t+1 } = x̃_t − ( ηt / 2 ) ∇x̃ Eθ ( x̃_t ) + √ηt εt , εt ∼ N ( 0 , I ) , ( 3 ) where ηt is the step size of the Langevin step and also the variance of the Gaussian noise εt . Theoretically , to ensure convergence , the MCMC is typically performed with infinitely many steps and an infinitesimal step size ( Welling & Teh , 2011 ) . However , this is impractical for training EBMs . In this paper , we follow Nijkamp et al . ( 2019 ) to use short-run MCMC , which always starts from a fixed noise distribution and runs a fixed number T of Langevin steps in both training and testing stages . Training with a short-run MCMC might result in a biased estimate of the EBM , but the learned short-run MCMC is still a valid generator , which enables us to synthesize realistic images and efficiently train the model , as seen in most well-established EBM works ( Nijkamp et al. , 2019 ; Grathwohl et al. , 2020 ; Pang et al. , 2020 ) . In this paper , we keep the step size constant and linearly decay the noise variance to 0 . | Much like progressive growing of GANs two years ago, this paper adopts a similar coarse-to-fine procedure for scaling EBMs to higher resolutions. In particular, the approach starts from learning EBMs on low-resolution images and then smoothly transitions to higher resolution by carefully designing an expand layer and a smooth sampling procedure. Authors were able to obtain competitive FID scores on CIFAR-10, and demonstrate the first set of 256x256 image samples from EBMs. In addition, authors demonstrate successful application of EBMs to unpaired image-to-image translation. | SP:bc69ea7519d15ff99678ebc5a228da631480c39d
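A minimal sketch of the short-run Langevin sampler in Eq. (3) above, assuming the energy function is a PyTorch module that returns one scalar per sample; the initial noise distribution, step size, number of steps, and the linear noise-decay schedule are placeholders rather than the paper's exact settings.

```python
import torch

def short_run_langevin(energy, shape, steps=50, step_size=0.01):
    """Approximately sample from p_theta(x) ∝ exp(-E_theta(x)) with Eq. (3)."""
    x = torch.randn(shape)                                  # fixed noise initialization
    for t in range(steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy(x).sum(), x)[0]   # ∇_x E_theta(x)
        noise_var = step_size * (1.0 - t / steps)           # noise variance decayed linearly to 0
        x = x - 0.5 * step_size * grad + noise_var ** 0.5 * torch.randn_like(x)
    return x.detach()
```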
Learning Energy-Based Generative Models via Coarse-to-Fine Expanding and Sampling | 1 INTRODUCTION Recently , energy-based models ( EBMs ) ( Zhu et al. , 1998 ; LeCun et al. , 2006 ) parameterized by modern neural networks have drawn much attention from the deep learning communities . Successful applications with EBMs include generations of images ( Xie et al. , 2016 ; 2018b ; Du & Mordatch , 2019 ) , videos ( Xie et al. , 2017 ; 2019 ) , 3D volumetric shapes ( Xie et al. , 2018c ; 2020 ) , unordered point clouds ( Xie et al. , 2021a ) , texts ( Deng et al. , 2020 ) , molecules ( Ingraham et al. , 2018 ; Du et al. , 2019 ) , etc. , as well as image-to-image translation ( Xie et al. , 2021b ; c ) , out-of-distribution detection ( Liu et al. , 2020 ) and inverse optimal control ( Xu et al. , 2019 ) . EBMs are characterized by ( i ) Simplicity : The maximum likelihood learning of EBMs unifies representation and generation in a single model , and ( ii ) Explicitness : EBMs provide an explicit density distribution of data by training an energy function that assigns lower values to observed data and higher values to unobserved ones . However , it is still difficult to train an EBM to synthesize diverse and high-fidelity images . The maximum likelihood estimation ( MLE ) of EBMs requires the Markov chain Monte Carlo ( MCMC ) ( Liu , 2008 ; Barbu & Zhu , 2020 ) to sample from the model and then updates the model parameters according to the difference between those samples and the observed data . Such an “ analysis by synthesis ” ( Grenander et al. , 2007 ) learning scheme is challenging because the sampling step is neither efficient nor stable . In particular , when the energy function is multimodal due to the highly varied or high resolution training data , it is not easy for the MCMC chains to traverse the modes of the learned model . Fortunately , it is common knowledge that the manifold residing in a downsampled low-dimensional image space is smoother than that in the original high-dimensional counterpart . Thus , learning an EBM from low-dimensional data is much stabler and faster than learning from high-dimensional data in terms of convergence ( Odena et al. , 2017 ; Gao et al. , 2018 ) . Inspired by the above knowledge , we propose to train EBMs via a multistage coarse-to-fine expanding and sampling strategy ( CF-EBM ) . As shown in Figure 1 ( a ) , the approach starts with learning a coarselevel EBM on low resolution images and then smoothly transits to learn the finer-level EBM by adding new layers that take into account the higher resolution information as the learning progresses . The gradient-based short-run MCMC ( Nijkamp et al. , 2019 ) , e.g. , Langevin dynamics ( Neal et al. , 2011 ) , is used for sampling . From the modeling aspect , the coarse-level training can be useful for exploring the global structure of image , while the fine-level training will then gradually refine the image details . Recent works have demonstrated the advantages of this incremental learning ( Karras et al. , 2018 ; Wang et al. , 2018 ) . However , there have been no works focusing on the incremental learning of EBMs that incorporates bottom-up representation and top-down sampling in a single net . Besides , as shown in Figure 1 ( a ) , the top-down gradient information for synthesis flows from coarse-level layers towards fine-level layers . Thus , during the coarse-to-fine expanding , we can use the coarse-level synthesis to help the fine-level synthesis to stabilize the sampling . 
Such a coarse-to-fine expanding and sampling scheme is useful for high-fidelity synthesis in several vision tasks . See Figure 1 ( b ) . Furthermore , we propose a one-sided energy-based unsupervised image-to-image translation method and scale it up to high resolution . The approach is immediately available with the FC-EBM by using its iterative Langevin dynamics without the need of the cycle consistency ( Zhu et al. , 2017 ) or geometry constraints ( Fu et al. , 2019 ) . Specifically , we learn an EBM of target domain with Langevin dynamics initialized by the examples from source domain . The resulting translator is the short-run MCMC . Compared with those prior works ( Zhu et al. , 2017 ; Huang et al. , 2018 ; Park et al. , 2020 ) that learn black-box encoder-decoder networks between domains , our method is much more interpretable in the sense that ours can be explained by a visualization method ( Simonyan et al. , 2014 ; Adebayo et al. , 2018 ) that uses gradients to visualize the most essential regions , i.e. , the generative saliency , when translating an image from the source domain to the target domain . See Figure 1 ( c ) . The contributions of our paper can be summarized as : • To the best of our knowledge , this is the first work that trains EBMs under the “ analysis by synthesis ” scheme via a multistage coarse-to-fine expanding and sampling strategy . Besides , we propose several essential techniques for improving EBM , e.g. , smooth activations . Particularly , our work is the first to train a pure EBM for synthesizing 512× 512 images . • We propose a novel energy-based unsupervised image-to-image translation approach , which is essentially different from all other existing GAN-based approaches . We demonstrate noticeable results in terms of both translation quality and efficiency of time and memory . • We conduct extensive experiments to validate our approach , including image generation , denoising , inpainting , out-of-distribution detection and unsupervised image translation . Strong results show that our method outperforms or is competitive with the prior art . The rest of the paper is organized as follows . Section 2 summarizes related works and how the proposed method is different from the prior art . Section 3 introduces the proposed methodology in detail . Section 4 presents extensive experiments to test our method . Section 5 concludes our paper and discuss some future research directions . 2 RELATED WORK . 2.1 ENERGY-BASED GENERATIVE MODELS . The main challenge to train EBMs via MLE lies in drawing fair samples from the model , especially when the energy function is parameterized by a highly non-linear ConvNet . The contrastive divergence ( CD ) ( Hinton , 2002 ; Tieleman , 2008 ) , with MCMC chains initialized from data distribution , is an efficient but biased way to train EBMs . Another direction is to adopt the idea of energy-based correction of a more tractable model to train EBMs . Noise contrastive estimation ( NCE ) ( Gutmann & Hyvärinen , 2010 ; Gao et al. , 2020 ) and introspective neural networks ( INNs ) ( Lazarow et al. , 2017 ; Jin et al. , 2017 ; Lee et al. , 2018b ) belong to this theme . Generative cooperative networks ( CoopNets ) ( Xie et al. , 2018b ; 2021d ; b ) train an EBM with a generator or a variational auto-encoder ( VAE ) ( Kingma & Welling , 2014 ) as amortized sampler by MCMC teaching ( Xie et al. , 2018a ) . Triangle divergence ( Han et al. , 2019 ) trains an EBM without MCMC by amortizing the MCMC via a VAE . 
However , these frameworks still struggle to scale up and model multimodal data . There have been several strategies to improve the EBM training . Gao et al . ( 2018 ) adopts a multi-grid method that trains multiple EBMs at different grids simultaneously , where the EBM at coarser grid is used to initialize the image generation by EBM at finer grid . However , optimizing and sampling from multiple EBMs will result in low efficiency of both time and memory . To stabilize the training , Nijkamp et al . ( 2019 ) ; Grathwohl et al . ( 2020 ) add the Gaussian white noise to the observed data , resulting in noisy synthesized images . In contrast , our paper proposes to train a single EBM via a coarse-to-fine growing strategy , along with some improved techniques . With smooth parameter training and image sampling , our model can preserve EBM ’ s compatibility and synthesize high-fidelity images . 2.2 UNSUPERVISED IMAGE-TO-IMAGE TRANSLATION . Unsupervised image-to-image translation aims at learning two directions of mappings between two unpaired domains . Recent successes are all based on adversarial learning , e.g. , CycleGAN ( Zhu et al. , 2017 ) , UNIT ( Liu et al. , 2017 ) , MUNIT ( Huang et al. , 2018 ) , DRIT ( Lee et al. , 2018a ) and U-GATIT ( Kim et al. , 2020 ) . These methods typically train two GANs with two levels of learning objectives : ( i ) Distribution level : Two adversarial losses are used to capture style discrepancy between source and target domain ; ( ii ) Instance level : To tackle the difficulty of unpaired setting , they adopt a cycle consistency loss for content preservation . This loss enables an instance-level supervision to regularize the training of two mappings by enforcing them to be a bijective function between two domains . Except for works about two-sided translation , efforts on research about one-sided unsupervised image translation have also been made , e.g. , DistanceGAN ( Benaim & Wolf , 2017 ) , GcGAN ( Fu et al. , 2019 ) and CUT ( Park et al. , 2020 ) , which apply geometric or contrastive constraints . We solve this problem from the prospective of EBM , which is different from GAN-based methods . The proposed concise EBM solution only relies on its built-in objective , which is a distribution-level statistics matching , to accomplish the one-sided image translation . It transfers the style and preserves the source content by MCMC without using the cycle-consistency loss . The model demonstrates better performances with less time and memory . Another distinction between our method and GAN-based methods is the natural interpretability of Langevin dynamics . It provides a gradient-based saliency map to visualize those key regions that make the two domains distinct , as illustrated in Figure 1 ( c ) . 3 METHOD . In this section , we first present the EBM learning framework , and then the proposed CF-EBM approach . After that , we generalize our model to the unsupervised image-to-image translation . 3.1 MCMC-BASED MAXIMUM LIKELIHOOD LEARNING OF ENERGY-BASED MODEL . Let x ∈ RD be the observed example , e.g. , an image . An energy-based model is defined as follows : pθ ( x ) = 1 Z ( θ ) exp ( −Eθ ( x ) ) , ( 1 ) where Eθ ( x ) : RD −→ R is the energy function defined by a bottom-up ConvNet parameterized by θ . Z ( θ ) = ∫ exp ( −Eθ ( x ) ) dx is the intractable normalizing constant or the partition function . 
Given N observed examples { xi } Ni=1 ∼ pdata ( x ) , where pdata ( x ) denotes the unknown data distribution , the model can be trained by maximizing the log-likelihood L ( θ ) = 1N ∑N i=1 log pθ ( xi ) ≈ Ex∼pdata ( x ) log ( pθ ( x ) ) . The derivative of the negative log-likelihood is given by −∇θL ( θ ) = Ex∼pdata ( x ) [ ∇θEθ ( x ) ] − Ex̃∼pθ ( x ) [ ∇θEθ ( x̃ ) ] , ( 2 ) where the second expectation term under pθ ( x ) is intractable and can be approximated via MCMC . Given that , the EBM is updated by gradient descent . To sample x̃ ∼ pθ ( x ) via MCMC , we rely on gradient-based Langevin dynamics that recursively computes the following step x̃t+1 = x̃t − ηt 2 ∇x̃Eθ ( x̃t ) + √ ηt t , t ∼ N ( 0 , I ) , ( 3 ) where ηt is the step size of Langevin step and also the variance of Gaussian noise t. Theoretically , to ensure convergence , the MCMC is typically performed with infinite steps and an infinitesimal stepsize ( Welling & Teh , 2011 ) . However , it is impractical for training EBMs . In this paper , we follow Nijkamp et al . ( 2019 ) to use short-run MCMC , which always starts from a fixed noise distribution and runs a fixed number T of Langevin steps in both training and testing stages . The training with a short-run MCMC might result in a biased estimation of EBM but the learned short-run MCMC is still a valid generator , which enables us to synthesize realistic images and efficiently train the model , as seen in most well-established EBM works ( Nijkamp et al. , 2019 ; Grathwohl et al. , 2020 ; Pang et al. , 2020 ) . In this paper , we keep the step size constant and linearly decay the noise variance till 0 . | This paper presents a number of methods to scale up training and sampling of EBMs on image data. The main contribution consists of an approach for progressively growing the model by increasing the image resolution as training progresses. This approach echos similar approaches used for scaling up GAN training. The approach involves slowly annealing in new blocks to the model during training which processes the image at increasing resolutions. This allows training of image EBMs at notably larger resolutions than published in prior work. | SP:bc69ea7519d15ff99678ebc5a228da631480c39d |
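A minimal sketch of one maximum-likelihood update based on Eq. (2): the intractable model expectation is replaced by short-run MCMC samples, and the surrogate loss below has the same parameter gradient as the negative log-likelihood; the optimizer and the sampler interface are placeholders, not the paper's exact implementation.

```python
import torch

def ebm_training_step(energy, optimizer, x_data, sampler):
    """One update of the energy network; `sampler` draws x̃ ∼ p_theta (e.g. short-run Langevin)."""
    x_model = sampler(energy, x_data.shape).detach()   # negative samples, no gradient through MCMC
    optimizer.zero_grad()
    # Gradient of this surrogate equals E_data[∇θ E_θ(x)] − E_model[∇θ E_θ(x̃)], i.e. −∇θ L(θ) in Eq. (2).
    loss = energy(x_data).mean() - energy(x_model).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```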
Near-Optimal Glimpse Sequences for Training Hard Attention Neural Networks | 1 INTRODUCTION . Attention can be defined as the “ allocation of limited cognitive processing resources ” ( Anderson , 2005 ) . In humans the density of photoreceptors varies across the retina . It is much greater in the centre ( Bear et al. , 2007 ) and covers an approximately 210 degree field of view ( Traquair , 1949 ) . This means that the visual system is a limited resource with respect to observing the environment and that it must be allocated , or controlled , by some attention mechanism . We refer to this kind of controlled allocation of limited sensor resources as “ hard ” attention . This is in contrast with “ soft ” attention , the controlled application of limited computational resources to full sensory input . Hard attention can solve certain tasks using orders of magnitude less sensor bandwidth and computation than the alternatives ( Katharopoulos & Fleuret , 2019 ; Rensink , 2000 ) . It therefore may enable the use of modern approaches to computer vision in low-power settings such as mobile devices . This paper focuses on the application of hard attention in image classification . Our model of attention ( shown in Fig . 1 ) is as follows : a recurrent neural network ( RNN ) is given T steps to classify some unchanging input image . Before each step , the RNN outputs the coordinates of a pixel in the image . A patch of the image centered around this pixel is then fed into the RNN . We call this image patch a glimpse , and the coordinates a glimpse location . As such , the RNN controls its input by selecting each glimpse location , and this decision can be based on previous glimpses . After T steps , the RNN ’ s hidden state is mapped to a classification output . As with most artificial hard attention mechanisms ( Mnih et al. , 2014 ; Ba et al. , 2014 ) , this output is not differentiable with respect to the sequence of glimpse locations selected . This makes training with standard gradient backpropagation impossible , and so high variance gradient estimators such as REINFORCE ( Williams , 1992 ) are commonly used instead ( Mnih et al. , 2014 ; Ba et al. , 2014 ) . The resulting noisy gradient estimates make training difficult , especially for large T . In order to improve hard attention training , we take inspiration from neuroscience literature which suggests that visual attention is directed so as to maximally reduce entropy in an agent ’ s world model ( Bruce & Tsotsos , 2009 ; Itti & Baldi , 2009 ; Schwartenbeck et al. , 2013 ; Feldman & Friston , 2010 ) . There is a corresponding mathematical formulation of such an objective , namely Bayesian optimal experimental design ( BOED ) ( Chaloner & Verdinelli , 1995 ) . BOED tackles the problem of designing an experiment to maximally reduce uncertainty in some unknown variable . When classifying an image with hard visual attention , the ‘ experiment ’ is the process of taking a glimpse ; the ‘ design ’ is the glimpse location ; and the unknown variable is the class label . In general , BOED is applicable only when a probabilistic model of the experiment exists . This could be , for example , a prior distribution over the class label and a generative model for the observed image patch conditioned on the class label and glimpse location . We leverage generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ) to provide such a model . 
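As a concrete reading of the hard-attention model sketched in the introduction (an RNN that picks a pixel location, receives the surrounding glimpse, and classifies after T steps), here is a minimal PyTorch-style sketch of f_fovea and the recurrent loop for a single image. The patch size, hidden size, Gaussian location sampling, and coordinate embedding are assumptions for illustration; the actual architecture follows Mnih et al. (2014).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def f_fovea(image, loc, patch=8):
    """Extract a patch x patch glimpse centred on pixel coordinates loc = (row, col)."""
    _, h, w = image.shape
    r = int(loc[0].clamp(patch // 2, h - patch // 2)) - patch // 2
    c = int(loc[1].clamp(patch // 2, w - patch // 2)) - patch // 2
    return image[:, r:r + patch, c:c + patch]

class HardAttentionRNN(nn.Module):
    """Recurrent hard-attention classifier for a single (un-batched) image."""
    def __init__(self, patch=8, channels=3, hidden=256, n_classes=10):
        super().__init__()
        self.patch = patch
        self.embed = nn.Linear(channels * patch * patch + 2, hidden)
        self.rnn = nn.GRUCell(hidden, hidden)
        self.loc_head = nn.Linear(hidden, 2)       # mean of the glimpse-location distribution
        self.cls_head = nn.Linear(hidden, n_classes)

    def forward(self, image, T=6):
        h = image.new_zeros(1, self.rnn.hidden_size)
        size = torch.tensor(image.shape[1:], dtype=torch.float32)
        for _ in range(T):
            loc = torch.normal(torch.tanh(self.loc_head(h)).squeeze(0), 0.1)  # l_t in [-1, 1]^2
            pix = (loc.clamp(-1, 1) + 1) / 2 * (size - 1)
            y = f_fovea(image, pix, self.patch).reshape(1, -1)                # glimpse y_t
            h = self.rnn(self.embed(torch.cat([y, loc.view(1, 2)], dim=-1)), h)
        return F.log_softmax(self.cls_head(h), dim=-1)   # q_phi(theta | y_1:T, l_1:T)
```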
We use methodology from BOED to introduce the following training procedure for hard attention networks , which we call partial supervision by near-optimal glimpse sequences ( PS-NOGS ) . 1 . We assume that we are given an image classification task and a corresponding labelled dataset . Then , for a subset of the training images , we determine an approximately optimal ( in the BOED sense ) glimpse location for a hard attention network to attend to at each time step . We refer to the resulting sequences of glimpse locations as near-optimal glimpse sequences . Section 4 describes our novel method to generate them . 2 . We use these near-optimal glimpse sequences as an additional supervision signal for training a hard attention network . Section 5 introduces our novel training objective for this . We empirically investigate the performance of PS-NOGS and find that it leads to faster training than our baselines , and qualitatively different behaviour with competitive accuracy . We validate the use of BOED to generate glimpse sequences through comparisons with supervision both by hand-crafted glimpse sequences , and by glimpse sequences sampled from a trained hard attention network . 2 HARD ATTENTION . Given an image , I , we consider the task of inferring its label , θ . We use an architecture based on that of Mnih et al . ( 2014 ) , shown in Fig . 1 . It runs for a fixed number of steps , T . At each step t , the RNN samples a glimpse location , lt , from a distribution conditioned on previous glimpses via the RNN ’ s hidden state . A glimpse , in the form of a contiguous square of pixels , is extracted from the image at this location . We denote this yt = ffovea ( I , lt ) . An embedding of yt and lt is then input to the RNN . After T glimpses , the network outputs a classification distribution qφ ( θ|y1 : T , l1 : T ) , where φ are the learnable network parameters . Mnih et al . ( 2014 ) use glimpses consisting of three image patches at different resolutions , but the architectures are otherwise identical . As it directly processes only a fraction of an image , this architecture is suited to low-power scenarios such as use on mobile devices . During optimisation , gradients can not be computed by simple backpropagation since ffovea is nondifferentiable . An alternative , taken by Mnih et al . ( 2014 ) and others in the literature ( Ba et al. , 2014 ; Sermanet et al. , 2014 ) , is to obtain high-variance gradient estimates using REINFORCE ( Williams , 1992 ) . Although these are unbiased , their high variance has made scaling beyond simple problems such as digit classification ( Netzer et al. , 2011 ) challenging . Section 7 describes alternatives ( Ba et al. , 2015 ; Lawson et al. , 2018 ) to training with REINFORCE , but similar problems with scalability exist . This has led many studies to focus on easing the learning task by altering the architecture : e.g. , to process a downsampled image before selecting glimpse locations ( Ba et al. , 2014 ; Sermanet et al. , 2014 ; Katharopoulos & Fleuret , 2019 ) . We summarise these innovations in Section 7 but they tend to be less suitable for low-power computation . We therefore believe that improved training of the architecture in Fig . 1 is an important research problem , and it is the focus of this paper . 3 BAYESIAN OPTIMAL EXPERIMENTAL DESIGN . 
Designing an experiment to be maximally informative is a fundamental problem that applies as much to tuning the parameters of a political survey ( Warwick & Lininger , 1975 ) as to deciding where to direct attention to answer a query . BOED ( Chaloner & Verdinelli , 1995 ) provides a unifying framework for this by allowing a formal comparison of possible experiments under problem-specific prior knowledge . Consider selecting the design , l , of an experiment to infer some unknown parameter , θ . For example , θ may be the median lethal dose of a drug , and l the doses of this drug given to various groups of rats ( Chaloner & Verdinelli , 1995 ) . Alternatively , as we consider in this paper , θ is the class label of an image and l determines which part of the image we observe . The experiment results in a measurement of y ∼ p ( y|l , θ ) . Following the previous examples , y could be the number of rats which die in each group or the observed pixel values . Given a prior distribution over θ and knowledge of p ( y|l , θ ) , we can use the measurement to infer a posterior distribution over θ using Bayes ’ rule : p ( θ|y , l ) = p ( y|l , θ ) p ( θ ) ∫ p ( y|l , θ ) p ( θ ) dθ . The aim of our experiment is to infer θ , and so a well designed experiment will reduce the uncertainty about θ by as much as possible . The uncertainty after the experiment can be quantified by the Shannon entropy in the posterior , H [ p ( θ|y , l ) ] = Ep ( θ|y , l ) [ − log p ( θ|y , l ) ] . ( 1 ) To maximally reduce the uncertainty , we wish to select l to minimise this posterior entropy . However , the design of the experiment must be chosen before y is measured and so we can not evaluate the posterior entropy exactly . Instead , we minimise an expectation of it over p ( y|l ) = Ep ( θ ) [ p ( y|l , θ ) ] , the marginal distribution of y . This is the expected posterior entropy , or EPE . EPE ( l ) = Ep ( y|l ) [ H [ p ( θ|y , l ) ] ] . ( 2 ) Above , we considered the case of selecting a one-off design for an experiment , such as taking a single glimpse . For the case where a sequence of glimpses can be taken , we need sequential experimental design . In this scenario , the choice of design lt can be informed by the designs and outcomes of previous experiments , l1 : t−1 and y1 : t−1 . The marginal distribution over outcomes is therefore p ( yt|l1 : t , y1 : t−1 ) rather than p ( yt|lt ) . Similarly , the posterior after observing yt is p ( θ|l1 : t , y1 : t ) . Therefore , in the sequential case which we consider throughout the rest of the paper , we greedily minimise the following form of the EPE on each iteration : EPEy1 : t−1 , l1 : t−1 ( lt ) = Ep ( yt|y1 : t−1 , l1 : t ) [ H [ p ( θ|y1 : t , l1 : t ) ] ] . ( 3 ) To summarise , sequential BOED involves , at each time t , selecting lt = arg minlt EPEy1 : t−1 , l1 : t−1 ( lt ) and then performing the experiment with design lt to observe yt . 4 GENERATING NEAR-OPTIMAL GLIMPSE SEQUENCES . Role of BOED pipeline To reiterate the outline of our method , we first annotate a portion of the training data with glimpse sequences , and then in the second stage use these to speed up the training of a hard attention mechanism . This section details our BOED pipeline for the first stage . EPE estimator BOED requires a probabilistic model of the measurements and parameters we wish to infer . That is , we need to define p ( θ , y1 : t|l1 : t ) for any l1 : t. 
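For a small discrete model in which p(θ) and p(y | l, θ) can be enumerated, the expected posterior entropy of Eq. (2) can be computed exactly, which makes the BOED criterion easy to sanity-check. The toy distributions below are invented purely for illustration and are not part of the paper's pipeline.

```python
import numpy as np

def expected_posterior_entropy(prior, likelihood):
    """EPE(l) for one design l.

    prior:      p(theta), shape (K,)
    likelihood: p(y | l, theta), shape (M, K)  (rows: outcomes y, columns: theta)
    """
    joint = likelihood * prior[None, :]                    # p(y, theta | l)
    marginal = joint.sum(axis=1, keepdims=True)            # p(y | l)
    posterior = joint / np.clip(marginal, 1e-12, None)     # p(theta | y, l)
    entropy = -(posterior * np.log(np.clip(posterior, 1e-12, None))).sum(axis=1)
    return float((marginal.squeeze(1) * entropy).sum())    # E_{p(y|l)} H[p(theta | y, l)]

# Toy example with 3 classes and two candidate designs; design B is more informative,
# so it attains the lower EPE and would be chosen by argmin over candidate designs.
prior = np.array([0.5, 0.3, 0.2])
lik_A = np.array([[0.4, 0.4, 0.4], [0.6, 0.6, 0.6]])   # outcome barely depends on theta
lik_B = np.array([[0.9, 0.1, 0.1], [0.1, 0.9, 0.9]])   # outcome separates class 0 from the rest
print(expected_posterior_entropy(prior, lik_A), expected_posterior_entropy(prior, lik_B))
```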
To do so in the visual attention setting , we first define p ( θ , I ) to be the intractable joint distribution over labels and images from which our training and test data originate . To be consistent with our definition in Section 2 of y as a deterministic function of I and l , we then define p ( yi|I , li ) to be a Dirac-delta distribution on ffovea ( I , li ) . The joint distribution is then p ( θ , y1 : t|l1 : t ) = ∫ p ( θ , I ) t∏ i=1 p ( yi|I , li ) dI . ( 4 ) Given this joint distribution , EPEy1 : t−1 , l1 : t−1 ( lt ) is well defined but intractable in general . We therefore consider how to approximate it . To simplify our method for doing so , we first rearrange the expression given in Eq . ( 3 ) so that the expectation is over I rather than yt . Taking advantage of the fact that yi is a deterministic function of I and li allows it to be rewritten as follows ( proof in the appendix ) . Defining ffovea ( I , l1 : t ) = { ffovea ( I , l1 ) , . . . , ffovea ( I , lt ) } , EPEy1 : t−1 , l1 : t−1 ( lt ) = Ep ( I|y1 : t−1 , l1 : t−1 ) [ H [ p ( θ|ffovea ( I , l1 : t ) , l1 : t ) ] ] . ( 5 ) Given this form of the expected posterior entropy , we can approximate it if we can leverage the dataset to obtain : • a learned attentional variational posterior , gAVP ( θ|y1 : t , l1 : t ) ≈ p ( θ|y1 : t , l1 : t ) , • and stochastic image completion distribution rimg ( I|y1 : t−1 , l1 : t−1 ) ≈ p ( I|y1 : t−1 , l1 : t−1 ) . We expand on the form of each of these approximations later in this section . First , combining them with Eq . ( 5 ) and using a Monte Carlo estimate of the expectation yields our estimator for the EPE : EPEy1 : t−1 , l1 : t−1 ( lt ) ≈ 1 N N∑ n=1 H [ gAVP ( θ|ffovea ( I ( n ) , l1 : t ) , l1 : t ) ] ( 6 ) with I ( 1 ) , . . . , I ( N ) ∼ rimg ( I|y1 : t−1 , l1 : t−1 ) . Overview of BOED pipeline We select lt with a grid search . That is , denoting the set of allowed values of lt as L , we compute our approximation of EPEy1 : t−1 , l1 : t−1 ( lt ) for all lt ∈ L. We then select the value of lt for which this is least . To do so , our full BOED pipeline is as follows . 1 . Sample I ( 1 ) . . . , I ( N ) ∼ rimg ( I|y1 : t−1 , l1 : t−1 ) . 2 . For each lt ∈ L , approximate the expected posterior entropy with Eq . ( 6 ) . 3 . Select the value of lt for which this approximation is least . Repeating these steps for t = 1 , . . . , T yields a near-optimal glimpse sequence l1 : T for image I . Figure 2 shows an example of this process . We must do this for all images in some subset of a dataset to be able to partially supervise hard attention training as described in Section 5 . We now describe the form of gAVP ( the attentional variational posterior ) and rimg ( stochastic image completion ) . Attentional variational posterior In this section we introduce our novel approach for efficiently approximating the intractable posterior p ( θ|y1 : t , l1 : t ) . We train a convolutional neural network ( CNN ) to map from a sequence of glimpses , y1 : t , and their locations , l1 : t , to gAVP ( θ|y1 : t , l1 : t ) , an approximation of this posterior . We call this the attentional variational posterior CNN ( AVP-CNN ) . To allow a single CNN to cope with varying y1 : t , l1 : t , and even varying t , we embed its input as shown in Fig . 3 . Essentially , l1 : t is used to create an image-sized mask which is 1 for observed pixels and 0 for unobserved pixels . Elementwise multiplication of this mask with the input image sets unobserved pixels to zero . 
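Steps 1-3 of the BOED pipeline map onto a short routine: draw image completions, score every candidate glimpse location with the Monte Carlo estimate of Eq. (6), and keep the minimiser. The callables r_img (completion sampler), g_avp (posterior network), and f_fovea, together with their signatures, are placeholders assumed for this sketch rather than the paper's code.

```python
import torch

def categorical_entropy(probs, eps=1e-12):
    return -(probs * (probs + eps).log()).sum(dim=-1)

def select_next_glimpse(candidates, past_y, past_l, r_img, g_avp, f_fovea, n_samples=8):
    """Greedy BOED step: argmin over candidate locations of the Monte Carlo EPE (Eq. 6)."""
    # Step 1: sample image completions I^(1..N) ~ r_img(I | y_1:t-1, l_1:t-1).
    completions = [r_img(past_y, past_l) for _ in range(n_samples)]
    best_l, best_epe = None, float("inf")
    for l in candidates:                               # Step 2: grid search over allowed locations
        epe = 0.0
        for image in completions:
            y_seq = [f_fovea(image, li) for li in past_l] + [f_fovea(image, l)]
            probs = g_avp(y_seq, past_l + [l])         # g_AVP(theta | f_fovea(I, l_1:t), l_1:t)
            epe += categorical_entropy(probs).item()
        if epe / n_samples < best_epe:                 # Step 3: keep the minimiser
            best_l, best_epe = l, epe / n_samples
    return best_l

# Repeating this for t = 1..T, appending the chosen l_t and the glimpse actually observed
# from the training image, yields a near-optimal glimpse sequence l_1:T for that image.
```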
The mask is then concatenated as an additional channel . This embedding naturally maintains spatial information while enforcing an invariance to permutations of the glimpse sequence . We use a Densenet-121 ( Huang et al. , 2017 ) CNN architecture ( pretrained on ImageNet ( Deng et al. , 2009 ) ) to map from this embedding to a vector of probabilities representing gAVP . We train the network to minimise the KL divergence between its output and p ( θ|y1 : t , l1 : t ) . That is , DKL ( p ( θ|y1 : t , l1 : t ) ||gAVP ( θ|y1 : t , l1 : t ) ) . To ensure that gAVP is close for all t , l1 : t and y1 : t , the loss used is an expectation of this KL divergence over p ( y1 : t|l1 : t ) u ( t , l1 : t ) . We factorise u ( t , l1 : t ) as u ( t ) ∏t i=1 u ( li ) where , so that all times and glimpse locations are weighted equally in the loss , u ( t ) is a uniform distribution over 1 , . . . , T and u ( li ) is a uniform distribution over all image locations . Denoting the network parameters λ , the gradient of this loss is ∂ ∂λ Lλ = Ep ( θ , y1 : t|l1 : t ) u ( t , l1 : t ) [ − ∂ ∂λ log gλAVP ( θ|y1 : t , l1 : t ) ] . ( 7 ) This gradient is the same as that of a cross-entropy loss on data sampled from p ( θ , y1 : t|l1 : t ) u ( t , l1 : t ) , and can be approximated by a Monte Carlo estimate . Our approximation of the EPE in Eq . ( 6 ) involves the entropy of gAVP . Since gAVP is a categorical distribution , this is simply computed analytically . This amortised approximation of the posterior entropy is inspired by Foster et al . ( 2019 ) , but has two important differences to their estimator : • Foster et al . learn a mapping from yt to g ( θ|y1 : t , l1 : t ) , sharing information between “ nearby ” samples of yt to reduce the computational cost of the experimental design . Our AVPCNN takes this amortization further by learning a single mapping from t , l1 : t and y1 : t to gAVP ( θ|y1 : t , l1 : t ) , which yields significant further efficiency gains in our setting . • Whereas we approximate H [ p ] with H [ gAVP ] = EgAVP [ − log gAVP ] , Foster et al . use Ep [ − log g ] . This provides an upper bound on H [ p ] but is not applicable in our case as we can not sample from p ( θ|y1 : t , l1 : t ) . Both approximations are exact when gAVP = p. Stochastic image completion We considered numerous ways to form rimg ( I|y1 : t−1 , l1 : t−1 ) including inpainting ( Pathak et al. , 2016 ; Isola et al. , 2017 ) and Markov chain Monte Carlo in a generative model . Future research in generative modelling may provide alternatives to this component of our method but , for now , we choose to represent rimg using a technique we developed based on image retrieval ( Jégou et al. , 2010 ) . Of the methods we considered , this gave the best trade-off between speed and sample quality . It involves creating an empirical image distribution with 1.5 million images for each experiment using GANs with publicly available pre-trained weights ( StyleGAN ( Karras et al. , 2018 ) for CelebA-HQ and FineGAN ( Singh et al. , 2019 ) for Caltech-UCSD Birds ) . We note that the use of pre-trained models makes test leakage possible but verify in Appendix C.4 that this is unlikely to impact our results . During sampling , the database is searched for images that ‘ match ’ the previous glimpses ( y1 : t−1 and l1 : t−1 ) . How well these glimpses match a database image , I ′ , is measured by the squared distance in pixel space at glimpse locations : ∑t−1 i=1 ‖yi − ffovea ( I ′ , li ) ‖ 2 2 . 
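The AVP-CNN input embedding can be written in a few lines: build a binary mask from the glimpse locations, zero out the unobserved pixels, and append the mask as an extra channel. The patch size and the (row, col) centre-coordinate convention are assumptions for illustration.

```python
import torch

def avp_embedding(image, locations, patch=8):
    """Embed (y_1:t, l_1:t) for the AVP-CNN.

    image:     (C, H, W) tensor
    locations: list of (row, col) centre pixels of the glimpses taken so far
    returns:   (C + 1, H, W) tensor: masked image with the mask appended as a channel
    """
    c, h, w = image.shape
    mask = torch.zeros(1, h, w)
    half = patch // 2
    for r, col in locations:
        r0, c0 = max(int(r) - half, 0), max(int(col) - half, 0)
        mask[:, r0:r0 + patch, c0:c0 + patch] = 1.0   # observed pixels
    return torch.cat([image * mask, mask], dim=0)

# The result can be fed to an image classifier with one extra input channel (the paper uses
# a Densenet-121); by construction the embedding is invariant to the order of the glimpses.
```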
This distance defines a probability distribution over the images in the database . To reduce computation , we first compare approximations of the observed parts of each image using principal component analysis ( Jolliffe , 2011 ) , and compute exact distances only when these are close . The overall procedure to sample from rimg corresponds to importance sampling ( Arulampalam et al. , 2002 ) in a model where p ( yt|I , lt ) is relaxed from a Dirac-delta distribution to a Gaussian . See the appendix for details . | The paper trains hard attention for image classification. The network is partially supervised by attention locations proposed to maximally reduce the entropy of the image label distribution. To propose these locations, the method needs an already trained image classifier conditioned on glimpses and their locations. Additionally, the method needs a generator of images, conditioned on the glimpses and their locations. This generator is approximated by searching a set of 1.5 million pre-generated images for close matches. | SP:223bbaf9169ba486cbfbc0d8c35d662ea211c358 |
Near-Optimal Glimpse Sequences for Training Hard Attention Neural Networks | 1 INTRODUCTION . Attention can be defined as the “ allocation of limited cognitive processing resources ” ( Anderson , 2005 ) . In humans the density of photoreceptors varies across the retina . It is much greater in the centre ( Bear et al. , 2007 ) and covers an approximately 210 degree field of view ( Traquair , 1949 ) . This means that the visual system is a limited resource with respect to observing the environment and that it must be allocated , or controlled , by some attention mechanism . We refer to this kind of controlled allocation of limited sensor resources as “ hard ” attention . This is in contrast with “ soft ” attention , the controlled application of limited computational resources to full sensory input . Hard attention can solve certain tasks using orders of magnitude less sensor bandwidth and computation than the alternatives ( Katharopoulos & Fleuret , 2019 ; Rensink , 2000 ) . It therefore may enable the use of modern approaches to computer vision in low-power settings such as mobile devices . This paper focuses on the application of hard attention in image classification . Our model of attention ( shown in Fig . 1 ) is as follows : a recurrent neural network ( RNN ) is given T steps to classify some unchanging input image . Before each step , the RNN outputs the coordinates of a pixel in the image . A patch of the image centered around this pixel is then fed into the RNN . We call this image patch a glimpse , and the coordinates a glimpse location . As such , the RNN controls its input by selecting each glimpse location , and this decision can be based on previous glimpses . After T steps , the RNN ’ s hidden state is mapped to a classification output . As with most artificial hard attention mechanisms ( Mnih et al. , 2014 ; Ba et al. , 2014 ) , this output is not differentiable with respect to the sequence of glimpse locations selected . This makes training with standard gradient backpropagation impossible , and so high variance gradient estimators such as REINFORCE ( Williams , 1992 ) are commonly used instead ( Mnih et al. , 2014 ; Ba et al. , 2014 ) . The resulting noisy gradient estimates make training difficult , especially for large T . In order to improve hard attention training , we take inspiration from neuroscience literature which suggests that visual attention is directed so as to maximally reduce entropy in an agent ’ s world model ( Bruce & Tsotsos , 2009 ; Itti & Baldi , 2009 ; Schwartenbeck et al. , 2013 ; Feldman & Friston , 2010 ) . There is a corresponding mathematical formulation of such an objective , namely Bayesian optimal experimental design ( BOED ) ( Chaloner & Verdinelli , 1995 ) . BOED tackles the problem of designing an experiment to maximally reduce uncertainty in some unknown variable . When classifying an image with hard visual attention , the ‘ experiment ’ is the process of taking a glimpse ; the ‘ design ’ is the glimpse location ; and the unknown variable is the class label . In general , BOED is applicable only when a probabilistic model of the experiment exists . This could be , for example , a prior distribution over the class label and a generative model for the observed image patch conditioned on the class label and glimpse location . We leverage generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ) to provide such a model . 
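For context on why training is hard, the score-function (REINFORCE) surrogate that is typically used when the output is not differentiable with respect to the sampled glimpse locations looks as follows. This is a generic sketch with a simple baseline, not the PS-NOGS procedure proposed in the paper.

```python
import torch

def reinforce_loss(loc_log_probs, reward, baseline=0.0):
    """Score-function surrogate for the glimpse-location policy.

    loc_log_probs: (T,) log-probabilities of the sampled locations l_1:T
    reward:        scalar, e.g. 1.0 if the final classification was correct, else 0.0
    baseline:      scalar control variate used to reduce the estimator's variance
    """
    advantage = reward - baseline            # treated as a constant w.r.t. the parameters
    return -(advantage * loc_log_probs.sum())

# Typical use: total = classification_nll + reinforce_loss(loc_log_probs, reward, baseline),
# followed by one backward pass; the high variance of this estimator is what makes
# training difficult for large T.
```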
We use methodology from BOED to introduce the following training procedure for hard attention networks , which we call partial supervision by near-optimal glimpse sequences ( PS-NOGS ) . 1 . We assume that we are given an image classification task and a corresponding labelled dataset . Then , for a subset of the training images , we determine an approximately optimal ( in the BOED sense ) glimpse location for a hard attention network to attend to at each time step . We refer to the resulting sequences of glimpse locations as near-optimal glimpse sequences . Section 4 describes our novel method to generate them . 2 . We use these near-optimal glimpse sequences as an additional supervision signal for training a hard attention network . Section 5 introduces our novel training objective for this . We empirically investigate the performance of PS-NOGS and find that it leads to faster training than our baselines , and qualitatively different behaviour with competitive accuracy . We validate the use of BOED to generate glimpse sequences through comparisons with supervision both by hand-crafted glimpse sequences , and by glimpse sequences sampled from a trained hard attention network . 2 HARD ATTENTION . Given an image , I , we consider the task of inferring its label , θ . We use an architecture based on that of Mnih et al . ( 2014 ) , shown in Fig . 1 . It runs for a fixed number of steps , T . At each step t , the RNN samples a glimpse location , lt , from a distribution conditioned on previous glimpses via the RNN ’ s hidden state . A glimpse , in the form of a contiguous square of pixels , is extracted from the image at this location . We denote this yt = ffovea ( I , lt ) . An embedding of yt and lt is then input to the RNN . After T glimpses , the network outputs a classification distribution qφ ( θ|y1 : T , l1 : T ) , where φ are the learnable network parameters . Mnih et al . ( 2014 ) use glimpses consisting of three image patches at different resolutions , but the architectures are otherwise identical . As it directly processes only a fraction of an image , this architecture is suited to low-power scenarios such as use on mobile devices . During optimisation , gradients can not be computed by simple backpropagation since ffovea is nondifferentiable . An alternative , taken by Mnih et al . ( 2014 ) and others in the literature ( Ba et al. , 2014 ; Sermanet et al. , 2014 ) , is to obtain high-variance gradient estimates using REINFORCE ( Williams , 1992 ) . Although these are unbiased , their high variance has made scaling beyond simple problems such as digit classification ( Netzer et al. , 2011 ) challenging . Section 7 describes alternatives ( Ba et al. , 2015 ; Lawson et al. , 2018 ) to training with REINFORCE , but similar problems with scalability exist . This has led many studies to focus on easing the learning task by altering the architecture : e.g. , to process a downsampled image before selecting glimpse locations ( Ba et al. , 2014 ; Sermanet et al. , 2014 ; Katharopoulos & Fleuret , 2019 ) . We summarise these innovations in Section 7 but they tend to be less suitable for low-power computation . We therefore believe that improved training of the architecture in Fig . 1 is an important research problem , and it is the focus of this paper . 3 BAYESIAN OPTIMAL EXPERIMENTAL DESIGN . 
Designing an experiment to be maximally informative is a fundamental problem that applies as much to tuning the parameters of a political survey ( Warwick & Lininger , 1975 ) as to deciding where to direct attention to answer a query . BOED ( Chaloner & Verdinelli , 1995 ) provides a unifying framework for this by allowing a formal comparison of possible experiments under problem-specific prior knowledge . Consider selecting the design , l , of an experiment to infer some unknown parameter , θ . For example , θ may be the median lethal dose of a drug , and l the doses of this drug given to various groups of rats ( Chaloner & Verdinelli , 1995 ) . Alternatively , as we consider in this paper , θ is the class label of an image and l determines which part of the image we observe . The experiment results in a measurement of y ∼ p ( y|l , θ ) . Following the previous examples , y could be the number of rats which die in each group or the observed pixel values . Given a prior distribution over θ and knowledge of p ( y|l , θ ) , we can use the measurement to infer a posterior distribution over θ using Bayes ’ rule : p ( θ|y , l ) = p ( y|l , θ ) p ( θ ) ∫ p ( y|l , θ ) p ( θ ) dθ . The aim of our experiment is to infer θ , and so a well designed experiment will reduce the uncertainty about θ by as much as possible . The uncertainty after the experiment can be quantified by the Shannon entropy in the posterior , H [ p ( θ|y , l ) ] = Ep ( θ|y , l ) [ − log p ( θ|y , l ) ] . ( 1 ) To maximally reduce the uncertainty , we wish to select l to minimise this posterior entropy . However , the design of the experiment must be chosen before y is measured and so we can not evaluate the posterior entropy exactly . Instead , we minimise an expectation of it over p ( y|l ) = Ep ( θ ) [ p ( y|l , θ ) ] , the marginal distribution of y . This is the expected posterior entropy , or EPE . EPE ( l ) = Ep ( y|l ) [ H [ p ( θ|y , l ) ] ] . ( 2 ) Above , we considered the case of selecting a one-off design for an experiment , such as taking a single glimpse . For the case where a sequence of glimpses can be taken , we need sequential experimental design . In this scenario , the choice of design lt can be informed by the designs and outcomes of previous experiments , l1 : t−1 and y1 : t−1 . The marginal distribution over outcomes is therefore p ( yt|l1 : t , y1 : t−1 ) rather than p ( yt|lt ) . Similarly , the posterior after observing yt is p ( θ|l1 : t , y1 : t ) . Therefore , in the sequential case which we consider throughout the rest of the paper , we greedily minimise the following form of the EPE on each iteration : EPEy1 : t−1 , l1 : t−1 ( lt ) = Ep ( yt|y1 : t−1 , l1 : t ) [ H [ p ( θ|y1 : t , l1 : t ) ] ] . ( 3 ) To summarise , sequential BOED involves , at each time t , selecting lt = arg minlt EPEy1 : t−1 , l1 : t−1 ( lt ) and then performing the experiment with design lt to observe yt . 4 GENERATING NEAR-OPTIMAL GLIMPSE SEQUENCES . Role of BOED pipeline To reiterate the outline of our method , we first annotate a portion of the training data with glimpse sequences , and then in the second stage use these to speed up the training of a hard attention mechanism . This section details our BOED pipeline for the first stage . EPE estimator BOED requires a probabilistic model of the measurements and parameters we wish to infer . That is , we need to define p ( θ , y1 : t|l1 : t ) for any l1 : t. 
To do so in the visual attention setting , we first define p ( θ , I ) to be the intractable joint distribution over labels and images from which our training and test data originate . To be consistent with our definition in Section 2 of y as a deterministic function of I and l , we then define p ( yi|I , li ) to be a Dirac-delta distribution on ffovea ( I , li ) . The joint distribution is then p ( θ , y1 : t|l1 : t ) = ∫ p ( θ , I ) t∏ i=1 p ( yi|I , li ) dI . ( 4 ) Given this joint distribution , EPEy1 : t−1 , l1 : t−1 ( lt ) is well defined but intractable in general . We therefore consider how to approximate it . To simplify our method for doing so , we first rearrange the expression given in Eq . ( 3 ) so that the expectation is over I rather than yt . Taking advantage of the fact that yi is a deterministic function of I and li allows it to be rewritten as follows ( proof in the appendix ) . Defining ffovea ( I , l1 : t ) = { ffovea ( I , l1 ) , . . . , ffovea ( I , lt ) } , EPEy1 : t−1 , l1 : t−1 ( lt ) = Ep ( I|y1 : t−1 , l1 : t−1 ) [ H [ p ( θ|ffovea ( I , l1 : t ) , l1 : t ) ] ] . ( 5 ) Given this form of the expected posterior entropy , we can approximate it if we can leverage the dataset to obtain : • a learned attentional variational posterior , gAVP ( θ|y1 : t , l1 : t ) ≈ p ( θ|y1 : t , l1 : t ) , • and stochastic image completion distribution rimg ( I|y1 : t−1 , l1 : t−1 ) ≈ p ( I|y1 : t−1 , l1 : t−1 ) . We expand on the form of each of these approximations later in this section . First , combining them with Eq . ( 5 ) and using a Monte Carlo estimate of the expectation yields our estimator for the EPE : EPEy1 : t−1 , l1 : t−1 ( lt ) ≈ 1 N N∑ n=1 H [ gAVP ( θ|ffovea ( I ( n ) , l1 : t ) , l1 : t ) ] ( 6 ) with I ( 1 ) , . . . , I ( N ) ∼ rimg ( I|y1 : t−1 , l1 : t−1 ) . Overview of BOED pipeline We select lt with a grid search . That is , denoting the set of allowed values of lt as L , we compute our approximation of EPEy1 : t−1 , l1 : t−1 ( lt ) for all lt ∈ L. We then select the value of lt for which this is least . To do so , our full BOED pipeline is as follows . 1 . Sample I ( 1 ) . . . , I ( N ) ∼ rimg ( I|y1 : t−1 , l1 : t−1 ) . 2 . For each lt ∈ L , approximate the expected posterior entropy with Eq . ( 6 ) . 3 . Select the value of lt for which this approximation is least . Repeating these steps for t = 1 , . . . , T yields a near-optimal glimpse sequence l1 : T for image I . Figure 2 shows an example of this process . We must do this for all images in some subset of a dataset to be able to partially supervise hard attention training as described in Section 5 . We now describe the form of gAVP ( the attentional variational posterior ) and rimg ( stochastic image completion ) . Attentional variational posterior In this section we introduce our novel approach for efficiently approximating the intractable posterior p ( θ|y1 : t , l1 : t ) . We train a convolutional neural network ( CNN ) to map from a sequence of glimpses , y1 : t , and their locations , l1 : t , to gAVP ( θ|y1 : t , l1 : t ) , an approximation of this posterior . We call this the attentional variational posterior CNN ( AVP-CNN ) . To allow a single CNN to cope with varying y1 : t , l1 : t , and even varying t , we embed its input as shown in Fig . 3 . Essentially , l1 : t is used to create an image-sized mask which is 1 for observed pixels and 0 for unobserved pixels . Elementwise multiplication of this mask with the input image sets unobserved pixels to zero . 
The mask is then concatenated as an additional channel . This embedding naturally maintains spatial information while enforcing an invariance to permutations of the glimpse sequence . We use a Densenet-121 ( Huang et al. , 2017 ) CNN architecture ( pretrained on ImageNet ( Deng et al. , 2009 ) ) to map from this embedding to a vector of probabilities representing gAVP . We train the network to minimise the KL divergence between its output and p ( θ|y1 : t , l1 : t ) . That is , DKL ( p ( θ|y1 : t , l1 : t ) ||gAVP ( θ|y1 : t , l1 : t ) ) . To ensure that gAVP is close for all t , l1 : t and y1 : t , the loss used is an expectation of this KL divergence over p ( y1 : t|l1 : t ) u ( t , l1 : t ) . We factorise u ( t , l1 : t ) as u ( t ) ∏t i=1 u ( li ) where , so that all times and glimpse locations are weighted equally in the loss , u ( t ) is a uniform distribution over 1 , . . . , T and u ( li ) is a uniform distribution over all image locations . Denoting the network parameters λ , the gradient of this loss is ∂ ∂λ Lλ = Ep ( θ , y1 : t|l1 : t ) u ( t , l1 : t ) [ − ∂ ∂λ log gλAVP ( θ|y1 : t , l1 : t ) ] . ( 7 ) This gradient is the same as that of a cross-entropy loss on data sampled from p ( θ , y1 : t|l1 : t ) u ( t , l1 : t ) , and can be approximated by a Monte Carlo estimate . Our approximation of the EPE in Eq . ( 6 ) involves the entropy of gAVP . Since gAVP is a categorical distribution , this is simply computed analytically . This amortised approximation of the posterior entropy is inspired by Foster et al . ( 2019 ) , but has two important differences to their estimator : • Foster et al . learn a mapping from yt to g ( θ|y1 : t , l1 : t ) , sharing information between “ nearby ” samples of yt to reduce the computational cost of the experimental design . Our AVPCNN takes this amortization further by learning a single mapping from t , l1 : t and y1 : t to gAVP ( θ|y1 : t , l1 : t ) , which yields significant further efficiency gains in our setting . • Whereas we approximate H [ p ] with H [ gAVP ] = EgAVP [ − log gAVP ] , Foster et al . use Ep [ − log g ] . This provides an upper bound on H [ p ] but is not applicable in our case as we can not sample from p ( θ|y1 : t , l1 : t ) . Both approximations are exact when gAVP = p. Stochastic image completion We considered numerous ways to form rimg ( I|y1 : t−1 , l1 : t−1 ) including inpainting ( Pathak et al. , 2016 ; Isola et al. , 2017 ) and Markov chain Monte Carlo in a generative model . Future research in generative modelling may provide alternatives to this component of our method but , for now , we choose to represent rimg using a technique we developed based on image retrieval ( Jégou et al. , 2010 ) . Of the methods we considered , this gave the best trade-off between speed and sample quality . It involves creating an empirical image distribution with 1.5 million images for each experiment using GANs with publicly available pre-trained weights ( StyleGAN ( Karras et al. , 2018 ) for CelebA-HQ and FineGAN ( Singh et al. , 2019 ) for Caltech-UCSD Birds ) . We note that the use of pre-trained models makes test leakage possible but verify in Appendix C.4 that this is unlikely to impact our results . During sampling , the database is searched for images that ‘ match ’ the previous glimpses ( y1 : t−1 and l1 : t−1 ) . How well these glimpses match a database image , I ′ , is measured by the squared distance in pixel space at glimpse locations : ∑t−1 i=1 ‖yi − ffovea ( I ′ , li ) ‖ 2 2 . 
This distance defines a probability distribution over the images in the database . To reduce computation , we first compare approximations of the observed parts of each image using principal component analysis ( Jolliffe , 2011 ) , and compute exact distances only when these are close . The overall procedure to sample from rimg corresponds to importance sampling ( Arulampalam et al. , 2002 ) in a model where p ( yt|I , lt ) is relaxed from a Dirac-delta distribution to a Gaussian . See the appendix for details . | This paper presents a learning framework for a hard attention mechanism. The glimpses captured by the attention mechanism are guided by the goal of minimizing output uncertainty for a downstream task such as classification. The authors pose this problem in a probabilistic framework which is based on Bayesian optimal experimental design (BOED). They devise a tractable approximation to the entropy over images and glimpse sequences and search for the glimpse sequences which minimize the output entropy of a recurrent classification model. | SP:223bbaf9169ba486cbfbc0d8c35d662ea211c358 |
Bayesian Metric Learning for Robust Training of Deep Models under Noisy Labels | 1 INTRODUCTION . Deep learning has been shown as a dominant learning framework in various domains of machine learning and computer vision . One of the major limitations of deep learning is that it often requires relatively clean data sets that do not contain label noise naturally caused by human labeling errors , measurement errors , subjective biases and other issues ( Frénay et al. , 2014 ; Ghosh et al. , 2017 ; Algan & Ulusoy , 2019 ) . The performance of a machine learning method can be significantly affected by noisy labels both in terms of the reduction in the accuracy rate and the increase in sample complexity . Particularly for deep learning , a deep neural network ( DNN ) can generalize poorly when trained with noisy training sets which contain high proportion of noisy labels since a DNN can over-fit those noisy training data sets ( Zhang et al. , 2016 ; Algan & Ulusoy , 2020 ) . Developing deep learning methods that can perform well on noisy training data is essential since it can enable the use of deep models in many real-life applications . There have been several approaches proposed to handle learning issues caused by label noise , for example : data cleaning ( Angelova et al. , 2005 ; Chu et al. , 2016 ) , label correction ( Reed et al. , 2014 ) , additional linear correction layers ( Sukhbaatar et al. , 2014 ) , dimensionality-driven learning ( Ma et al. , 2018 ) , bootstrapping ( Reed et al. , 2014 ) , curriculum learning-model based approach such as MentorNet ( Jiang et al. , 2018 ) or CoTeaching ( Han et al. , 2018 ) , loss correction ( or noisetolerant loss ) ( Masnadi-Shirazi & Vasconcelos , 2009 ; Ghosh et al. , 2017 ; Zhang & Sabuncu , 2018 ; Thulasidasan et al. , 2019 ; Ma et al. , 2020 ) , or a combination of the techniques above ( Li et al. , 2020 ; Nguyen et al. , 2019 ) . Relevant to this paper is an existing theoretically sound approach : Bayesian large margin nearest neighbor classification ( BLMNN ) ( Wang & Tan , 2018 ) that employs Bayesian inference to improve the robustness of a point estimation-based linear metric learning method . BLMNN then introduces a method to approximate the posterior distribution of the underlying distance parameter given the triplet data by using the stochastic variational inference . More importantly , BLMNN ( Wang & Tan , 2018 ) also provides a theoretical guarantee about the robustness of the method , which says that it can work with non-uniform label noise . Although BLMNN has been mathematically shown to be robust against label noise , it only focuses on a simple linear Mahalanobis distance that can not capture the nonlinear relationships of data points in deep metric learning ( Lu et al. , 2017 ) . In this paper , we introduce a Bayesian deep metric learning framework that is robust against noisy labels . Our proposed method ( depicted in Fig . 1 ) is inspired by the BLMNN ( Wang & Tan , 2018 ) , deep metric learning ( Hoffer & Ailon , 2015 ; Hu et al. , 2015 ; Wang et al. , 2017 ; Lu et al. , 2017 ; Do et al. , 2019 ) , and Bayes by Backprop ( Blundell et al. , 2015 ) . Compared to the BLMNN that only considers a linear metric learning , our framework can handle non-linear deep metric learning , which is useful for many real-life applications . Moreover , directly applying the variational Bayes learning ( Wang & Tan , 2018 ) in deep learning is challenging since it requires sampling from a distribution of the neural network parameters . 
Instead , we adapt the variational inference by Blundell et al . ( Blundell et al. , 2015 ) , which allows to efficiently sample the parameters of a Bayes neural networks by using a backpropagation-compatible algorithm . We also theoretically show the robustness of our proposed method when working with label noise . The experimental results on several noisy data sets show that our novel proposed method can generalize better compared to the linear BLMNN ( Wang & Tan , 2018 ) and the point estimation-based deep metric learning ( Hoffer & Ailon , 2015 ; Lu et al. , 2017 ) , especially when the noise level increases . It is important to emphasize that the motivation of our method is to produce a better calibrated model that is more robust to noisy label training , and , as a result , less likely to overfit the training set than the linear BLMNN ( Wang & Tan , 2018 ) and the point estimation-based deep metric learning ( Hoffer & Ailon , 2015 ; Lu et al. , 2017 ) . Therefore this is a paper that introduces a new theoretical framework to solve noisy label learning instead of presenting method that is competitive against the best approaches of the field ( such as Mentornet ( Jiang et al. , 2018 ) or Co-teaching ( Han et al. , 2018 ) ) in large-scale datasets ( e.g. , webvision ( Li et al. , 2017 ) and Clothing 1M ( Xiao et al. , 2015 ) ) . Furthermore , deep metric learning has been in fact considered in the Bayesian settings before ( Ishfaq et al. , 2018 ; Karaletsos et al. , 2015 ) , and recently , in ( Lin et al. , 2018 ) , but not in the context of noisy labels . Consequently , our proposed framework can be used by other methods that can deal with noisy label learning , but the extension of those methods using our proposed approach is out of the scope of this paper . 2 RELATED WORK . 2.1 POINT ESTIMATION-BASED DISTANCE METRIC LEARNING . The goal of distance metric learning ( or metric learning ) is to learn a distance function to measure the similarity between training samples . Metric learning has been shown to have great success in many visual applications such as face recognition , image classification , visual search , visual tracking , and person re-identification ( Lu et al. , 2017 ) . In principle , a supervised metric learning method aims to learn a distance metric which pulls together samples from the same class while pushing away those from different classes . Based on the complexity of the distance , metric learning can be classified into two types : linear , focusing on linear distance ( e.g. , Mahalanobis ) , which often suffers from the nonlinear relationship of data points ( Lu et al. , 2017 ) ; and non-linear , which nowadays is mostly based on deep learning . Deep metric learning ( DML ) is motivated by the fact that deep learning is an effective solution to a non-linear transformation of input samples ( Lu et al. , 2017 ; Do et al. , 2019 ) . The key idea of DML is to explicitly learn a set of hierarchical non-linear transformations to map input data points into a feature space that is used for comparing or matching these data points in a more effective manner . DML unifies feature learning and metric learning into a joint learning framework . DML is shown to be more advantageous compared to traditional models , for example , with respect to classification performance since DML does not rely on the classification layer of a trained model that often depends on the type of problems ( Lu et al. , 2017 ; Do et al. , 2019 ) . 
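The linear case mentioned above reduces to a Mahalanobis distance, commonly parameterised through a learnable matrix L with M = L^T L so that M stays positive semi-definite; a deep metric replaces the single matrix with a learned non-linear embedding. A short sketch of the linear version (the parameterisation via L is a standard choice, not taken from this paper):

```python
import torch

def mahalanobis_sq(x, y, L):
    """Squared Mahalanobis distance (x - y)^T M (x - y) with M = L^T L (PSD by construction)."""
    diff = (x - y) @ L.t()
    return (diff ** 2).sum(dim=-1)

# A deep metric keeps the same squared-distance comparison but replaces the linear map L
# with a learned non-linear embedding network applied to x and y.
```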
Relevant to this paper is the triplet loss-based metric learning ( Lu et al. , 2017 ; Schroff et al. , 2015 ) , in which the original training data set is represented by a set of independent triplets , formed by an anchor sample , one sample of the same class and another sample from a different class . The training process is then performed by minimizing the triplet loss to simultaneously approximate the anchor to the sample with the same class and separate the anchor from the sample of different class . 2.2 ROBUST LINEAR DISTANCE METRIC LEARNING VIA BAYESIAN INFERENCE . One drawback of the point estimation DML is that it is likely to over-fit the noisy labels ( Wang & Tan , 2014 ; 2018 ) . Wang et al . ( 2016 ) introduces the Deep Stochastic Neighbor Compression ( DSNC ) method that aims to jointly learn a nonlinear transformation that preserves the neighborhood of the data , and a compressed version of the training data set ( Ahmed et al. ) . DSNC is also robust against label noise . Motivated by the fact that Bayesian learning is a good choice for robust learning ( Zhu et al. , 2014 ; Yang et al. , 2012 ) , Wang & Tan ( 2018 ) introduced the theoretically sound Bayesian large margin nearest neighbor classification ( BLMNN ) to improve the robustness of the linear metric learning under the presence of label noise . The BLMNN framework represents the large margin nearest neighbor classification using a linear metric learning in the form of a variational Bayes method that takes the prior distribution of the transformation matrix into account and estimate the posterior distribution via stochastic variational inference ( SVI ) . BLMNN provides mathematical definitions of the noisy label triplet ( a type of non-uniform label noise ) and the β-robust algorithm against noisy label triplets . More importantly , a theoretically guarantee of the robustness of BLMNN is also provided in ( Wang & Tan , 2018 ) . Although BLMNN efficiently addresses the training issue caused by the noisy label , one key limitation of BLMNN is that it is based on a linear metric transformation that can not capture the nonlinear relationships of data points in deep metric learning . However , the extension of BLMNN to the non-linear case is not straightforward due to the complexity of the posterior distribution estimation as well as the sampling process of high-dimensional parameters , and such extension is the main target of our paper . 2.3 BAYES BY BACKPROP . Bayesian neural networks ( MacKay , 1995 ; Neal , 2012 ; Gal & Ghahramani , 2015 ) aim to estimate the posterior distribution of the network parameters given the training data . However , that inference framework is often intractable , especially when working with high dimensional parameters ( Blundell et al. , 2015 ) . Moreover , exactly calculating the posterior of the weights is challenging since it requires the integration that is known to be computationally expensive . Blundell et al . ( Blundell et al. , 2015 ) introduced the “ backpropagation-compatible ” Bayes by Backprop method for estimating the posterior distribution of the network parameter . That method is inspired by the variational free energy inference in which the exact posterior is approximated by a variational distribution by solving an optimization problem . In principle , Bayes by Backprop can directly work on the network parameter by minimizing a compression cost function ( Blundell et al. , 2015 ) . 
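Bayes by Backprop keeps a factorised Gaussian over each weight and samples it with the reparameterisation w = μ + softplus(ρ)·ε, so ordinary backpropagation reaches μ and ρ. The sketch below shows one mean-field linear layer and its KL ("compression") term against a unit-Gaussian prior; the prior (the original paper uses a scale-mixture) and the initialisation constants are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    """Mean-field Gaussian linear layer trained with Bayes by Backprop."""
    def __init__(self, d_in, d_out, prior_sigma=1.0):
        super().__init__()
        self.w_mu = nn.Parameter(torch.randn(d_out, d_in) * 0.05)
        self.w_rho = nn.Parameter(torch.full((d_out, d_in), -4.0))   # sigma = softplus(rho)
        self.b_mu = nn.Parameter(torch.zeros(d_out))
        self.b_rho = nn.Parameter(torch.full((d_out,), -4.0))
        self.prior_sigma = prior_sigma

    def forward(self, x):
        w_sigma, b_sigma = F.softplus(self.w_rho), F.softplus(self.b_rho)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)          # reparameterised sample
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        self.kl = self._kl(self.w_mu, w_sigma) + self._kl(self.b_mu, b_sigma)
        return F.linear(x, w, b)

    def _kl(self, mu, sigma):
        # KL( N(mu, sigma^2) || N(0, prior_sigma^2) ), summed over the layer's weights
        return (torch.log(self.prior_sigma / sigma)
                + (sigma ** 2 + mu ** 2) / (2 * self.prior_sigma ** 2) - 0.5).sum()

# Per-minibatch objective (sketch): task_loss + kl_weight * sum of layer.kl over all layers.
```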
Our proposed method is inspired by ( Wang & Tan , 2018 ) , where we replace the linear Mahalanobis metric by a non-linear deep metric . Our novel proposed method therefore leverages the good performance of the deep metric learning ( Lu et al. , 2017 ) and the robustness to label noise of a Bayesian framework ( Yang et al. , 2012 ; Wang & Tan , 2018 ) . We first represent a triplet-based deep metric learning using Bayesian inference . By replacing the softmax loss by a triplet loss ( and so learning a metric ) , we impose more strict constraints , where points of the same class are forced to collapse into a small region of the feature space instead of just belonging to regions within the class boundaries . We argue that this more strict constraints has the potential to introduce more robust feature spaces for classification under noisy label training . To approximate the posterior of the network parameters , we then employ the efficient variational framework ( Blundell et al. , 2015 ) that allows to sample from a Bayesian neural network using the backpropagation framework . We theoretically show that our proposed method is robust against noisy triplet relying on a Bayesian inference framework . | This paper introduces a Bayesian deep metric learning framework that is robust against noise labels. The proposed method is inspired by the BLMNN (Wang & Tan, 2018), deep metric learning (Hoffer & Ailon, 2015; Hu et al., 2015; Wang et al., 2017; Lu et al., 2017; Do et al., 2019), and Bayes by Backprop (Blundell et al., 2015). Different from BLMNN that only considers a linear metric learning, the authors’ framework can handle non-linear deep metric learning, which is useful for many real-life applications. Moreover, directly applying the variational Bayes learning (Wang & Tan, 2018) in deep learning is challenging since it requires sampling from a distribution of the neural network parameters. Instead, The author adapt the variational inference by Blundell et al. (Blundell et al., 2015), which allows to efficiently sample the parameters of a Bayes neural networks by using a backpropagation-compatible algorithm. They also theoretically show the robustness of the proposed method when working with label noise. The experimental results on several noisy data sets show that their novel proposed method can generalize better compared to the linear BLMNN (Wang & Tan, 2018) and the point estimation-based deep metric learning (Hoffer & Ailon, 2015; Lu et al., 2017), especially when the noise level increases. In my opinion, the main novelty of this | SP:d26682cab15475af1eedf1431fb8596e311b965d |
Bayesian Metric Learning for Robust Training of Deep Models under Noisy Labels | 1 INTRODUCTION . Deep learning has been shown as a dominant learning framework in various domains of machine learning and computer vision . One of the major limitations of deep learning is that it often requires relatively clean data sets that do not contain label noise naturally caused by human labeling errors , measurement errors , subjective biases and other issues ( Frénay et al. , 2014 ; Ghosh et al. , 2017 ; Algan & Ulusoy , 2019 ) . The performance of a machine learning method can be significantly affected by noisy labels both in terms of the reduction in the accuracy rate and the increase in sample complexity . Particularly for deep learning , a deep neural network ( DNN ) can generalize poorly when trained with noisy training sets which contain high proportion of noisy labels since a DNN can over-fit those noisy training data sets ( Zhang et al. , 2016 ; Algan & Ulusoy , 2020 ) . Developing deep learning methods that can perform well on noisy training data is essential since it can enable the use of deep models in many real-life applications . There have been several approaches proposed to handle learning issues caused by label noise , for example : data cleaning ( Angelova et al. , 2005 ; Chu et al. , 2016 ) , label correction ( Reed et al. , 2014 ) , additional linear correction layers ( Sukhbaatar et al. , 2014 ) , dimensionality-driven learning ( Ma et al. , 2018 ) , bootstrapping ( Reed et al. , 2014 ) , curriculum learning-model based approach such as MentorNet ( Jiang et al. , 2018 ) or CoTeaching ( Han et al. , 2018 ) , loss correction ( or noisetolerant loss ) ( Masnadi-Shirazi & Vasconcelos , 2009 ; Ghosh et al. , 2017 ; Zhang & Sabuncu , 2018 ; Thulasidasan et al. , 2019 ; Ma et al. , 2020 ) , or a combination of the techniques above ( Li et al. , 2020 ; Nguyen et al. , 2019 ) . Relevant to this paper is an existing theoretically sound approach : Bayesian large margin nearest neighbor classification ( BLMNN ) ( Wang & Tan , 2018 ) that employs Bayesian inference to improve the robustness of a point estimation-based linear metric learning method . BLMNN then introduces a method to approximate the posterior distribution of the underlying distance parameter given the triplet data by using the stochastic variational inference . More importantly , BLMNN ( Wang & Tan , 2018 ) also provides a theoretical guarantee about the robustness of the method , which says that it can work with non-uniform label noise . Although BLMNN has been mathematically shown to be robust against label noise , it only focuses on a simple linear Mahalanobis distance that can not capture the nonlinear relationships of data points in deep metric learning ( Lu et al. , 2017 ) . In this paper , we introduce a Bayesian deep metric learning framework that is robust against noisy labels . Our proposed method ( depicted in Fig . 1 ) is inspired by the BLMNN ( Wang & Tan , 2018 ) , deep metric learning ( Hoffer & Ailon , 2015 ; Hu et al. , 2015 ; Wang et al. , 2017 ; Lu et al. , 2017 ; Do et al. , 2019 ) , and Bayes by Backprop ( Blundell et al. , 2015 ) . Compared to the BLMNN that only considers a linear metric learning , our framework can handle non-linear deep metric learning , which is useful for many real-life applications . Moreover , directly applying the variational Bayes learning ( Wang & Tan , 2018 ) in deep learning is challenging since it requires sampling from a distribution of the neural network parameters . 
Instead , we adapt the variational inference by Blundell et al . ( Blundell et al. , 2015 ) , which allows to efficiently sample the parameters of a Bayes neural networks by using a backpropagation-compatible algorithm . We also theoretically show the robustness of our proposed method when working with label noise . The experimental results on several noisy data sets show that our novel proposed method can generalize better compared to the linear BLMNN ( Wang & Tan , 2018 ) and the point estimation-based deep metric learning ( Hoffer & Ailon , 2015 ; Lu et al. , 2017 ) , especially when the noise level increases . It is important to emphasize that the motivation of our method is to produce a better calibrated model that is more robust to noisy label training , and , as a result , less likely to overfit the training set than the linear BLMNN ( Wang & Tan , 2018 ) and the point estimation-based deep metric learning ( Hoffer & Ailon , 2015 ; Lu et al. , 2017 ) . Therefore this is a paper that introduces a new theoretical framework to solve noisy label learning instead of presenting method that is competitive against the best approaches of the field ( such as Mentornet ( Jiang et al. , 2018 ) or Co-teaching ( Han et al. , 2018 ) ) in large-scale datasets ( e.g. , webvision ( Li et al. , 2017 ) and Clothing 1M ( Xiao et al. , 2015 ) ) . Furthermore , deep metric learning has been in fact considered in the Bayesian settings before ( Ishfaq et al. , 2018 ; Karaletsos et al. , 2015 ) , and recently , in ( Lin et al. , 2018 ) , but not in the context of noisy labels . Consequently , our proposed framework can be used by other methods that can deal with noisy label learning , but the extension of those methods using our proposed approach is out of the scope of this paper . 2 RELATED WORK . 2.1 POINT ESTIMATION-BASED DISTANCE METRIC LEARNING . The goal of distance metric learning ( or metric learning ) is to learn a distance function to measure the similarity between training samples . Metric learning has been shown to have great success in many visual applications such as face recognition , image classification , visual search , visual tracking , and person re-identification ( Lu et al. , 2017 ) . In principle , a supervised metric learning method aims to learn a distance metric which pulls together samples from the same class while pushing away those from different classes . Based on the complexity of the distance , metric learning can be classified into two types : linear , focusing on linear distance ( e.g. , Mahalanobis ) , which often suffers from the nonlinear relationship of data points ( Lu et al. , 2017 ) ; and non-linear , which nowadays is mostly based on deep learning . Deep metric learning ( DML ) is motivated by the fact that deep learning is an effective solution to a non-linear transformation of input samples ( Lu et al. , 2017 ; Do et al. , 2019 ) . The key idea of DML is to explicitly learn a set of hierarchical non-linear transformations to map input data points into a feature space that is used for comparing or matching these data points in a more effective manner . DML unifies feature learning and metric learning into a joint learning framework . DML is shown to be more advantageous compared to traditional models , for example , with respect to classification performance since DML does not rely on the classification layer of a trained model that often depends on the type of problems ( Lu et al. , 2017 ; Do et al. , 2019 ) . 
Relevant to this paper is the triplet loss-based metric learning ( Lu et al. , 2017 ; Schroff et al. , 2015 ) , in which the original training data set is represented by a set of independent triplets , formed by an anchor sample , one sample of the same class and another sample from a different class . The training process is then performed by minimizing the triplet loss to simultaneously approximate the anchor to the sample with the same class and separate the anchor from the sample of different class . 2.2 ROBUST LINEAR DISTANCE METRIC LEARNING VIA BAYESIAN INFERENCE . One drawback of the point estimation DML is that it is likely to over-fit the noisy labels ( Wang & Tan , 2014 ; 2018 ) . Wang et al . ( 2016 ) introduces the Deep Stochastic Neighbor Compression ( DSNC ) method that aims to jointly learn a nonlinear transformation that preserves the neighborhood of the data , and a compressed version of the training data set ( Ahmed et al. ) . DSNC is also robust against label noise . Motivated by the fact that Bayesian learning is a good choice for robust learning ( Zhu et al. , 2014 ; Yang et al. , 2012 ) , Wang & Tan ( 2018 ) introduced the theoretically sound Bayesian large margin nearest neighbor classification ( BLMNN ) to improve the robustness of the linear metric learning under the presence of label noise . The BLMNN framework represents the large margin nearest neighbor classification using a linear metric learning in the form of a variational Bayes method that takes the prior distribution of the transformation matrix into account and estimate the posterior distribution via stochastic variational inference ( SVI ) . BLMNN provides mathematical definitions of the noisy label triplet ( a type of non-uniform label noise ) and the β-robust algorithm against noisy label triplets . More importantly , a theoretically guarantee of the robustness of BLMNN is also provided in ( Wang & Tan , 2018 ) . Although BLMNN efficiently addresses the training issue caused by the noisy label , one key limitation of BLMNN is that it is based on a linear metric transformation that can not capture the nonlinear relationships of data points in deep metric learning . However , the extension of BLMNN to the non-linear case is not straightforward due to the complexity of the posterior distribution estimation as well as the sampling process of high-dimensional parameters , and such extension is the main target of our paper . 2.3 BAYES BY BACKPROP . Bayesian neural networks ( MacKay , 1995 ; Neal , 2012 ; Gal & Ghahramani , 2015 ) aim to estimate the posterior distribution of the network parameters given the training data . However , that inference framework is often intractable , especially when working with high dimensional parameters ( Blundell et al. , 2015 ) . Moreover , exactly calculating the posterior of the weights is challenging since it requires the integration that is known to be computationally expensive . Blundell et al . ( Blundell et al. , 2015 ) introduced the “ backpropagation-compatible ” Bayes by Backprop method for estimating the posterior distribution of the network parameter . That method is inspired by the variational free energy inference in which the exact posterior is approximated by a variational distribution by solving an optimization problem . In principle , Bayes by Backprop can directly work on the network parameter by minimizing a compression cost function ( Blundell et al. , 2015 ) . 
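To make the Bayes-by-Backprop idea referenced above more concrete, the rough sketch below (not Blundell et al.'s full algorithm) gives each weight a variational Gaussian posterior and samples it with the reparameterization trick, so that gradients of the variational free energy flow through the variational parameters:

import torch
import torch.nn.functional as F

mu = torch.zeros(20, 10, requires_grad=True)             # variational means
rho = torch.full((20, 10), -3.0, requires_grad=True)     # pre-softplus std devs

def sample_weights():
    sigma = F.softplus(rho)
    eps = torch.randn_like(sigma)
    return mu + sigma * eps                               # w = mu + sigma * eps

def kl_to_standard_normal():
    sigma = F.softplus(rho)
    # closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights
    return (0.5 * (sigma ** 2 + mu ** 2 - 1.0) - torch.log(sigma)).sum()

w = sample_weights()
x = torch.randn(4, 20)
logits = x @ w                                            # one stochastic linear layer
# total loss = task negative log-likelihood + the KL term ("compression cost")

The prior here is a standard normal for simplicity; the actual prior and posterior families are whatever the cited work specifies.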
Our proposed method is inspired by ( Wang & Tan , 2018 ) , where we replace the linear Mahalanobis metric with a non-linear deep metric . Our method therefore leverages the strong performance of deep metric learning ( Lu et al. , 2017 ) and the robustness to label noise of a Bayesian framework ( Yang et al. , 2012 ; Wang & Tan , 2018 ) . We first represent triplet-based deep metric learning using Bayesian inference . By replacing the softmax loss with a triplet loss ( and so learning a metric ) , we impose stricter constraints , where points of the same class are forced to collapse into a small region of the feature space instead of merely lying within the class boundaries . We argue that these stricter constraints have the potential to induce feature spaces that are more robust for classification under noisy-label training . To approximate the posterior of the network parameters , we then employ the efficient variational framework ( Blundell et al. , 2015 ) , which allows sampling from a Bayesian neural network using the backpropagation framework . We theoretically show that our proposed method is robust against noisy triplets by relying on a Bayesian inference framework . | This paper proposes a robust Bayesian deep metric learning framework against noisy labels, inspired by BLMNN (Wang & Tan, 2018), deep metric learning (Hoffer & Ailon, 2015; Hu et al., 2015; Wang et al., 2017; Lu et al., 2017; Do et al., 2019), and Bayes by Backprop (Blundell et al., 2015). Directly applying the variational Bayes learning of (Wang & Tan, 2018) in deep learning is challenging since it requires sampling from a distribution over the neural network parameters. Instead, this paper adapts the variational inference of Blundell et al. (Blundell et al., 2015), which allows the parameters of a Bayesian neural network to be sampled efficiently with a backpropagation-compatible algorithm. The experimental results on several noisy data sets show that the proposed method can generalize better compared to the linear BLMNN (Wang & Tan, 2018) and the point estimation-based deep metric learning (Hoffer & Ailon, 2015; Lu et al., 2017), especially when the noise level increases. | SP:d26682cab15475af1eedf1431fb8596e311b965d |
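A hedged sketch of the combination proposed in the paper above: embeddings come from a network whose weights are drawn from a variational posterior (as in the Bayes-by-Backprop snippet earlier), and a triplet margin loss replaces the softmax loss. The helper names, the stand-in embedding layer, and the margin value are illustrative, not the paper's settings.

import torch
import torch.nn.functional as F

def bayesian_triplet_loss(embed_fn, anchor, positive, negative, margin=1.0):
    # embed_fn is assumed to draw a fresh weight sample on every forward pass
    za, zp, zn = embed_fn(anchor), embed_fn(positive), embed_fn(negative)
    return F.triplet_margin_loss(za, zp, zn, margin=margin)

embed = torch.nn.Linear(16, 8)   # stand-in for a stochastic embedding network
a, p, n = torch.randn(5, 16), torch.randn(5, 16), torch.randn(5, 16)
loss = bayesian_triplet_loss(embed, a, p, n)
# training would minimize: triplet loss (averaged over weight samples)
#                          + KL between the weight posterior and the prior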
Monotonic Robust Policy Optimization with Model Discrepancy | 1 INTRODUCTION . With deep neural network approximation , deep reinforcement learning ( DRL ) has extended classical reinforcement learning ( RL ) algorithms to successfully solving complex control tasks , e.g. , playing computer games with human-level performance ( Mnih et al. , 2013 ; Silver et al. , 2018 ) and continuous robotic control ( Schulman et al. , 2017 ) . By random exploration , DRL often requires tremendous amounts of data to train a reliable policy . It is thus infeasible for many tasks , such as robotic control and autonomous driving , as training in the real world is not only time-consuming and expensive , but also dangerous . Therefore , training is often conducted on a very limited set of samples , resulting in overfitting and poor generalization capability . One alternative solution is to learn a policy in a simulator ( i.e. , source/training environment ) and then transfer it to the real world ( i.e. , target/testing environment ) . Currently , it is impossible to model the exact environment and physics of the real world . For instance , the physical effects like nonrigidity and fluid dynamics are quite difficult to be accurately modeled by simulation . How to mitigate the model discrepancy between the training and target environments remains challenging for the generalization in RL . To simulate the dynamics of the environment , domain randomization ( DR ) , a simple but effective method is proposed . It randomizes the simulator ( e.g. , by randomizing the distribution of environment parameters ) to generate a variety of environments for training the policy in the source domain . Compared with training in a single environment , recent researches have shown that policies learned through an ensemble of environment dynamics obtained by DR achieve better generalization performance with respect to the expected return . The expected return is referred to as the average per- formance across all the trajectories sampled from different environments . Since these trajectories , regardless of their performance , are uniformly sampled , the trajectories with the worst performance would severely degrade the overall performance . In contrast , another line of research on the generalization in RL is from the perspective of control theory , i.e. , learning policies that are robust to environment perturbations . Robust RL algorithms learn policies , also using model ensembles produced by perturbing the parameters of the nominal model . EPOpt ( Rajeswaran et al. , 2017 ) , a representative of them , trains policy solely on the worst performing subset , i.e. , trajectories with the worst α percentile of returns , while discarding all the higher performing trajectories . In other words , it seeks a higher worst-case performance at the cost of degradation on the average performance . In general , robust RL algorithms may sacrifice performance on many environment variants and focus only on environments with the worst performance , such that the policy learned will not behave very badly in a previously unseen environment . In this paper , we focus on the generalization issue in RL , and aim to mitigate the model discrepancy of the transition dynamics between the training and target environments . 
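To contrast the two strategies just discussed, the toy snippet below compares plain domain randomization, which averages over all sampled trajectories, with EPOpt-style training on the worst alpha-percentile of returns; the returns here are synthetic placeholders.

import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(loc=100.0, scale=30.0, size=256)   # toy per-trajectory returns

alpha = 0.1                                    # worst 10th percentile
threshold = np.percentile(returns, 100 * alpha)
worst_subset = returns[returns <= threshold]   # EPOpt trains only on these

uniform_objective = returns.mean()             # plain DR: average over all trajectories
worst_case_objective = worst_subset.mean()     # EPOpt: focus on the worst tail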
Considering that both the average and worst-case performance are equally important for evaluating the generalization capability of the policy , we propose a policy optimization approach in which the distribution of the sampled trajectories are specifically designed for concurrently improving both the average and worst-case performance . Our main contributions are summarized as follows . • For a given policy and a wide range of environments , we theoretically derive a lower bound for the worst-case expected return of that policy over all the environments , and prove that maximizing this lower bound ( equivalent to maximizing the worst-case performance ) can be achieved by solving an average performance maximization problem , subject to constraints that bound the update step in policy optimization and statistical distance between the worst and average case environments . To the best of our knowledge , this theoretical analysis of the relationship between the worst-case and average performance is reported for the first time , which provides a practical guidance for updating policies towards both the worst-case and average performance maximization . • Trajectories obtained from diverse environments may contribute differently to the generalization capacity of the policy . Therefore , in face of a huge amount of trajectories , the problem that which types of trajectories are likely to mostly affect the generalization performance should be considered . Unlike traditional uniform sampling without the worst-case performance guarantee , and different from the worst α percentile sampling in which the parameter α is empirically preset , we propose a criterion for the sampling trajectory selection based on the proposed worst-case and average performance maximization , with which both the environment diversity and the worst-case environments are taken into account . • Based on the proposed theorem , we develop a monotonic robust policy optimization ( MRPO ) algorithm to learn the optimal policy with both the maximum worst-case and average performance . Specifically , MRPO carries out a two-step optimization to update the policy and the distribution of the sampled trajectories , respectively . We further prove that the policy optimization problem can be transformed to trust region policy optimization ( TRPO ) ( Schulman et al. , 2015 ) on all possible environments , such that the policy update can be implemented by the commonly used proximal policy optimization ( PPO ) algorithm ( Schulman et al. , 2017 ) . Finally , we prove that by updating the policy with the MRPO , the worst-case expected return can be monotonically increased . • To greatly reduce the computational complexity , we impose Lipschitz continuity assumptions on the transition dynamics and propose a practical implementation of MRPO . We then conduct experiments on five robot control tasks with variable transition dynamics of environments , and show that MRPO can improve both the average and worst-case performance in the training environments compared to DR and Robust RL baselines , and significantly facilitate the learned policy with a better generalization capability in unseen testing environments . 2 BACKGROUND . Under the standard RL setting , the environment is modeled as a Markov decision process ( MDP ) defined by a tuple < S , A , T , R > . S is the state space and A is the action space . For the convenience of derivation , we assume they are finite . 
T : S × A × S → [ 0 , 1 ] is the transition dynamics determined by the environment parameter p ∈ P , where P denotes the environment pa- rameter space . For example in robot control , environment parameter could be physical coefficient that directly affect the control like friction of joints and torso mass . Throughout this paper , by environment p , we mean that an environment has the transition dynamics determined by parameter p. R : S × A → R is the reward function . At each time step t , the agent observes the state st ∈ S and takes an action at ∈ A guided by policy π ( at|st ) . Then , the agent will receive a reward rt = R ( st , at ) and the environment shifts from current state st to the next state st+1 with probability T ( st+1|st , at , p ) . The goal of RL is to search for a policy π that maximizes the expected cumulative discounted reward η ( π|p ) = Eτ [ G ( τ |p ) ] , G ( τ |p ) = ∑∞ t=0 γ trt . τ = { st , at , rt , st+1 } ∞t=0 denotes the trajectory generated by policy π in environment p and γ ∈ [ 0 , 1 ] is the discount factor . We can then define the state value function as Vπ ( s ) = E [ ∑∞ k=0 γ krt+k|st = s ] , the ac- tion value function as Qπ ( s , a ) = E [ ∑∞ k=0 γ krt+k|st = s , at = a ] , and the advantage function as Aπ ( s , a ) = Qπ ( s , a ) − Vπ ( s ) . We denote the state distribution under environment p and policy π as Pπ ( s|p ) and that at time step t as P tπ ( s|p ) . During the policy optimization in RL , by updating the current policy π to a new policy π̃ , Schulman et al . ( 2015 ) prove that η ( π̃|p ) ≥ Lπ ( π̃|p ) − 2λγ ( 1− γ ) 2 β2 , Lπ ( π̃|p ) = η ( π|p ) + Es∼Pπ ( ·|p ) , a∼π ( ·|s ) [ π̃ ( a|s ) π ( a|s ) Aπ ( s , a ) ] ( 1 ) where λ = maxs |Ea∼π ( a|s ) [ Aπ ( s , a ) ] | is the maximum mean advantage following current policy π and β = maxsDTV ( π ( ·|s ) ‖π̃ ( ·|s ) ) is the maximum total variation ( TV ) distance between π and π̃ . The policy ’ s expected return after updating can be monotonically improved by maximizing the lower bound in ( 1 ) w.r.t . π̃ . Based on this and with certain approximation , Schulman et al . ( 2015 ) then propose a algorithm named trust region policy optimization ( TRPO ) that optimizes π̃ towards the direction of maximizing Lπ ( π̃ ) , subject to the trust region constraint β ≤ δ . In standard RL , environment parameter p is fixed without any model discrepancy . While under the domain randomization ( DR ) settings , because of the existence of model discrepancy , environment parameter should actually be a random variable p following a probability distribution P over P . By introducing DR , the goal of policy optimization is to maximize the mean expected cumulative discounted reward over all possible environment parameters , i.e. , maxπ Ep∼P [ η ( π|p ) ] . In face of model discrepancy , our goal is to provide a performance improvement guarantee for the worst-case environments , and meanwhile to improve the average performance over all environments . Lemma 1 . Guided by a certain policy π , there exists a non-negative constant C ≥ 0 , such that the expected cumulative discounted reward in environment with the worst-case performance satisfies : η ( π|pw ) − Ep∼P [ η ( π|p ) ] ≥ −C , ( 2 ) where environment pw corresponds to the worst-case performance , and C is related to pw and π . Proof . See Appendix A.1 for details . Theorem 1 . 
In MDPs where reward function is bounded , for any distribution P overP , by updating the current policy π to a new policy π̃ , the following bound holds : η ( π̃|pw ) ≥ Ep∼P [ η ( π̃|p ) ] − 2|r|max γEp∼P [ ( pw‖p ) ] ( 1− γ ) 2 − 4|r|maxα ( 1− γ ) 2 , ( 3 ) where ( pw‖p ) , maxt Es′∼P t ( ·|pw ) Ea∼π ( ·|s′ ) DTV ( T ( s|s′ , a , pw ) ‖T ( s|s′ , a , p ) ) , environment pw corresponds to the worst-case performance under the current policy π , and α , maxt Es′∼P t ( ·|pw ) DTV ( π ( a|s′ ) ‖π̃ ( a|s′ ) ) . Proof . See Appendix A.2 for details , and Appendix A.7 for bounded reward function condition . In ( 3 ) , ( pw‖p ) specifies the model discrepancy between two environments pw and p in terms of the maximum expected TV distance of their transition dynamics of all time steps in trajectory sampled in environment pw using policy π , and α denotes the maximum expected TV distance of two policies along trajectory sampled in environment pw using policy π . In general , the RHS of ( 3 ) provides a lower-bound for the expected return achieved in the worst-case environment pw , where the first term denotes the mean expected cumulative discounted reward over all environments following the Algorithm 1 Monotonic Robust Policy Optimization 1 : Initialize policy π0 , uniform distribution of environment parameters U , number of environment parameters sampled per iteration M , maximum number of iterations N and maximum episode length T . 2 : for k = 0 to N − 1 do 3 : Sample a set of environment parameters { pi } M−1i=0 according to U . 4 : for i = 0 to M − 1 do 5 : Sample L trajectories { τi , j } L−1j=0 in environment pi using πk . 6 : Determine pkw = argminpi∈ { pi } M−1i=0 ∑L−1 j=0 G ( τi , j |pi ) /L . 7 : Compute Ê ( pi , πk ) = ∑L−1 j=0 G ( τi , j |pi ) L − 2|r|maxγ ( pi‖pkw ) ( 1−γ ) 2 for environment pi . 8 : end for 9 : Select the trajectory set T = { τi : Ê ( pi , πk ) ≥ Ê ( pkw , πk ) } . 10 : Use PPO for policy optimization on T to get the updated policy πk+1 . 11 : end for sampling distribution P , while the other two terms can be considered as penalization on a large TV distance between the worst-case environment pw and the average case , and a large update step from the current policy π to the new policy π̃ , respectively . Therefore , by maximizing this lower bound , we can improve the worst-case performance , which in practice is equivalent to the following constrained optimization problem with two constraints : max π̃ , P Ep∼P [ η ( π̃|p ) ] s.t . α ≤ δ1 , Ep∼P [ ( pw‖p ) ] ≤ δ2 . ( 4 ) The optimization objective is to maximize the mean expected cumulative discounted reward over all possible environments , by updating not only the policy π̃ , but the environment parameter ’ s sampling distribution P . The first constraint imposes a similar trust region to TRPO ( Schulman et al. , 2017 ) that constrains the update step in policy optimization . In addition , we further propose a new trust region constraint on the sampling distribution P that the TV distance between the worst-case environment pw and average case over P is bounded , such that by achieving the optimization objective in ( 4 ) , the worst-case performance is also improved . To solve the constrained optimization problem in ( 4 ) , we need to seek for the optimal policy π̃ and the distribution P of the sampled trajectories . In practice , we carry out a two-step optimization procedure to simplify the computational complexity . First , we fix the policy by letting π̃ = π , and optimize the objective in ( 4 ) w.r.t . 
the distribution P . In this case , we no longer need to consider the first constraint on the policy update , and thus can convert the second constraint on the sampling distribution into the objective with the guidance of Theorem 1 , formulating the following unconstrained optimization problem : max P Ep∼P [ E ( p , π̃ ) ] , ( 5 ) where we denote E ( p , π̃ ) ≜ η ( π̃|p ) − 2|r|max γ ( p‖pw ) / ( 1−γ ) ^2 . The first term in E ( p , π̃ ) indicates policy π̃ ’ s performance in environment p , while the second term measures the model discrepancy between environment p and pw . Since the objective function in ( 5 ) is linear in P , we can update P by assigning a higher probability to environments p with higher E ( p , π̃ ) . As a consequence , sampling according to E ( p , π̃ ) would increase the sampling probability of environments with both poor and good-enough performance , and avoid being trapped in the worst-case environment . Specifically , we propose to select samples from environments p that satisfy E ( p , π̃ ) ≥ E ( pw , π̃ ) for the training of policy π̃ , which is equivalent to assigning a zero probability to the other samples . In the second step , we aim to optimize the policy π with the updated distribution P fixed , i.e. , the following optimization problem : max π̃ Ep∼P [ η ( π̃|p ) ] s.t . α ≤ δ1 ( 6 ) The optimization in ( 6 ) can be transformed into a trust-region robust policy optimization similar to TRPO and solved practically with PPO ( refer to Appendix A.3 and Schulman et al . ( 2017 ) for more information ) . To summarize , we propose monotonic robust policy optimization ( MRPO ) in Algorithm 1 . At each iteration k , we uniformly sample M environments and run a trajectory for each sampled environment . For each environment pi , we sample L trajectories { τi , j } L−1 j=0 , approximate η ( πk|pi ) with ∑L−1 j=0 G ( τi , j |pi ) /L , and determine the worst-case environment pw based on ∑L−1 j=0 G ( τi , j |pi ) /L over the given set of environments { pi } M−1 i=0 . We then optimize the policy with PPO on the selected trajectory subset T according to E ( pi , πk ) and E ( pkw , πk ) . We now formally show that by maximizing the lower bound provided in Theorem 1 , the worst-case performance within all the environments can be monotonically improved by MRPO . Theorem 2 . The sequence of policies { π1 , π2 , . . . , πN } generated by Algorithm 1 is guaranteed to achieve monotonic worst-case performance improvement , i.e. , η ( π1|p1w ) ≤ η ( π2|p2w ) ≤ · · · ≤ η ( πN |pNw ) , ( 7 ) where pkw denotes the parameter of the environment with the worst-case performance under the current policy πk at iteration k. Proof . See Appendix A.4 for details . | This paper focuses on the generalization issue in reinforcement learning, and specifically aims to address the problems of the domain randomization (DR) technique. Different from standard DR, which treats all the sampled environments as equal, this paper proposes to improve the performance over all possible environments and the worst-case environment concurrently. It theoretically derives a lower bound for the worst-case performance of a given policy over all environments, and in practice the proposed method, monotonic robust policy optimization (MRPO), carries out a two-step optimization to improve the lower bound so as to maximize the average and worst-case policy performance. | SP:e8a08f3ad14ae96021ec69070a156d57811c88be |
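A simplified sketch of the environment-selection step in Algorithm 1 above. The discrepancy term between an environment and the worst-case environment is not computable from returns alone; here it is crudely proxied by the distance between environment parameters, in the spirit of the Lipschitz-based practical implementation mentioned in the paper — that proxy and the constants are assumptions of this sketch, not the paper's exact estimator.

import numpy as np

def select_environments(params, mean_returns, r_max, gamma, penalty_scale=1.0):
    worst = int(np.argmin(mean_returns))                    # worst-case environment p_w
    disc = penalty_scale * np.abs(params - params[worst])   # crude proxy for the discrepancy term
    e_hat = mean_returns - 2.0 * r_max * gamma * disc / (1.0 - gamma) ** 2
    keep = e_hat >= e_hat[worst]                            # keep environments with E_hat(p_i) >= E_hat(p_w)
    return keep

params = np.linspace(0.5, 1.5, 8)                           # e.g. toy friction coefficients
mean_returns = np.array([90., 60., 75., 40., 95., 55., 70., 85.])
mask = select_environments(params, mean_returns, r_max=1.0, gamma=0.99)
# trajectories from environments with mask == True are passed to the PPO update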
Monotonic Robust Policy Optimization with Model Discrepancy | 1 INTRODUCTION . With deep neural network approximation , deep reinforcement learning ( DRL ) has extended classical reinforcement learning ( RL ) algorithms to successfully solving complex control tasks , e.g. , playing computer games with human-level performance ( Mnih et al. , 2013 ; Silver et al. , 2018 ) and continuous robotic control ( Schulman et al. , 2017 ) . By random exploration , DRL often requires tremendous amounts of data to train a reliable policy . It is thus infeasible for many tasks , such as robotic control and autonomous driving , as training in the real world is not only time-consuming and expensive , but also dangerous . Therefore , training is often conducted on a very limited set of samples , resulting in overfitting and poor generalization capability . One alternative solution is to learn a policy in a simulator ( i.e. , source/training environment ) and then transfer it to the real world ( i.e. , target/testing environment ) . Currently , it is impossible to model the exact environment and physics of the real world . For instance , the physical effects like nonrigidity and fluid dynamics are quite difficult to be accurately modeled by simulation . How to mitigate the model discrepancy between the training and target environments remains challenging for the generalization in RL . To simulate the dynamics of the environment , domain randomization ( DR ) , a simple but effective method is proposed . It randomizes the simulator ( e.g. , by randomizing the distribution of environment parameters ) to generate a variety of environments for training the policy in the source domain . Compared with training in a single environment , recent researches have shown that policies learned through an ensemble of environment dynamics obtained by DR achieve better generalization performance with respect to the expected return . The expected return is referred to as the average per- formance across all the trajectories sampled from different environments . Since these trajectories , regardless of their performance , are uniformly sampled , the trajectories with the worst performance would severely degrade the overall performance . In contrast , another line of research on the generalization in RL is from the perspective of control theory , i.e. , learning policies that are robust to environment perturbations . Robust RL algorithms learn policies , also using model ensembles produced by perturbing the parameters of the nominal model . EPOpt ( Rajeswaran et al. , 2017 ) , a representative of them , trains policy solely on the worst performing subset , i.e. , trajectories with the worst α percentile of returns , while discarding all the higher performing trajectories . In other words , it seeks a higher worst-case performance at the cost of degradation on the average performance . In general , robust RL algorithms may sacrifice performance on many environment variants and focus only on environments with the worst performance , such that the policy learned will not behave very badly in a previously unseen environment . In this paper , we focus on the generalization issue in RL , and aim to mitigate the model discrepancy of the transition dynamics between the training and target environments . 
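The two performance measures discussed above can be made explicit with a toy evaluation loop: the average return over environments drawn by domain randomization versus the worst-case return that robust RL methods target. The "environment" below is a synthetic stand-in function, not any benchmark used in the paper.

import numpy as np

rng = np.random.default_rng(1)

def rollout_return(env_param):
    # stand-in for running the current policy in an environment with this
    # parameter (e.g. a friction coefficient) and summing discounted rewards
    return 100.0 - 40.0 * abs(env_param - 1.0) + rng.normal(scale=2.0)

env_params = rng.uniform(0.5, 1.5, size=32)        # domain randomization over parameters
returns = np.array([rollout_return(p) for p in env_params])

average_performance = returns.mean()               # what plain DR optimizes
worst_case_performance = returns.min()             # what robust RL safeguards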
Considering that both the average and worst-case performance are equally important for evaluating the generalization capability of the policy , we propose a policy optimization approach in which the distribution of the sampled trajectories are specifically designed for concurrently improving both the average and worst-case performance . Our main contributions are summarized as follows . • For a given policy and a wide range of environments , we theoretically derive a lower bound for the worst-case expected return of that policy over all the environments , and prove that maximizing this lower bound ( equivalent to maximizing the worst-case performance ) can be achieved by solving an average performance maximization problem , subject to constraints that bound the update step in policy optimization and statistical distance between the worst and average case environments . To the best of our knowledge , this theoretical analysis of the relationship between the worst-case and average performance is reported for the first time , which provides a practical guidance for updating policies towards both the worst-case and average performance maximization . • Trajectories obtained from diverse environments may contribute differently to the generalization capacity of the policy . Therefore , in face of a huge amount of trajectories , the problem that which types of trajectories are likely to mostly affect the generalization performance should be considered . Unlike traditional uniform sampling without the worst-case performance guarantee , and different from the worst α percentile sampling in which the parameter α is empirically preset , we propose a criterion for the sampling trajectory selection based on the proposed worst-case and average performance maximization , with which both the environment diversity and the worst-case environments are taken into account . • Based on the proposed theorem , we develop a monotonic robust policy optimization ( MRPO ) algorithm to learn the optimal policy with both the maximum worst-case and average performance . Specifically , MRPO carries out a two-step optimization to update the policy and the distribution of the sampled trajectories , respectively . We further prove that the policy optimization problem can be transformed to trust region policy optimization ( TRPO ) ( Schulman et al. , 2015 ) on all possible environments , such that the policy update can be implemented by the commonly used proximal policy optimization ( PPO ) algorithm ( Schulman et al. , 2017 ) . Finally , we prove that by updating the policy with the MRPO , the worst-case expected return can be monotonically increased . • To greatly reduce the computational complexity , we impose Lipschitz continuity assumptions on the transition dynamics and propose a practical implementation of MRPO . We then conduct experiments on five robot control tasks with variable transition dynamics of environments , and show that MRPO can improve both the average and worst-case performance in the training environments compared to DR and Robust RL baselines , and significantly facilitate the learned policy with a better generalization capability in unseen testing environments . 2 BACKGROUND . Under the standard RL setting , the environment is modeled as a Markov decision process ( MDP ) defined by a tuple < S , A , T , R > . S is the state space and A is the action space . For the convenience of derivation , we assume they are finite . 
T : S × A × S → [ 0 , 1 ] is the transition dynamics determined by the environment parameter p ∈ P , where P denotes the environment pa- rameter space . For example in robot control , environment parameter could be physical coefficient that directly affect the control like friction of joints and torso mass . Throughout this paper , by environment p , we mean that an environment has the transition dynamics determined by parameter p. R : S × A → R is the reward function . At each time step t , the agent observes the state st ∈ S and takes an action at ∈ A guided by policy π ( at|st ) . Then , the agent will receive a reward rt = R ( st , at ) and the environment shifts from current state st to the next state st+1 with probability T ( st+1|st , at , p ) . The goal of RL is to search for a policy π that maximizes the expected cumulative discounted reward η ( π|p ) = Eτ [ G ( τ |p ) ] , G ( τ |p ) = ∑∞ t=0 γ trt . τ = { st , at , rt , st+1 } ∞t=0 denotes the trajectory generated by policy π in environment p and γ ∈ [ 0 , 1 ] is the discount factor . We can then define the state value function as Vπ ( s ) = E [ ∑∞ k=0 γ krt+k|st = s ] , the ac- tion value function as Qπ ( s , a ) = E [ ∑∞ k=0 γ krt+k|st = s , at = a ] , and the advantage function as Aπ ( s , a ) = Qπ ( s , a ) − Vπ ( s ) . We denote the state distribution under environment p and policy π as Pπ ( s|p ) and that at time step t as P tπ ( s|p ) . During the policy optimization in RL , by updating the current policy π to a new policy π̃ , Schulman et al . ( 2015 ) prove that η ( π̃|p ) ≥ Lπ ( π̃|p ) − 2λγ ( 1− γ ) 2 β2 , Lπ ( π̃|p ) = η ( π|p ) + Es∼Pπ ( ·|p ) , a∼π ( ·|s ) [ π̃ ( a|s ) π ( a|s ) Aπ ( s , a ) ] ( 1 ) where λ = maxs |Ea∼π ( a|s ) [ Aπ ( s , a ) ] | is the maximum mean advantage following current policy π and β = maxsDTV ( π ( ·|s ) ‖π̃ ( ·|s ) ) is the maximum total variation ( TV ) distance between π and π̃ . The policy ’ s expected return after updating can be monotonically improved by maximizing the lower bound in ( 1 ) w.r.t . π̃ . Based on this and with certain approximation , Schulman et al . ( 2015 ) then propose a algorithm named trust region policy optimization ( TRPO ) that optimizes π̃ towards the direction of maximizing Lπ ( π̃ ) , subject to the trust region constraint β ≤ δ . In standard RL , environment parameter p is fixed without any model discrepancy . While under the domain randomization ( DR ) settings , because of the existence of model discrepancy , environment parameter should actually be a random variable p following a probability distribution P over P . By introducing DR , the goal of policy optimization is to maximize the mean expected cumulative discounted reward over all possible environment parameters , i.e. , maxπ Ep∼P [ η ( π|p ) ] . In face of model discrepancy , our goal is to provide a performance improvement guarantee for the worst-case environments , and meanwhile to improve the average performance over all environments . Lemma 1 . Guided by a certain policy π , there exists a non-negative constant C ≥ 0 , such that the expected cumulative discounted reward in environment with the worst-case performance satisfies : η ( π|pw ) − Ep∼P [ η ( π|p ) ] ≥ −C , ( 2 ) where environment pw corresponds to the worst-case performance , and C is related to pw and π . Proof . See Appendix A.1 for details . Theorem 1 . 
In MDPs where reward function is bounded , for any distribution P overP , by updating the current policy π to a new policy π̃ , the following bound holds : η ( π̃|pw ) ≥ Ep∼P [ η ( π̃|p ) ] − 2|r|max γEp∼P [ ( pw‖p ) ] ( 1− γ ) 2 − 4|r|maxα ( 1− γ ) 2 , ( 3 ) where ( pw‖p ) , maxt Es′∼P t ( ·|pw ) Ea∼π ( ·|s′ ) DTV ( T ( s|s′ , a , pw ) ‖T ( s|s′ , a , p ) ) , environment pw corresponds to the worst-case performance under the current policy π , and α , maxt Es′∼P t ( ·|pw ) DTV ( π ( a|s′ ) ‖π̃ ( a|s′ ) ) . Proof . See Appendix A.2 for details , and Appendix A.7 for bounded reward function condition . In ( 3 ) , ( pw‖p ) specifies the model discrepancy between two environments pw and p in terms of the maximum expected TV distance of their transition dynamics of all time steps in trajectory sampled in environment pw using policy π , and α denotes the maximum expected TV distance of two policies along trajectory sampled in environment pw using policy π . In general , the RHS of ( 3 ) provides a lower-bound for the expected return achieved in the worst-case environment pw , where the first term denotes the mean expected cumulative discounted reward over all environments following the Algorithm 1 Monotonic Robust Policy Optimization 1 : Initialize policy π0 , uniform distribution of environment parameters U , number of environment parameters sampled per iteration M , maximum number of iterations N and maximum episode length T . 2 : for k = 0 to N − 1 do 3 : Sample a set of environment parameters { pi } M−1i=0 according to U . 4 : for i = 0 to M − 1 do 5 : Sample L trajectories { τi , j } L−1j=0 in environment pi using πk . 6 : Determine pkw = argminpi∈ { pi } M−1i=0 ∑L−1 j=0 G ( τi , j |pi ) /L . 7 : Compute Ê ( pi , πk ) = ∑L−1 j=0 G ( τi , j |pi ) L − 2|r|maxγ ( pi‖pkw ) ( 1−γ ) 2 for environment pi . 8 : end for 9 : Select the trajectory set T = { τi : Ê ( pi , πk ) ≥ Ê ( pkw , πk ) } . 10 : Use PPO for policy optimization on T to get the updated policy πk+1 . 11 : end for sampling distribution P , while the other two terms can be considered as penalization on a large TV distance between the worst-case environment pw and the average case , and a large update step from the current policy π to the new policy π̃ , respectively . Therefore , by maximizing this lower bound , we can improve the worst-case performance , which in practice is equivalent to the following constrained optimization problem with two constraints : max π̃ , P Ep∼P [ η ( π̃|p ) ] s.t . α ≤ δ1 , Ep∼P [ ( pw‖p ) ] ≤ δ2 . ( 4 ) The optimization objective is to maximize the mean expected cumulative discounted reward over all possible environments , by updating not only the policy π̃ , but the environment parameter ’ s sampling distribution P . The first constraint imposes a similar trust region to TRPO ( Schulman et al. , 2017 ) that constrains the update step in policy optimization . In addition , we further propose a new trust region constraint on the sampling distribution P that the TV distance between the worst-case environment pw and average case over P is bounded , such that by achieving the optimization objective in ( 4 ) , the worst-case performance is also improved . To solve the constrained optimization problem in ( 4 ) , we need to seek for the optimal policy π̃ and the distribution P of the sampled trajectories . In practice , we carry out a two-step optimization procedure to simplify the computational complexity . First , we fix the policy by letting π̃ = π , and optimize the objective in ( 4 ) w.r.t . 
the distribution P . In this case , we no longer need to consider the first constraint on the policy update , and thus can convert the second constraint on the sampling distribution into the objective with the guidance of Theorem 1 , formulating the following unconstrained optimization problem : max P Ep∼P [ E ( p , π̃ ) ] , ( 5 ) where we denote E ( p , π̃ ) , η ( π̃|p ) − 2|r|maxγ ( p‖pw ) ( 1−γ ) 2 . The first term in E ( p , π̃ ) indicates policy π̃ ’ s performance in environment p , while the second term measures the model discrepancy between environment p and pw . Since the objective function in ( 5 ) is linear to P , we can update P by assigning a higher probability to environment p with higher E ( p , π̃ ) . As a consequence , sampling according to E ( p , π̃ ) would increase the sampling probability of environments with both poor and good-enough performance , and avoid being trapped in the worst-case environment . Specifically , we propose to select samples from environment p that meets E ( p , π̃ ) ≥ E ( pw , π̃ ) for the training of policy π̃ , which is equivalent to assigning a zero probability to the other samples . In the second step , we target at optimizing the policy π with the updated distribution P fixed , i.e. , the following optimization problem : max π̃ Ep∼P [ η ( π̃|p ) ] s.t . α ≤ δ1 ( 6 ) Optimization in ( 6 ) can be transformed to a trust region robust policy optimization similar to TRPO and solve it practically with PPO ( refer to Appendix A.3 and Schulman et al . ( 2017 ) for more information ) . To summarize , we propose a monotonic robust policy optimization ( MRPO ) in Algorithm1 . At each iteration k , we uniformly sample M environments and run a trajectory for each sampled environment . For each environment pi , we sampleL trajectories { τi , j } Lj=1 , approximate η ( πk|pi ) with∑L−1 j=0 G ( τi , j |pi ) /L , and determine the worst-case environment pw based on ∑L−1 j=0 G ( τi , j |pi ) /L of a given set of environments piM−1i=0 , We then optimize the policy with PPO on the selected trajectory subset T according to E ( pi , πk ) and E ( pkw , πk ) . We now formally show that by maximizing the lower bound provided in Theorem1 , the worst-case performance within all the environments can be monotonically improved by MRPO . Theorem 2 . The sequence of policy { π1 , π2 , . . . , πN } generated by Algorithm1 is guaranteed with the monotonic worst-case performance improvement , i.e. , η ( π1|p1w ) ≤ η ( π2|p2w ) ≤ · · · ≤ η ( πN |pNw ) , ( 7 ) where pkw denotes the parameter of environment with the worst-case performance guided by the current policy πk at iteration k. Proof . See Appendix A.4 for details . | This paper introduces Monotonic Robust Policy Optimization (MRPO), an RL algorithm that aims to jointly optimize policy and domain sampling distribution, with the goal of improving policy performance for both average and worst-case scenarios and addressing the model discrepancy between the training and target environments. They derive a lower bound for the worst-case performance, which comprises the average performance, policy change, and the statistical distance between the worst and average case environments. A TRPO-like monotonic performance improvement guarantee is provided for the worst-case expected return. 
Finally, a practical approximation to MRPO is proposed, which imposes a Lipschitz continuity assumption with respect to the environment parameters and circumvents the estimation of the total variation distance between the worst-case environment and the sampled environment. Experiments are conducted on three control tasks with diverse transition dynamics parameters, where MRPO could improve both average and worst-case performance in the training environments, and it shows better generalization to the unseen test environments than baseline algorithms. | SP:e8a08f3ad14ae96021ec69070a156d57811c88be |
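The policy update inside MRPO (step 10 of Algorithm 1 above) is carried out with PPO. For reference, a generic PPO clipped-surrogate loss over the selected trajectories looks like the following; this is standard PPO, not code released with the paper, and the inputs are placeholders.

import torch

def ppo_clip_loss(log_probs, old_log_probs, advantages, clip_eps=0.2):
    ratio = torch.exp(log_probs - old_log_probs)           # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()           # negative, so minimizing maximizes the surrogate

log_probs = torch.randn(64, requires_grad=True)
old_log_probs = log_probs.detach() + 0.05 * torch.randn(64)
advantages = torch.randn(64)
loss = ppo_clip_loss(log_probs, old_log_probs, advantages)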
Learning Private Representations with Focal Entropy | 1 INTRODUCTION . Lately , the topics of privacy and security are enjoying increased interest in the machine learning community . This can largely be attributed to the success of big data in conjunction with deep learning and the urge to create and process ever-larger data sets for mining . However , with the emergence of more and more machine learning services becoming part of our daily lives and making use of our data , special measures must be taken to protect privacy and decrease the risk of privacy creep Narayanan & Shmatikov ( 2006 ) ; Backstrom et al . ( 2007 ) . Simultaneously , growing privacy concerns entail the risk of becoming a major deterrent in the widespread adoption of machine learning and the attainment of their concomitant benefits . Therefore , reliable and accurate privacy-preserving methodologies are needed , which is why the topic lately has enjoyed increased attention in the research community . Several efforts have been made in machine learning to develop algorithms that preserve user privacy while achieving reasonable predictive power . Solutions proposed for privacy in the research community are versatile . A standard approach to address privacy issues in the client-server setup is to anonymize the data of clients . This is often achieved by directly obfuscating the private part ( s ) of the data and/or adding random noise to raw data . Consequently , the noise level controls the trade-off between predictive quality and user privacy ( e.g. , data-level Differential Privacy Dwork ( 2006 ) ) . These approaches associate a privacy budget with all operations on the dataset . However , complex training procedures run the risk of exhausting the budget before convergence . A recent solution to such a problem has been federated learning McMahan et al . ( 2016 ) ; Geyer et al . ( 2017 ) , which allows us to collaboratively train a centralized model while keeping the training data decentralized . The idea behind this strategy is that clients transfer the parameters of the training model in the form of gradient updates to a server instead of the data itself . While such an approach is appealing to train a network with data hosted on different clients , transferring the models between clients and server , and averaging the gradients across the clients generates significant data transmission and extra computations , which considerably prolongs training . Another widely adopted solution is to rely on encoded data representation . Following this notion , instead of transferring the client ’ s data , a feature representation is learned on the clients ’ side and transferred to the server . Unfortunately , the learned features may still contain rich information , which can breach user privacy Osia et al . ( 2017 ; 2018 ) . Also , the extracted features can be exploited by an attacker to infer private attributes Salem et al . ( 2019 ) . Yet , another approach is homomorphic encryption Armknecht et al . ( 2015 ) . Despite providing strong cryptographic guarantees , in theory , it incurs considerable computational overhead , which still prevents its applicability for SOTA deep learning architectures Srivastava et al . ( 2019 ) . The recent success of adversarial learning in making the representations fair Louizos et al . ( 2015 ) , unbiased Madras et al . ( 2018 ) , and controllably invariant to sensitive attributes Xie et al . 
( 2017 ) , has led to the increased adoption of Adversarial Representation Learning ( ARL ) to control the private information encapsulated within the representation Roy & Boddeti ( 2019 ) ; Sadeghi et al . ( 2019 ) . In the common ARL formalization of the privacy-preserving representation learning , a “ predictor ” seeks to extract the desired target attributes while an “ adversary ” seeks to reveal the private attributes . However , the solutions mentioned earlier can only meet its practical promises when the private attributes do not strongly correlate with the target attributes Roy & Boddeti ( 2019 ) . In this paper , we deal with adversarial privacy-preserving representation learning . In this setting , the sensitive and target attributes are related to each other ( e.g. , ’ Queen Elizabeth II. ’ and ’ wearing hat ’ , or ’ Mahatma Gandhi ’ and ’ wearing eyeglasses ’ ) to a large extent . The objective of this task is to learn a representation that contains all the information about non-sensitive attributes . At the same time , it omits to encode the sensitive attributes of them . Such representation can be transmitted to the server without concerns regarding the privacy revelation of classifiers having equal and higher capacity than the adversarial proxy used during training . For that , we adopt an ARL procedure and propose to learn a representation which maximizes the likelihood of the target information ( i.e. , attribute predictor ) while increasing the uncertainty about the class that each sample belongs to ( i.e. , class adversary ) . With that , we intuitively tie the privacy notion to the class-level information and sanitize the class-revealing information from the representation in a semantic-aware fashion . Specifically , we propose to learn the representation using the popular Variational Autoencoders ( VAE ) Kingma & Welling ( 2013 ) , where the latent representation is additionally decomposed into two latent factors : target and residual . Whereas the target part encodes the information for the target task , the residual part identifies and collects the data ’ s private part . In order to sanitize the target representation , we leverage an ARL procedure . There are two general strategies for ARL : the common solution for adversarial optimization is to maximize the loss of the adversary by minimizing the negative log-likelihood of sensitive variables . However , this is practically sub-optimal from the perspective of preventing information leakage . If the optimization does not reach the equilibrium , the resulting distribution associated with the minimum likelihood solution is subject to leaking the most amount of information . Another solution for adversarial optimization is to maximize the adversary ’ s entropy by enforcing a uniform distribution over the sensitive labels Roy & Boddeti ( 2019 ) ; Sarhan et al . ( 2020 ) . Such a solution provides no information to the adversary . However , it has the risk of weakening the encoder as it partially eliminates the adversary ’ s role in the representation learning phase and is provably bound to the adversary ’ s optimality . However , fulfilling the necessary optimality conditions impractical . Hence we seek to relax optimality by leveraging a quasi-optimal objective . To this end , we propose to maximize a variant of entropy - focal entropy - for dealing with inter-class uncertainty maximization . 
Focal entropy enforces the uncertainty to focus on a sparse set of similar classes and prevents the vast number of dissimilar classes from overwhelming the uncertainty . Maximization of focal entropy increases the uncertainty in a more organic , namely a systematic and semantic-aware , fashion . Hence , it leads to a deeper privacy sanitization during the representation learning phase . In summary , the main contributions of this paper are three-fold . First , we propose to learn privacy-preserving representations . Second , we introduce an ARL setting for this task by adding a novel entropy term to the VAE . Third , we demonstrate experimentally that our proposed method learns a semantically meaningful privacy-preserving sanitized representation . 2 RELATED WORKS . Much research has been conducted on differential privacy Dwork et al . ( 2017 ) ; Dwork ( 2006 ) ; Ryoo et al . ( 2017 ) ; Abadi et al . ( 2016 ) at the data and parameter level , by anonymizing raw data directly or incorporating a randomized mechanism into the learning process , respectively . Although successful , our method is fundamentally different from them , as we aim to learn a private representation instead of preserving privacy at the data or parameter level . While we do not consider their framework here , our method could employ differential privacy during the post-classifier training . The advantages of learning and transmitting representations instead of data have been investigated recently in many works , see Osia et al . ( 2017 ; 2018 ) , and references therein . Nevertheless , such a representation is proven to contain some privacy-revealing information of clients . [ Figure 1 : Schematic illustration of the proposed approach . Left : The graphical model associated with the minimax game . Right : Our proposed architecture with the two-stream network is based on VAE , and augmented with an additional predictor loss and ( focal ) entropy . ] The recent success of adversarial learning has led to the increased adoption of this technique for learning representations that preserve sensitive information in different types of data . For instance , Srivastava et al . ( 2019 ) proposed to learn privacy-preserving representations for automatic speech recognition ( ASR ) . In Yang et al . ( 2018 ) , a representation is learned on the raw student clickstream event data , captured as they watch lecture videos in massive open online courses . In Li et al . ( 2019 ) , the authors proposed an obfuscator designed to hide privacy-related sensitive information from the features using adversarial training . Similarly , Kim et al . ( 2019 ) is based on adversarial learning , which encodes images to obfuscate the patient identity while preserving enough information for a medical segmentation task . Pittaluga et al . ( 2019 ) considered a formulation based on adversarial optimization between the encoding function and estimators for private tasks . Although our method is also based on adversarial learning , we facilitate adversarial sanitization differently , using entropy . This leads to a more privacy-preserving representation while maintaining the method complexity . The most related work to ours is by Roy & Boddeti ( 2019 ) ; Sadeghi et al .
( 2019 ) , which aims at obtaining a sanitized representation using entropy and adversarial representation learning , respectively . Another related work to our paper proposed by Feutry et al . ( 2018 ) aims at learning representations that preserve the relevant part of the information while dismissing information about the private labels corresponding to the clients ’ identity . A key difference compared to this method is that they require labels for the downstream task during representation learning . Recently , Chen et al . ( 2018 ) proposed a complex method for privacy-preserving representation learning . Gabbay & Hoshen ( 2020 ) propose an approach for disentanglement using shared latent optimization and an asymmetric regularization . Edwards & Storkey ( 2016 ) propose sanitize representations utilizing an adversary . Liao et al . ( 2019a ; b ) employ an adversary to obtain a sanitized representation , however , also incorporating fairness constraints . Liu et al . ( 2018 ) proposed to use conventional entropy as an adversary in the context of a vanilla auto-encoder for privacy sanitization . To the best of our knowledge , our paper is the first work that proposes taking the class similarity into account for the adversary ’ s entropy . Furthermore , fairness as proposed in Creager et al . ( 2019 ) ; Locatello et al . ( 2019 ) ; Quadrianto et al . ( 2019 ) ; Sarhan et al . ( 2020 ) is yet another intimately connected notion to privacy . While we do not consider fairness here , our method could also be extended to that problem , leading to exciting future research . Finally , we note that our model is different from the federated learning methodology in McMahan et al . ( 2016 ) ; Geyer et al . ( 2017 ) , which focuses on learning a decentralized private model by sharing gradient updates instead of learning representations . | This paper gives a method in the class of learning representations which have some information censored. In particular, the authors propose a setup where there are many “private” classes, and some classes are more similar than others – this maps (I think) onto the privacy setting, where each class is like one individual. They give a modification of an entropy loss, focal entropy, which is conducive to this type of learning, and show in experiments that this method can be successful. | SP:bbaaeb718f346e866e91cf7e6f9278f0a2bfbab4 |
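The paper's formal definition of focal entropy is not reproduced in this excerpt; the snippet below only illustrates the stated intuition — concentrate the adversary's uncertainty on a small set of classes similar to the sample's own class, so that the many dissimilar classes do not dominate the entropy. The top-k selection and the single shared similarity vector are assumptions of this sketch, not the paper's construction.

import torch
import torch.nn.functional as F

def focal_style_entropy(logits, similarity_to_true_class, k=5):
    # keep only the k classes most similar to the sample's own class
    topk = torch.topk(similarity_to_true_class, k).indices
    probs = F.softmax(logits[..., topk], dim=-1)       # renormalize over those classes
    return -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()

logits = torch.randn(16, 100)        # adversary's class logits for a batch
similarity = torch.rand(100)         # e.g. class-prototype similarities (shared here for simplicity)
h_focal = focal_style_entropy(logits, similarity)
# the encoder would be trained to *maximize* h_focal while still predicting the target attributes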
Learning Private Representations with Focal Entropy | 1 INTRODUCTION . Lately , the topics of privacy and security are enjoying increased interest in the machine learning community . This can largely be attributed to the success of big data in conjunction with deep learning and the urge to create and process ever-larger data sets for mining . However , with the emergence of more and more machine learning services becoming part of our daily lives and making use of our data , special measures must be taken to protect privacy and decrease the risk of privacy creep Narayanan & Shmatikov ( 2006 ) ; Backstrom et al . ( 2007 ) . Simultaneously , growing privacy concerns entail the risk of becoming a major deterrent in the widespread adoption of machine learning and the attainment of their concomitant benefits . Therefore , reliable and accurate privacy-preserving methodologies are needed , which is why the topic lately has enjoyed increased attention in the research community . Several efforts have been made in machine learning to develop algorithms that preserve user privacy while achieving reasonable predictive power . Solutions proposed for privacy in the research community are versatile . A standard approach to address privacy issues in the client-server setup is to anonymize the data of clients . This is often achieved by directly obfuscating the private part ( s ) of the data and/or adding random noise to raw data . Consequently , the noise level controls the trade-off between predictive quality and user privacy ( e.g. , data-level Differential Privacy Dwork ( 2006 ) ) . These approaches associate a privacy budget with all operations on the dataset . However , complex training procedures run the risk of exhausting the budget before convergence . A recent solution to such a problem has been federated learning McMahan et al . ( 2016 ) ; Geyer et al . ( 2017 ) , which allows us to collaboratively train a centralized model while keeping the training data decentralized . The idea behind this strategy is that clients transfer the parameters of the training model in the form of gradient updates to a server instead of the data itself . While such an approach is appealing to train a network with data hosted on different clients , transferring the models between clients and server , and averaging the gradients across the clients generates significant data transmission and extra computations , which considerably prolongs training . Another widely adopted solution is to rely on encoded data representation . Following this notion , instead of transferring the client ’ s data , a feature representation is learned on the clients ’ side and transferred to the server . Unfortunately , the learned features may still contain rich information , which can breach user privacy Osia et al . ( 2017 ; 2018 ) . Also , the extracted features can be exploited by an attacker to infer private attributes Salem et al . ( 2019 ) . Yet , another approach is homomorphic encryption Armknecht et al . ( 2015 ) . Despite providing strong cryptographic guarantees , in theory , it incurs considerable computational overhead , which still prevents its applicability for SOTA deep learning architectures Srivastava et al . ( 2019 ) . The recent success of adversarial learning in making the representations fair Louizos et al . ( 2015 ) , unbiased Madras et al . ( 2018 ) , and controllably invariant to sensitive attributes Xie et al . 
( 2017 ) , has led to the increased adoption of Adversarial Representation Learning ( ARL ) to control the private information encapsulated within the representation Roy & Boddeti ( 2019 ) ; Sadeghi et al . ( 2019 ) . In the common ARL formalization of the privacy-preserving representation learning , a “ predictor ” seeks to extract the desired target attributes while an “ adversary ” seeks to reveal the private attributes . However , the solutions mentioned earlier can only meet its practical promises when the private attributes do not strongly correlate with the target attributes Roy & Boddeti ( 2019 ) . In this paper , we deal with adversarial privacy-preserving representation learning . In this setting , the sensitive and target attributes are related to each other ( e.g. , ’ Queen Elizabeth II. ’ and ’ wearing hat ’ , or ’ Mahatma Gandhi ’ and ’ wearing eyeglasses ’ ) to a large extent . The objective of this task is to learn a representation that contains all the information about non-sensitive attributes . At the same time , it omits to encode the sensitive attributes of them . Such representation can be transmitted to the server without concerns regarding the privacy revelation of classifiers having equal and higher capacity than the adversarial proxy used during training . For that , we adopt an ARL procedure and propose to learn a representation which maximizes the likelihood of the target information ( i.e. , attribute predictor ) while increasing the uncertainty about the class that each sample belongs to ( i.e. , class adversary ) . With that , we intuitively tie the privacy notion to the class-level information and sanitize the class-revealing information from the representation in a semantic-aware fashion . Specifically , we propose to learn the representation using the popular Variational Autoencoders ( VAE ) Kingma & Welling ( 2013 ) , where the latent representation is additionally decomposed into two latent factors : target and residual . Whereas the target part encodes the information for the target task , the residual part identifies and collects the data ’ s private part . In order to sanitize the target representation , we leverage an ARL procedure . There are two general strategies for ARL : the common solution for adversarial optimization is to maximize the loss of the adversary by minimizing the negative log-likelihood of sensitive variables . However , this is practically sub-optimal from the perspective of preventing information leakage . If the optimization does not reach the equilibrium , the resulting distribution associated with the minimum likelihood solution is subject to leaking the most amount of information . Another solution for adversarial optimization is to maximize the adversary ’ s entropy by enforcing a uniform distribution over the sensitive labels Roy & Boddeti ( 2019 ) ; Sarhan et al . ( 2020 ) . Such a solution provides no information to the adversary . However , it has the risk of weakening the encoder as it partially eliminates the adversary ’ s role in the representation learning phase and is provably bound to the adversary ’ s optimality . However , fulfilling the necessary optimality conditions impractical . Hence we seek to relax optimality by leveraging a quasi-optimal objective . To this end , we propose to maximize a variant of entropy - focal entropy - for dealing with inter-class uncertainty maximization . 
Focal entropy forces the uncertainty to focus on a sparse set of similar classes and prevents the vast number of dissimilar classes from overwhelming the uncertainty . Maximization of focal entropy increases the uncertainty in a more organic manner , namely in a systematic and semantic-aware fashion . Hence , it leads to a deeper privacy sanitization during the representation learning phase . In summary , the main contributions of this paper are three-fold . First , we propose to learn privacy-preserving representations . Second , we introduce an ARL setting for this task by adding a novel entropy term to the VAE . Third , we demonstrate experimentally that our proposed method learns a semantically meaningful privacy-preserving sanitized representation . 2 RELATED WORKS . Much research has been conducted on differential privacy Dwork et al . ( 2017 ) ; Dwork ( 2006 ) ; Ryoo et al . ( 2017 ) ; Abadi et al . ( 2016 ) at the data and parameter level , by anonymizing raw data directly or by incorporating a randomized mechanism into the learning process , respectively . Although successful , our method is fundamentally different from them , as we aim to learn a private representation instead of preserving privacy at the data or parameter level . While we do not consider their framework here , our method could employ differential privacy during the post-classifier training . The advantages of learning and transmitting representations instead of data have been investigated recently in many works , see Osia et al . ( 2017 ; 2018 ) , and references therein . Nevertheless , such a representation is proven to contain some privacy-revealing information of clients . [ Figure 1 : Schematic illustration of the proposed approach . Left : The graphical model associated with the minimax game . Right : Our proposed architecture with the two-stream network is based on a VAE , and augmented with an additional predictor loss and ( focal ) entropy . ] The recent success of adversarial learning has led to the increased adoption of this technique for learning representations that protect sensitive information in different types of data . For instance , Srivastava et al . ( 2019 ) proposed to learn privacy-preserving representations for automatic speech recognition ( ASR ) . In Yang et al . ( 2018 ) , a representation is learned on the raw student clickstream event data , captured as they watch lecture videos in massive open online courses . In Li et al . ( 2019 ) , the authors proposed an obfuscator designed to hide privacy-related sensitive information from the features using adversarial training . Similarly , Kim et al . ( 2019 ) is based on adversarial learning , which encodes images to obfuscate the patient identity while preserving enough information for a medical segmentation task . Pittaluga et al . ( 2019 ) considered a formulation based on adversarial optimization between the encoding function and estimators for private tasks . Although our method is also based on adversarial learning , we differently facilitate adversarial sanitization using entropy . This leads to a more privacy-preserving representation while maintaining the method complexity . The most related work to ours is by Roy & Boddeti ( 2019 ) ; Sadeghi et al .
( 2019 ) , which aims at obtaining a sanitized representation using entropy and adversarial representation learning , respectively . Another work related to ours , proposed by Feutry et al . ( 2018 ) , aims at learning representations that preserve the relevant part of the information while dismissing information about the private labels corresponding to the clients ’ identity . A key difference compared to this method is that they require labels for the downstream task during representation learning . Recently , Chen et al . ( 2018 ) proposed a complex method for privacy-preserving representation learning . Gabbay & Hoshen ( 2020 ) propose an approach for disentanglement using shared latent optimization and an asymmetric regularization . Edwards & Storkey ( 2016 ) propose to sanitize representations utilizing an adversary . Liao et al . ( 2019a ; b ) employ an adversary to obtain a sanitized representation , however , also incorporating fairness constraints . Liu et al . ( 2018 ) proposed to use conventional entropy as an adversary in the context of a vanilla auto-encoder for privacy sanitization . To the best of our knowledge , our paper is the first work that proposes taking the class similarity into account for the adversary ’ s entropy . Furthermore , fairness as proposed in Creager et al . ( 2019 ) ; Locatello et al . ( 2019 ) ; Quadrianto et al . ( 2019 ) ; Sarhan et al . ( 2020 ) is yet another notion intimately connected to privacy . While we do not consider fairness here , our method could also be extended to that problem , leading to exciting future research . Finally , we note that our model is different from the federated learning methodology in McMahan et al . ( 2016 ) ; Geyer et al . ( 2017 ) , which focuses on learning a decentralized private model by sharing gradient updates instead of learning representations . | Summary: The paper studies how to learn private representations that only capture the non-sensitive attributes of the dataset. They propose an adversarial representation learning method that employs VAEs. Specifically, the architecture in the VAE contains 6 players with 2 as adversarial classifiers. They introduce focal entropy as the objective function instead of entropy for adversarial classifiers to achieve deep sanitization. They empirically evaluate the method by reporting the target task accuracy and attribute inference accuracy on two datasets. | SP:bbaaeb718f346e866e91cf7e6f9278f0a2bfbab4
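As a rough structural sketch of the two-stream VAE described in the introduction above (and summarized in the Figure 1 caption), the module below splits the latent code into target and residual factors with separate predictor and adversary heads. Layer sizes, module names, and the head layout are illustrative assumptions, not the authors' architecture.

```python
# Hedged sketch of a two-stream VAE (target vs. residual latent factors) with the
# predictor/adversary heads indicated in Figure 1. Not the authors' code.
import torch
import torch.nn as nn

class TwoStreamVAE(nn.Module):
    def __init__(self, x_dim, z_dim, n_target, n_identity):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        # Each stream gets its own mean / log-variance head.
        self.mu_tgt, self.logvar_tgt = nn.Linear(256, z_dim), nn.Linear(256, z_dim)
        self.mu_res, self.logvar_res = nn.Linear(256, z_dim), nn.Linear(256, z_dim)
        self.dec = nn.Linear(2 * z_dim, x_dim)             # p(x | [z_tgt, z_res])
        self.target_head = nn.Linear(z_dim, n_target)      # p(t | z_tgt)
        self.identity_head = nn.Linear(z_dim, n_identity)  # p(s | z_res)
        self.adv_identity = nn.Linear(z_dim, n_identity)   # adversary: p(s | z_tgt)

    def reparam(self, mu, logvar):
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, x):
        h = self.enc(x)
        z_tgt = self.reparam(self.mu_tgt(h), self.logvar_tgt(h))
        z_res = self.reparam(self.mu_res(h), self.logvar_res(h))
        x_rec = self.dec(torch.cat([z_tgt, z_res], dim=-1))
        return x_rec, self.target_head(z_tgt), self.identity_head(z_res), self.adv_identity(z_tgt)
```

In this reading, the (focal) entropy term would be applied to the adversary head on the target stream, while the residual stream absorbs the identity-revealing information.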
Protecting DNNs from Theft using an Ensemble of Diverse Models | 1 INTRODUCTION . MS attacks allow an adversary with black-box access to the predictions of the target model to copy its functionality and create a high-accuracy clone model , posing a threat to the confidentiality of proprietary DNNs . Such attacks also open the door to a wide range of security vulnerabilities including adversarial attacks ( Goodfellow et al. , 2014 ) that cause misclassification , membership-inference attacks ( Shokri et al. , 2017 ) that leak membership , and model-inversion attacks ( Fredrikson et al. , 2015 ) that reveal the data used to train the model . MS is carried out using the principle of Knowledge distillation ( KD ) , wherein the adversary uses a dataset D to query the target model . The predictions of the target onD are then used to train a clone model that replicates the target model ’ s functionality . Since access to training data is limited in most real-world settings , attacks typically use some form of OOD data to perform KD . Clone models trained in this way closely approximate the decision boundaries of the target model , achieving high-accuracy on in-distribution examples . The goal of this paper is to defend against MS attacks by creating a target model that is inherently hard to steal using Knowledge Distillation with OOD data . Our key observation is the existing MS attacks ( Orekondy et al. , 2019a ; Papernot et al. , 2017 ; Juuti et al. , 2019 ) implicitly assume that the target model produces continuous predictions in the input space . We hypothesise that making the predictions of the target model discontinuous makes MS attacks harder to carry out . To this end , we propose Ensemble of Diverse Models ( EDM ) to defend against MS attacks . The models in EDM are trained using a novel diversity loss to produce dissimilar predictions on OOD data . Each input query to EDM is serviced by a single model that is selected from the ensemble using an input-based hashing function . We develop a DNN-based perceptual hashing algorithm for this purpose , which is invariant to simple transformations of the input and prevents adaptive attacks . Since different models in the ensemble are used to service different queries and the models are trained to produce dissimilar predictions , the adversary obtains predictions that are highly discontinuous in the input space . The clone model , when trained on these predictions , tries to approximate the complex discontinuous decision boundary of the EDM . Our empirical evaluations show that the resulting clone model generalizes poorly on in-distribution data , degrading clone accuracy and reducing the efficacy of MS attacks . In contrast to existing defenses that rely on perturbing output predictions , our defense does not require modifying the output and instead uses a diverse ensemble to produce discontinuous predictions that are inherently hard to steal . We illustrate the working of our defense by comparing the results of MS attacks on two target models– ( i ) Undefended baseline ( ii ) EDM – trained on a toy binary-classification problem shown in Fig . 1 . The training data and predictions of the target model under attack are shown in Fig . 1a . We use a set of OOD points as shown in Fig . 1b to query the target model and obtain its predictions . The predictions of the target model on the OOD data is finally used to train the clone model to replicate the functionality of the target as shown in Fig . 1c . 
For the undefended baseline target , which uses a simple linear model to produce continuous predictions , the clone model obtained by the attack closely approximates the decision boundary of the target model , achieving good classification accuracy on in-distribution data . The EDM target consists of two diverse models ( Model-A and Model-B ) that produce dissimilar predictions on the OOD data . EDM uses an input-based hash function to select the model that services the input query . As a result , the target model produces highly discontinuous decisions on OOD data ( Fig . 1a , b for EDM ) . The clone model trained on this data fails to capture any meaningful decision boundary ( Fig . 1c for EDM ) and produces poor accuracy on the classification task , making MS attacks harder to carry out . In summary , our paper makes the following key contributions : 1 . We propose a novel Diversity Loss function to train an Ensemble of Diverse Models ( EDM ) that produce dissimilar predictions on OOD data . 2 . We propose using EDM to defend against model stealing attacks . Our defense creates discontinuous predictions for the adversary ’ s OOD queries , making MS attacks harder to carry out , without causing degradation to the model ’ s accuracy on benign queries . 3 . We develop a DNN-based perceptual hash function , which produces the same hash value even with large changes to the input , making adaptive attacks harder to carry out . 2 RELATED WORK . Model Stealing Attacks : MS Attacks fall into one of three categories : ( i ) parameter stealing , ( ii ) hyper-parameter stealing and ( iii ) functionality stealing attacks . Parameter stealing ( Tramèr et al. , 2016 ; Lowd & Meek , 2005 ) and hyperparameter stealing attacks ( Wang & Gong , 2018 ; Oh et al. , 2019 ) focus on inferring the exact model parameters or hyperparameters used in the architecture/training of the target model respectively . Our work focuses on defending against functionality stealing attacks , where the adversary ’ s goal is to train a clone model that achieves high accuracy on a given task . Since the training data of the target is usually unavailable , attacks leverage alternate forms of data to query the target . Orekondy et al . ( 2019a ) ; Correia-Silva et al . ( 2018 ) use datasets from related problem domains , while other attacks ( Papernot et al. , 2017 ; Juuti et al. , 2019 ; Roberts et al. , 2019 ; Kariyappa et al. , 2020 ) have proposed crafting synthetic examples to query the target to perform model stealing . Model Stealing Defenses : Existing defenses that have been proposed to defend against model functionality stealing involve perturbing the output predictions to prevent the adversary from having reliable access to the predictions of the target model . Perturbation-based defenses modify the output of the model y = f ( x ) to produce a perturbed output y′ = y + δ . These defenses can be further sub-categorized into accuracy-preserving and accuracy-constrained defenses . Accuracy-Preserving Defenses ( Lee et al. , 2018 ) ensure that the class-labels of the perturbed predictions y′ are the same as the original prediction i.e . argmax ( y′ ) = argmax ( y ) . Since the class-labels are unchanged , the benign accuracy of the model is preserved . In contrast , accuracyconstrained defenses ( Kariyappa & Qureshi , 2020 ; Orekondy et al. , 2019b ) allow for the usage of a higher level of perturbation , trading off benign accuracy for increased security against MS attacks . 
This work presents the first defense that is not reliant on adding perturbations to the predictions of the model . Instead , we aim to create an entirely new class of ensemble-based models that is inherently hard to steal with a data-limited attack , due to the discontinuous nature of the predictions produced by the model . Our method has minimal impact on the benign accuracy and offers a much higher security against MS attacks compared to a single model . Additionally , our model can be used in conjunction with perturbation-based defenses to further improve robustness against MS attacks . 3 PRELIMINARIES . 3.1 ATTACK : OBJECTIVE AND CONSTRAINTS . The goal of the attacker is to maximize the accuracy of a clone model C on a test set D_test , i.e . max_{θ_C} E_{x , y ∼ D_test} [ Acc ( C ( x ; θ_C ) , y ) ] . MS attacks use the principle of knowledge distillation to train a clone model . First , the attacker queries the target model using inputs { x_i }_{i=1}^{n} to obtain the target ’ s predictions { y_i = T ( x_i ) } . The labeled dataset { x_i , y_i } created this way can then be used to train a clone model C , completing the attack . The predictions returned from T can either be soft-label predictions , where T returns a probability distribution over the output classes , or hard-label predictions , where only the argmax class-label is returned . We consider attacks under both prediction types in this paper . Since there is a fixed cost associated with querying T , we assume that the number of queries is upper bounded by a query budget Q . Data Constraints : In most real-world settings , the adversary does not have access to the dataset used to train the target model . A data-limited attacker can use alternate forms of data to carry out model stealing attacks . KnockoffNets ( Orekondy et al. , 2019a ) uses data from an alternate surrogate dataset , which has semantic similarities with the target model , and Jacobian-Based Dataset Augmentation ( Papernot et al. , 2017 ) uses synthetic datasets to query the target model . 3.2 DEFENSE : OBJECTIVE AND CONSTRAINTS . The goal of the defense is to minimize the accuracy of the clone model , i.e . min E_{x , y ∼ D_test} [ Acc ( C ( x ; θ_C ) , y ) ] . The defense is allowed to make changes to the target model or the model predictions . However , we require the defense to have minimal impact on the accuracy of the model for benign inputs to preserve the utility of the model . The defender is unaware of the kind of attack being used , or the dataset used by the adversary to query the target model . In addition to existing attacks , the defense also needs to be effective against adaptive attacks that are tailored for the defense . 4 OUR PROPOSAL : ENSEMBLE OF DIVERSE MODELS . This paper proposes a novel type of model ensemble called Ensemble of Diverse Models ( EDM ) , which is significantly more robust to MS attacks compared to a single DNN model . EDM consists of a set of diverse models , which produce dissimilar predictions for OOD data . By using an input-based hash function , EDM selects a single model from the ensemble to service each query . Since the models are trained to produce dissimilar predictions , EDM outputs predictions that are highly discontinuous in the input space for OOD data . The complexity of the discontinuous predictions causes the clone model trained on these predictions to generalize poorly on in-distribution examples , making MS attacks less effective . We explain how EDM models can be trained and deployed for inference in this section .
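The following is a hedged sketch of the two sides formalized in Sections 3.1 and 4 above: the attacker's knowledge-distillation cloning loop, and the defender's hash-based routing of each query to a single ensemble member. The helper `perceptual_hash`, the model objects, and the use of PyTorch are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code) of the attack/defense interaction:
# the attacker clones the target via knowledge distillation on query predictions,
# while EDM serves each query with one ensemble member chosen by an input hash.
import torch
import torch.nn.functional as F

def edm_predict(models, x, perceptual_hash):
    """Defense side: route the query to a single ensemble member via an input hash."""
    idx = perceptual_hash(x) % len(models)
    return models[idx](x)

def clone_via_kd(target_predict, clone, optimizer, query_loader, epochs=1):
    """Attacker side: label (possibly OOD) queries with the target's soft outputs
    and fit the clone on them, within the query budget implied by `query_loader`."""
    for _ in range(epochs):
        for x in query_loader:
            with torch.no_grad():
                y_soft = target_predict(x)                  # target's soft predictions (probabilities)
            loss = F.kl_div(F.log_softmax(clone(x), dim=-1),
                            y_soft, reduction="batchmean")  # distill onto the clone
            optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Because different queries land on different (deliberately disagreeing) ensemble members, the labels collected by `clone_via_kd` trace a discontinuous function, which is the property the defense exploits.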
4.1 TRAINING A DIVERSE ENSEMBLE EDM leverages an ensemble of diverse models to produce discontinuous predictions . Let { f_i }_{i=1}^{N} denote the ensemble of models used in EDM with model parameters { θ_i }_{i=1}^{N} . These models are jointly trained with a two-fold training objective using an in-distribution dataset of labeled examples D_in and an out-of-distribution dataset of unlabeled examples D_out as explained below : 1 . Accuracy objective : Models in the ensemble should be trained to produce high accuracy for the examples in the training dataset of in-distribution examples D_in as shown in Fig . 2a . For a multi-class classification problem , let { ŷ_i = f_i ( x ; θ_i ) }_{i=1}^{N} be the prediction probabilities of the models , where ( x , y ) ∈ D_in is the input-label pair . We use the cross-entropy loss averaged across all models , (1/N) ∑_{i=1}^{N} L_CE ( ŷ_i , y ) , to train the models in the ensemble to achieve high accuracy on the in-distribution training data . 2 . Diversity objective : The diversity objective requires the models in the ensemble to produce dissimilar predictions for examples in the out-of-distribution dataset D_out as depicted in Fig . 2b . If { ỹ_i = f_i ( x̃ ; θ_i ) }_{i=1}^{N} are the output probabilities produced by the ensemble for an OOD input x̃ ∈ D_out , then the diversity objective requires the set of output vectors { ỹ_i }_{i=1}^{N} to be misaligned . We use the coherence metric ( Tropp , 2006 ) to measure the alignment of the output probability vectors . Coherence of a set of vectors measures the maximum cosine similarity ( CS ) , i.e . the cosine of the smallest angle , between all pairs of probability vectors in the set as shown in Fig . 3 . Coherence can be computed as follows : coherence ( { ỹ_i }_{i=1}^{N} ) = max_{a , b ∈ { 1 , ... , N } , a ≠ b} CS ( ỹ_a , ỹ_b ) . ( 1 ) In order to increase the misalignment of probability vectors , we need to reduce the value of coherence . Unfortunately , the max function in Eqn . 1 makes coherence non-smooth , which precludes the use of first-order methods to optimize the value of coherence . To overcome this limitation , we use the LogSumExp function to get a smooth approximation of the max function that allows it to be used in first-order optimization algorithms . We term the resulting loss function the diversity loss : DivLoss ( { ỹ_i }_{i=1}^{N} ) = log ( ∑_{1 ≤ a < b ≤ N} exp ( CS ( ỹ_a , ỹ_b ) ) ) . ( 2 ) Models in the ensemble are jointly trained on the combination of accuracy and diversity objectives : L = E_{x , y ∼ D_in , x̃ ∼ D_out} [ ( (1/N) ∑_{i=1}^{N} L_CE ( ŷ_i , y ) ) + λ_D · DivLoss ( { ỹ_i }_{i=1}^{N} ) ] , ( 3 ) where ŷ_i = f_i ( x ) , ỹ_i = f_i ( x̃ ) . Here λ_D is a hyperparameter that dictates the ratio of importance between the diversity and accuracy objectives . Note that the loss function described above requires access to a dataset of labeled in-distribution examples D_in as well as a dataset of out-of-distribution examples D_out . Since we do not know the OOD samples that are used by the adversary a priori , we make use of an auxiliary OOD dataset to train EDM . Similar use of auxiliary OOD datasets has been explored in the anomaly detection literature ( Hendrycks et al. , 2018 ) to detect unseen anomalies . Our experiments in Section 5.3 suggest that the diversity objective generalizes to the unseen OOD dataset used by the adversary . | The paper proposes a method to protect deep neural networks against model stealing. The proposed defense trains an ensemble of classifiers using two losses, one targeting accuracy and the other the diversity of the ensemble.
In particular, the trained classifiers are consistent on in-distribution data, but contradict each other on out-of-distribution data. The proposed defense does not affect the test accuracy of the victim model while it strongly limits the test accuracy of the clone model. | SP:fcd72bc92c431b2f991d9e765dbdba684cada4e7 |
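For concreteness, here is a minimal PyTorch reading of the EDM training objective in Eqns . ( 1 ) - ( 3 ) of Section 4.1 above. It is an illustrative sketch, not the authors' released code, and it assumes each model returns class logits.

```python
# Hedged sketch of the EDM objective (Eqns. 1-3): average cross-entropy on
# in-distribution data plus a LogSumExp over pairwise cosine similarities of the
# ensemble's output probabilities on OOD data.
import torch
import torch.nn.functional as F

def diversity_loss(probs):                     # probs: list of N tensors, each (batch, classes)
    """Eqn. (2): smooth surrogate of coherence via LogSumExp over pairwise
    cosine similarities of the OOD output probability vectors."""
    sims = []
    for a in range(len(probs)):
        for b in range(a + 1, len(probs)):
            sims.append(F.cosine_similarity(probs[a], probs[b], dim=-1))  # (batch,)
    return torch.logsumexp(torch.stack(sims, dim=0), dim=0).mean()

def edm_loss(models, x, y, x_ood, lambda_d):
    """Eqn. (3): accuracy term on D_in plus lambda_d times the diversity term on D_out."""
    ce = torch.stack([F.cross_entropy(f(x), y) for f in models]).mean()
    ood_probs = [F.softmax(f(x_ood), dim=-1) for f in models]
    return ce + lambda_d * diversity_loss(ood_probs)
```

Averaging the per-example diversity term over the batch is one design choice among several; the key point matching the text is that lowering the LogSumExp term drives the pairwise cosine similarities, and hence the coherence, down.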
Protecting DNNs from Theft using an Ensemble of Diverse Models | 1 INTRODUCTION . MS attacks allow an adversary with black-box access to the predictions of the target model to copy its functionality and create a high-accuracy clone model , posing a threat to the confidentiality of proprietary DNNs . Such attacks also open the door to a wide range of security vulnerabilities including adversarial attacks ( Goodfellow et al. , 2014 ) that cause misclassification , membership-inference attacks ( Shokri et al. , 2017 ) that leak membership , and model-inversion attacks ( Fredrikson et al. , 2015 ) that reveal the data used to train the model . MS is carried out using the principle of Knowledge distillation ( KD ) , wherein the adversary uses a dataset D to query the target model . The predictions of the target onD are then used to train a clone model that replicates the target model ’ s functionality . Since access to training data is limited in most real-world settings , attacks typically use some form of OOD data to perform KD . Clone models trained in this way closely approximate the decision boundaries of the target model , achieving high-accuracy on in-distribution examples . The goal of this paper is to defend against MS attacks by creating a target model that is inherently hard to steal using Knowledge Distillation with OOD data . Our key observation is the existing MS attacks ( Orekondy et al. , 2019a ; Papernot et al. , 2017 ; Juuti et al. , 2019 ) implicitly assume that the target model produces continuous predictions in the input space . We hypothesise that making the predictions of the target model discontinuous makes MS attacks harder to carry out . To this end , we propose Ensemble of Diverse Models ( EDM ) to defend against MS attacks . The models in EDM are trained using a novel diversity loss to produce dissimilar predictions on OOD data . Each input query to EDM is serviced by a single model that is selected from the ensemble using an input-based hashing function . We develop a DNN-based perceptual hashing algorithm for this purpose , which is invariant to simple transformations of the input and prevents adaptive attacks . Since different models in the ensemble are used to service different queries and the models are trained to produce dissimilar predictions , the adversary obtains predictions that are highly discontinuous in the input space . The clone model , when trained on these predictions , tries to approximate the complex discontinuous decision boundary of the EDM . Our empirical evaluations show that the resulting clone model generalizes poorly on in-distribution data , degrading clone accuracy and reducing the efficacy of MS attacks . In contrast to existing defenses that rely on perturbing output predictions , our defense does not require modifying the output and instead uses a diverse ensemble to produce discontinuous predictions that are inherently hard to steal . We illustrate the working of our defense by comparing the results of MS attacks on two target models– ( i ) Undefended baseline ( ii ) EDM – trained on a toy binary-classification problem shown in Fig . 1 . The training data and predictions of the target model under attack are shown in Fig . 1a . We use a set of OOD points as shown in Fig . 1b to query the target model and obtain its predictions . The predictions of the target model on the OOD data is finally used to train the clone model to replicate the functionality of the target as shown in Fig . 1c . 
For the undefended baseline target , which uses a simple linear model to produce continuous predictions , the clone model obtained by the attack closely approximates the decision boundary of the target model , achieving good classification accuracy on in-distribution data . The EDM target consists of two diverse models ( Model-A and Model-B ) that produce dissimilar predictions on the OOD data . EDM uses an input-based hash function to select the model that services the input query . As a result , the target model produces highly discontinuous decisions on OOD data ( Fig . 1a , b for EDM ) . The clone model trained on this data fails to capture any meaningful decision boundary ( Fig . 1c for EDM ) and produces poor accuracy on the classification task , making MS attacks harder to carry out . In summary , our paper makes the following key contributions : 1 . We propose a novel Diversity Loss function to train an Ensemble of Diverse Models ( EDM ) that produce dissimilar predictions on OOD data . 2 . We propose using EDM to defend against model stealing attacks . Our defense creates discontinuous predictions for the adversary ’ s OOD queries , making MS attacks harder to carry out , without causing degradation to the model ’ s accuracy on benign queries . 3 . We develop a DNN-based perceptual hash function , which produces the same hash value even with large changes to the input , making adaptive attacks harder to carry out . 2 RELATED WORK . Model Stealing Attacks : MS Attacks fall into one of three categories : ( i ) parameter stealing , ( ii ) hyper-parameter stealing and ( iii ) functionality stealing attacks . Parameter stealing ( Tramèr et al. , 2016 ; Lowd & Meek , 2005 ) and hyperparameter stealing attacks ( Wang & Gong , 2018 ; Oh et al. , 2019 ) focus on inferring the exact model parameters or hyperparameters used in the architecture/training of the target model respectively . Our work focuses on defending against functionality stealing attacks , where the adversary ’ s goal is to train a clone model that achieves high accuracy on a given task . Since the training data of the target is usually unavailable , attacks leverage alternate forms of data to query the target . Orekondy et al . ( 2019a ) ; Correia-Silva et al . ( 2018 ) use datasets from related problem domains , while other attacks ( Papernot et al. , 2017 ; Juuti et al. , 2019 ; Roberts et al. , 2019 ; Kariyappa et al. , 2020 ) have proposed crafting synthetic examples to query the target to perform model stealing . Model Stealing Defenses : Existing defenses that have been proposed to defend against model functionality stealing involve perturbing the output predictions to prevent the adversary from having reliable access to the predictions of the target model . Perturbation-based defenses modify the output of the model y = f ( x ) to produce a perturbed output y′ = y + δ . These defenses can be further sub-categorized into accuracy-preserving and accuracy-constrained defenses . Accuracy-Preserving Defenses ( Lee et al. , 2018 ) ensure that the class-labels of the perturbed predictions y′ are the same as the original prediction i.e . argmax ( y′ ) = argmax ( y ) . Since the class-labels are unchanged , the benign accuracy of the model is preserved . In contrast , accuracyconstrained defenses ( Kariyappa & Qureshi , 2020 ; Orekondy et al. , 2019b ) allow for the usage of a higher level of perturbation , trading off benign accuracy for increased security against MS attacks . 
This work presents the first defense that is not reliant on adding perturbations to the predictions of the model . Instead , we aim to create an entirely new class of ensemble-based models that is inherently hard to steal with a data-limited attack , due to the discontinuous nature of the predictions produced by the model . Our method has minimal impact on the benign accuracy and offers a much higher security against MS attacks compared to a single model . Additionally , our model can be used in conjunction with perturbation-based defenses to further improve robustness against MS attacks . 3 PRELIMINARIES . 3.1 ATTACK : OBJECTIVE AND CONSTRAINTS . The goal of the attacker is to maximize the accuracy of a clone model C on a test set Dtest i.e . maxθC Ex , y∼Dtest [ Acc ( C ( x ; θC ) , y ) ] . MS attacks use the principle of knowledge distillation to train a clone model . First , the attacker queries the target model using inputs { xi } ni=1 to obtain the target ’ s predictions { yi = T ( xi ) } . The labeled dataset { xi , yi } created this way can then be used to train a clone model C , completing the attack . The predictions returned from T can either be soft-label prediction , where T returns a probability distribution over the output classes , or hardlabel predictions , where only the argmax class-label is returned . We consider attacks under both prediction types in this paper . Since there is a fixed cost associated with querying T , we assume that the number of queries is upper bounded by a query budget Q . Data Constraints : In most real-world settings , the adversary does not have access to the dataset used to train the target model . A data-limited attacker can use alternate forms of data to carry out model stealing attacks . KnockoffNets ( Orekondy et al. , 2019a ) uses data from an alternate surrogate dataset , which has semantic similarities with the target model and Jacobian-Based Dataset Augmentation ( Papernot et al. , 2017 ) uses synthetic datasets to query the target model . 3.2 DEFENSE : OBJECTIVE AND CONSTRAINTS . The goal of the defense is to minimize the accuracy of the clone model i.e . minEx , y∼Dtest [ Acc ( C ( x ; θC ) , y ) ] . The defense is allowed to make changes to the target model or the model predictions . However , we require the defense to have minimal impact on the accuracy of the model for benign inputs to preserve the utility of the model . The defender is unaware of the kind of attack being used , or the dataset used by the adversary to query the target model . In addition to existing attacks , the defense also needs to be effective against adaptive attacks that are tailored for the defense . 4 OUR PROPOSAL : ENSEMBLE OF DIVERSE MODELS . This paper proposes a novel type of model ensemble called Ensemble of Diverse Models ( EDM ) , which is significantly more robust to MS attacks compared to a single DNN model . EDM consists of a set of diverse models , which produce dissimilar predictions for OOD data . By using an inputbased hash function , EDM selects a single model from the ensemble to service each query . Since the models are trained to produce dissimilar predictions , EDM outputs predictions that are highly discontinuous in the input space for OOD data . The complexity of the discontinuous predictions cause the clone model trained on these predictions to generalize poorly on in-distribution examples , making MS attacks less effective . We explain how EDM models can be trained and deployed for inference in this section . 
4.1 TRAINING A DIVERSE ENSEMBLE EDM leverages an ensemble of diverse models to produce discontinuous predictions . Let { f_i }_{i=1}^{N} denote the ensemble of models used in EDM with model parameters { θ_i }_{i=1}^{N} . These models are jointly trained with a two-fold training objective using an in-distribution dataset of labeled examples D_in and an out-of-distribution dataset of unlabeled examples D_out as explained below : 1 . Accuracy objective : Models in the ensemble should be trained to produce high accuracy for the examples in the training dataset of in-distribution examples D_in as shown in Fig . 2a . For a multi-class classification problem , let { ŷ_i = f_i ( x ; θ_i ) }_{i=1}^{N} be the prediction probabilities of the models , where ( x , y ) ∈ D_in is the input-label pair . We use the cross-entropy loss averaged across all models , (1/N) ∑_{i=1}^{N} L_CE ( ŷ_i , y ) , to train the models in the ensemble to achieve high accuracy on the in-distribution training data . 2 . Diversity objective : The diversity objective requires the models in the ensemble to produce dissimilar predictions for examples in the out-of-distribution dataset D_out as depicted in Fig . 2b . If { ỹ_i = f_i ( x̃ ; θ_i ) }_{i=1}^{N} are the output probabilities produced by the ensemble for an OOD input x̃ ∈ D_out , then the diversity objective requires the set of output vectors { ỹ_i }_{i=1}^{N} to be misaligned . We use the coherence metric ( Tropp , 2006 ) to measure the alignment of the output probability vectors . Coherence of a set of vectors measures the maximum cosine similarity ( CS ) , i.e . the cosine of the smallest angle , between all pairs of probability vectors in the set as shown in Fig . 3 . Coherence can be computed as follows : coherence ( { ỹ_i }_{i=1}^{N} ) = max_{a , b ∈ { 1 , ... , N } , a ≠ b} CS ( ỹ_a , ỹ_b ) . ( 1 ) In order to increase the misalignment of probability vectors , we need to reduce the value of coherence . Unfortunately , the max function in Eqn . 1 makes coherence non-smooth , which precludes the use of first-order methods to optimize the value of coherence . To overcome this limitation , we use the LogSumExp function to get a smooth approximation of the max function that allows it to be used in first-order optimization algorithms . We term the resulting loss function the diversity loss : DivLoss ( { ỹ_i }_{i=1}^{N} ) = log ( ∑_{1 ≤ a < b ≤ N} exp ( CS ( ỹ_a , ỹ_b ) ) ) . ( 2 ) Models in the ensemble are jointly trained on the combination of accuracy and diversity objectives : L = E_{x , y ∼ D_in , x̃ ∼ D_out} [ ( (1/N) ∑_{i=1}^{N} L_CE ( ŷ_i , y ) ) + λ_D · DivLoss ( { ỹ_i }_{i=1}^{N} ) ] , ( 3 ) where ŷ_i = f_i ( x ) , ỹ_i = f_i ( x̃ ) . Here λ_D is a hyperparameter that dictates the ratio of importance between the diversity and accuracy objectives . Note that the loss function described above requires access to a dataset of labeled in-distribution examples D_in as well as a dataset of out-of-distribution examples D_out . Since we do not know the OOD samples that are used by the adversary a priori , we make use of an auxiliary OOD dataset to train EDM . Similar use of auxiliary OOD datasets has been explored in the anomaly detection literature ( Hendrycks et al. , 2018 ) to detect unseen anomalies . Our experiments in Section 5.3 suggest that the diversity objective generalizes to the unseen OOD dataset used by the adversary . | This paper tackles a timely problem of protecting deep neural networks from model stealing attacks. This paper proposed an Ensemble of Diverse Models to provide diverse predictions for the adversary’s OOD queries.
The main contribution of this paper is the introduction of the diversity loss function on OOD data and the discontinuous prediction provided by ensembled models. The results show that EDM defense reduces the accuracy of stolen models. | SP:fcd72bc92c431b2f991d9e765dbdba684cada4e7 |
Robust Reinforcement Learning on State Observations with Learned Optimal Adversary | 1 INTRODUCTION . Modern deep reinforcement learning agents ( Mnih et al. , 2015 ; Levine et al. , 2015 ; Lillicrap et al. , 2015 ; Silver et al. , 2016 ; Fujimoto et al. , 2018 ) typically use neural networks as function approximators . Since the discovery of adversarial examples in image classification tasks ( Szegedy et al. , 2013 ) , the vulnerabilities in DRL agents were first demonstrated in ( Huang et al. , 2017 ; Lin et al. , 2017 ; Kos & Song , 2017 ) and further developed under more environments and different attack scenarios ( Behzadan & Munir , 2017a ; Pattanaik et al. , 2018 ; Xiao et al. , 2019 ) . These attacks commonly add imperceptible noises into the observations of states , e.g. , the observed environment slightly differs from the true environment . This raises concerns for using RL in safety-critical applications such as autonomous driving ( Sallab et al. , 2017 ; Voyage , 2019 ) ; additionally , the discrepancy between ground-truth states and agent observations also contributes to the “ reality gap ” - an agent working well in simulated environments may fail in real environments due to noises in observations ( Jakobi et al. , 1995 ; Muratore et al. , 2019 ) , as real-world sensing contains unavoidable noise ( Brooks , 1992 ) . We classify the weakness of a DRL agent under perturbations of state observations into two classes : the vulnerability in function approximators , which typically originates from the highly non-linear and black-box nature of neural networks ; and the intrinsic weakness of the policy : even if perfect features for states are extracted , an agent can still make mistakes due to an intrinsic weakness in its policy . For example , in the deep Q networks ( DQNs ) for Atari games , a large convolutional neural network ( CNN ) is used for extracting features from input frames . To act correctly , the network must extract crucial features : e.g. , for the game of Pong , the position and velocity of the ball , which can be observed by visualizing convolutional layers ( Hausknecht & Stone , 2015 ; Guo et al. , 2014 ) . Many attacks on the DQN setting add imperceptible noises ( Huang et al. , 2017 ; Lin et al. , 2017 ; Kos & Song , 2017 ; Behzadan & Munir , 2017a ) that exploit the vulnerability of deep neural networks so that they extract wrong features , as we have seen in adversarial examples of image classification tasks . On the other hand , the fragile function approximation is not the only source of the weakness of an RL agent : in a finite-state Markov decision process ( MDP ) , we can use tabular policy and value functions so there is no function approximation error . The agent can still be vulnerable to small perturbations on observations , e.g. , perturbing the observation of a state to one of its four neighbors in a gridworld-like environment can prevent an agent from reaching its goal ( Figure 1 ) . To improve the robustness of RL , we need to take measures on both aspects : a more robust function approximator , and a policy aware of perturbations in observations . Techniques developed for enhancing the robustness of neural network ( NN ) classifiers can be applied to address the vulnerability in function approximators . Especially , for environments like Atari games with images as input and discrete actions as outputs , the policy network π_θ behaves similarly to a classifier at test time . Thus , Fischer et al . ( 2019 ) ; Mirman et al .
( 2018a ) utilized existing certified adversarial defense ( Mirman et al. , 2018b ; Wong & Kolter , 2018 ; Gowal et al. , 2018 ; Zhang et al. , 2020a ) approaches in supervised learning to enhance the robustness of DQN agents . Another successful approach ( Zhang et al. , 2020b ) for both Atari and high-dimensional continuous control environment regularizes the smoothness of the learned policy such that maxŝ∈B ( s ) D ( πθ ( s ) , πθ ( ŝ ) ) is small for some divergence D and B ( s ) is a neighborhood around s. This maximization can be solved using a gradient based method or convex relaxations of NNs ( Salman et al. , 2019 ; Zhang et al. , 2018 ; Xu et al. , 2020 ) , and then minimized by optimizing θ . Such an adversarial minimax regularization is in the same spirit as the ones used in some adversarial training approaches for ( semi ) supervised learning , e.g. , TRADES ( Zhang et al. , 2019 ) and VAT ( Miyato et al. , 2015 ) . However , regularizing the function approximators does not explicitly improve the intrinsic policy robustness . In this paper , we propose an orthogonal approach , alternating training with learned adversaries ( ATLA ) , to enhance the robustness of DRL agents . We focus on dealing with the intrinsic weakness of the policy by learning an adversary online with the agent during training time , rather than directly regularizing function approximators . Our main contributions can be summarized as : • We follow the framework of state-adversarial Markov decision process ( SA-MDP ) and show how to learn an optimal adversary for perturbing observations . We demonstrate practical attacks under this formulation and obtain learned adversaries that are significantly stronger than previous ones . • We propose the alternating training with learned adversaries ( ATLA ) framework to improve the robustness of DRL agents . The difference between our approach and previous adversarial training approaches is that we use a stronger adversary , which is learned online together with the agent . • Our analysis on SA-MDP also shows that history can be important for learning a robust agent . We thus propose to use a LSTM based policy in the ATLA framework and find that it is more robust than policies parameterized as regular feedforward NNs . • We evaluate our approach empirically on four continuous control environments . We outperform explicit regularization based methods in a few environments , and our approach can also be directly combined with explicit regularizations on function approximators to achieve state-of-the-art results . 2 RELATED WORK . State-adversarial Markov decision process ( SA-MDP ) ( Zhang et al. , 2020b ) characterizes the decision making problem under adversarial attacks on state observations . Most importantly , the true state in the environment is not perturbed by the adversary under this setting ; for example , perturbing pixels in an Atari environment ( Huang et al. , 2017 ; Kos & Song , 2017 ; Lin et al. , 2017 ; Behzadan & Munir , 2017a ; Inkawhich et al. , 2019 ) does not change the true location of an object in the game simulator . SA-MDP can characterize agent performance under natural or adversarial noise from sensor measurements . For example , GPS sensor readings on a car are naturally noisy , but the ground truth location of the car is not affected by the noise . 
Importantly , this setting is different from robust Markov decision process ( RMDP ) ( Nilim & El Ghaoui , 2004 ; Iyengar , 2005 ) , where the worst case transition probabilities of the environment are considered . “ Robust reinforcement learning ” in some works ( Mankowitz et al. , 2018 ; 2019 ) refers to this different definition of robustness in RMDP , and should not be confused with our setting of robustness against perturbations on state observations . Several works proposed methods to learn an adversary online together with an agent . RARL ( Pinto et al. , 2017 ) proposed to train an agent and an adversary under the two-player Markov game ( Littman , 1994 ) setting . The adversary can change the environment states through actions directly applied to the environment . The goal of RARL is to improve the robustness against environment parameter changes , such as mass , length or friction . Gleave et al . ( 2019 ) discussed the learning of an adversary using reinforcement learning to attack a victim agent , by taking adversarial actions that change the environment and consequently change the observation of the victim agent . Both Pinto et al . ( 2017 ) ; Gleave et al . ( 2019 ) conduct their attack under the two-player Markov game framework , rather than considering perturbations on state observations . Besides , Li et al . ( 2019 ) consider a similar Markov game setting in multi-agent RL environments . The difference between these works and ours can be clearly seen in the setting where the adversary is fixed - under the framework of ( Pinto et al. , 2017 ; Gleave et al. , 2019 ) , the learning of the agent is still an MDP , but in our setting , it becomes a harder POMDP problem ( Section 3.2 ) . Training DRL agents with perturbed state observations from adversaries has been investigated in a few works , sometimes referred to as adversarial training . Kos & Song ( 2017 ) ; Behzadan & Munir ( 2017b ) used gradient based adversarial attacks on DQN agents and put adversarial frames into the replay buffer . This approach is not very successful because for Atari environments the main source of weakness is likely to come from the function approximator , so an adversarial regularization framework such as ( Zhang et al. , 2020b ; Qu et al. , 2020 ) which directly controls the smoothness of the Q function is more effective . For lower dimensional continuous control tasks such as the MuJoCo environments , Mandlekar et al . ( 2017 ) ; Pattanaik et al . ( 2018 ) conducted FGSM and multi-step gradient-based attacks during training time ; however , their main focus was on the robustness against environment parameter changes and only limited evaluation on the adversarial attack setting was conducted with relatively weak adversaries . Zhang et al . ( 2020b ) systematically tested this approach under newly proposed strong attacks , and found that it cannot reliably improve robustness . These early adversarial training approaches typically use gradients from a critic function . They are usually relatively weak , and not sufficient to lead to a robust policy under stronger attacks . The robustness of RL has also been investigated from other perspectives . For example , Tessler et al . ( 2019 ) study MDPs under action perturbations ; Tan et al . ( 2020 ) use adversarial training on the action space to enhance agent robustness under action perturbations . Besides , policy teaching ( Zhang & Parkes , 2008 ; Zhang et al. , 2009 ; Ma et al. , 2019 ) and policy poisoning ( Rakhsha et al .
, 2020 ; Huang & Zhu , 2019 ) manipulate the reward or cost signal during agent training time to induce a desired agent policy . Essentially , policy teaching is a training time “ attack ” with perturbed rewards from the environments ( which can be analogous to data poisoning attacks in supervised learning settings ) , while our goal is to obtain a robust agent against test time adversarial attacks . All these settings differ from the setting of perturbing state observations discussed in our paper . 3 METHODOLOGY . In this section , we first discuss the case where the agent policy is fixed , and then the case where the adversary is fixed in SA-MDPs . This allows us to propose an alternating training framework to improve robustness of RL agents under perturbations on state observations . Notations and Background We use S and A to represent the state space and the action space , respectively ; P ( S ) defines the set of all possible probability measures on S. We define a Markov decision process ( MDP ) as ( S , A , R , p , γ ) , where R : S × A × S → R and p : S × A → P ( S ) are two mappings represent the reward and transition probability . The transition probability at time step t can be written as p ( s′|s , a ) = Pr ( st+1 = s′|st = s , at = a ) . Reward function is defined as the expected reward R ( s , a , s′ ) : = E [ rt|st = s , at = a , st+1 = s′ ] . γ ∈ [ 0 , 1 ] is the discounting factor . We denote a stationary policy as π : S → P ( A ) which is independent of history . We denote history ht at time t as { s0 , a0 , · · · , st−1 , at−1 , st } and H as the set of all histories . A history-dependent policy is defined as π : H → P ( A ) . A partially observable Markov decision process ( Astrom , 1965 ) ( POMDP ) can be defined as a 7-tuple ( S , A , O , Ω , R , p , γ ) where O is a set of observations and Ω is a set of conditional observation probabilities p ( o|s ) . Unlike MDPs , POMDPs typically require history-dependent optimal policies . To study the decision problem under adversaries on state observations , we use state-adversarial Markov decision process ( SA-MDP ) framework ( Zhang et al. , 2020b ) . In SA-MDP , an adversary ν : S → P ( S ) is introduced to perturb the input state of an agent ; however , the true environment state s is unchanged ( Figure 2 ) . Formally , an SA-MDP is a 6-tuple ( S , A , B , R , p , γ ) where B is a mapping from a state s ∈ S to a set of states B ( s ) ∈ S. The agent sees the perturbed state ŝ ∼ ν ( ·|s ) and takes the action π ( a|ŝ ) accordingly . B limits the power of adver- sary : supp ( ν ( ·|s ) ) ∈ B ( s ) . The goal of SA-MDP is to solve an optimal policy π∗ under its optimal adversary ν∗ ( π∗ ) ; an optimal adversary is defined as ν∗ ( π ) such that π achieves the lowest possible expected discounted return ( or value ) on all states . Zhang et al . ( 2020b ) did not give an explicit algorithm to solve SA-MDP and found that a stationary optimal policy need not exist . | This paper proposes to improve the robustness of a reinforcement learning agent by alternatively training an agent and an adversary who perturbs the state observations. The learning of an “optimal” adversary for a fixed policy is based on the theory of SA-MDP in prior work. The learning of an optimal policy under a fixed adversary is done by solving a POMDP problem. Experimental results show that the proposed alternating training with learned adversaries (ATLA) framework can improve the performance and robustness of PPO. | SP:586149146ed5e74dd231b134fa6ba582f6e1f72b |
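A heavily hedged pseudocode sketch of the alternating scheme this paper describes (Sections 1 and 3): the adversary is trained as an RL agent against a frozen policy, then the policy is trained against the frozen adversary, and the two phases repeat. The helpers `collect_rollouts`, `ppo_update`, and the perturbation interface are hypothetical placeholders for illustration, not functions from the paper or any specific library.

```python
# Hedged ATLA-style sketch: alternate between learning an adversary for a fixed
# policy and learning a policy under the fixed adversary. All helpers are hypothetical.
def atla_train(env, policy, adversary, n_iters, steps_per_phase):
    for _ in range(n_iters):
        # Phase 1: adversary learning. The adversary sees the true state, outputs a
        # bounded perturbation of the agent's observation (the set B(s) in SA-MDP),
        # and is rewarded with the negated task reward.
        for _ in range(steps_per_phase):
            traj = collect_rollouts(env, policy, adversary, train="adversary")
            ppo_update(adversary, traj, reward_sign=-1)

        # Phase 2: robust-policy learning under the fixed adversary. From the agent's
        # perspective this is a POMDP, which motivates a history-dependent (LSTM) policy.
        for _ in range(steps_per_phase):
            traj = collect_rollouts(env, policy, adversary, train="policy")
            ppo_update(policy, traj, reward_sign=+1)
    return policy
```

The key difference from earlier adversarial training, as the text emphasizes, is that the adversary here is itself learned online rather than derived from one-step critic gradients.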
Robust Reinforcement Learning on State Observations with Learned Optimal Adversary | 1 INTRODUCTION . Modern deep reinforcement learning agents ( Mnih et al. , 2015 ; Levine et al. , 2015 ; Lillicrap et al. , 2015 ; Silver et al. , 2016 ; Fujimoto et al. , 2018 ) typically use neuron networks as function approximators . Since the discovery of adversarial examples in image classification tasks ( Szegedy et al. , 2013 ) , the vulnerabilities in DRL agents were first demonstrated in ( Huang et al. , 2017 ; Lin et al. , 2017 ; Kos & Song , 2017 ) and further developed under more environments and different attack scenarios ( Behzadan & Munir , 2017a ; Pattanaik et al. , 2018 ; Xiao et al. , 2019 ) . These attacks commonly add imperceptible noises into the observations of states , e.g. , the observed environment slightly differs from true environment . This raises concerns for using RL in safety-crucial applications such as autonomous driving ( Sallab et al. , 2017 ; Voyage , 2019 ) ; additionally , the discrepancy between ground-truth states and agent observations also contributes to the “ reality gap ” - an agent working well in simulated environments may fail in real environments due to noises in observations ( Jakobi et al. , 1995 ; Muratore et al. , 2019 ) , as real-world sensing contains unavoidable noise ( Brooks , 1992 ) . We classify the weakness of a DRL agent on the perturbations of state observations into two classes : the vulnerability in function approximators , which typically originates from the highly non-linear and blackbox nature of neural networks ; and intrinsic weakness of policy : even perfect features for states are extracted , an agent can still make mistakes due to an intrinsic weakness in its policy . For example , in the deep Q networks ( DQNs ) for Atari games , a large convolutional neural network ( CNN ) is used for extracting features from input frames . To act correctly , the network must extract crucial features : e.g. , for the game of Pong , the position and velocity of the ball , which can observed by visualizing convolutional layers ( Hausknecht & Stone , 2015 ; Guo et al. , 2014 ) . Many attacks to the DQN setting add imperceptible noises ( Huang et al. , 2017 ; Lin et al. , 2017 ; Kos & Song , 2017 ; Behzadan & Munir , 2017a ) that exploit the vulnerability of deep neural networks so that they extract wrong features , as we have seen in adversarial examples of image classification tasks . On the other hand , the fragile function approximation is not the only source of the weakness of a RL agent - in a finite-state Markov decision process ( MDP ) , we can use tabular policy and value functions so there is no function approximation error . The agent can still be vulnerable to small perturbations on observations , e.g. , perturbing the observation of a state to one of its four neighbors in a gridworldlike environment can prevent an agent from reaching its goal ( Figure 1 ) . To improve the robustness of RL , we need to take measures from both aspects — a more robust function approximator , and a policy aware of perturbations in observations . Techniques developed in enhancing the robustness of neural network ( NN ) classifiers can be applied to address the vulnerability in function approximators . Especially , for environments like Atari games with images as input and discrete actions as outputs , the policy network πθ behaves similarly to a classifier in test time . Thus , Fischer et al . ( 2019 ) ; Mirman et al . 
( 2018a ) utilized existing certified adversarial defense ( Mirman et al. , 2018b ; Wong & Kolter , 2018 ; Gowal et al. , 2018 ; Zhang et al. , 2020a ) approaches in supervised learning to enhance the robustness of DQN agents . Another successful approach ( Zhang et al. , 2020b ) for both Atari and high-dimensional continuous control environment regularizes the smoothness of the learned policy such that maxŝ∈B ( s ) D ( πθ ( s ) , πθ ( ŝ ) ) is small for some divergence D and B ( s ) is a neighborhood around s. This maximization can be solved using a gradient based method or convex relaxations of NNs ( Salman et al. , 2019 ; Zhang et al. , 2018 ; Xu et al. , 2020 ) , and then minimized by optimizing θ . Such an adversarial minimax regularization is in the same spirit as the ones used in some adversarial training approaches for ( semi ) supervised learning , e.g. , TRADES ( Zhang et al. , 2019 ) and VAT ( Miyato et al. , 2015 ) . However , regularizing the function approximators does not explicitly improve the intrinsic policy robustness . In this paper , we propose an orthogonal approach , alternating training with learned adversaries ( ATLA ) , to enhance the robustness of DRL agents . We focus on dealing with the intrinsic weakness of the policy by learning an adversary online with the agent during training time , rather than directly regularizing function approximators . Our main contributions can be summarized as : • We follow the framework of state-adversarial Markov decision process ( SA-MDP ) and show how to learn an optimal adversary for perturbing observations . We demonstrate practical attacks under this formulation and obtain learned adversaries that are significantly stronger than previous ones . • We propose the alternating training with learned adversaries ( ATLA ) framework to improve the robustness of DRL agents . The difference between our approach and previous adversarial training approaches is that we use a stronger adversary , which is learned online together with the agent . • Our analysis on SA-MDP also shows that history can be important for learning a robust agent . We thus propose to use a LSTM based policy in the ATLA framework and find that it is more robust than policies parameterized as regular feedforward NNs . • We evaluate our approach empirically on four continuous control environments . We outperform explicit regularization based methods in a few environments , and our approach can also be directly combined with explicit regularizations on function approximators to achieve state-of-the-art results . 2 RELATED WORK . State-adversarial Markov decision process ( SA-MDP ) ( Zhang et al. , 2020b ) characterizes the decision making problem under adversarial attacks on state observations . Most importantly , the true state in the environment is not perturbed by the adversary under this setting ; for example , perturbing pixels in an Atari environment ( Huang et al. , 2017 ; Kos & Song , 2017 ; Lin et al. , 2017 ; Behzadan & Munir , 2017a ; Inkawhich et al. , 2019 ) does not change the true location of an object in the game simulator . SA-MDP can characterize agent performance under natural or adversarial noise from sensor measurements . For example , GPS sensor readings on a car are naturally noisy , but the ground truth location of the car is not affected by the noise . 
Importantly , this setting is different from robust Markov decision process ( RMDP ) ( Nilim & El Ghaoui , 2004 ; Iyengar , 2005 ) , where the worst case transition probabilities of the environment are considered . “ Robust reinforcement learning ” in some works ( Mankowitz et al. , 2018 ; 2019 ) refer to this different definition of robustness in RMDP , and should not be confused with our setting of robustness against perturbations on state observations . Several works proposed methods to learn an adversary online together with an agent . RARL ( Pinto et al. , 2017 ) proposed to train an agent and an adversary under the two-player Markov game ( Littman , 1994 ) setting . The adversary can change the environment states through actions directly applied to environment . The goal of RARL is to improve the robustness against environment parameter changes , such as mass , length or friction . Gleave et al . ( 2019 ) discussed the learning of an adversary using reinforcement learning to attack a victim agent , by taking adversarial actions that changes the environment and consequentially change the observation of the victim agent . Both Pinto et al . ( 2017 ) ; Gleave et al . ( 2019 ) conduct their attack under on the two-player Markov game framework , rather than considering perturbations on state observations . Besides , Li et al . ( 2019 ) consider a similar Markov game setting in multi-agent RL environments . The difference between these works and ours can be clearly seen in the setting where the adversary is fixed - under the framework of ( Pinto et al. , 2017 ; Gleave et al. , 2019 ) , the learning of agent is still a MDP , but in our setting , it becomes a harder POMDP problem ( Section 3.2 ) . Training DRL agents with perturbed state observations from adversaries have been investigated in a few works , sometimes referred to as adversarial training . Kos & Song ( 2017 ) ; Behzadan & Munir ( 2017b ) used gradient based adversarial attacks to DQN agents and put adversarial frames into replay buffer . This approach is not very successful because for Atari environments the main source of weakness is likely to come from the function approximator , so an adversarial regularization framework such as ( Zhang et al. , 2020b ; Qu et al. , 2020 ) which directly controls the smoothness of the Q function is more effective . For lower dimensional continuous control tasks such as the MuJoCo environments , Mandlekar et al . ( 2017 ) ; Pattanaik et al . ( 2018 ) conducted FGSM and multistep gradient based attacks during training time ; however , their main focus was on the robustness against environment parameter changes and only limited evaluation on the adversarial attack setting was conducted with relatively weak adversaries . Zhang et al . ( 2020b ) systematically tested this approach under newly proposed strong attacks , and found that it can not reliably improve robustness . These early adversarial training approaches typically use gradients from a critic function . They are usually relatively weak , and not sufficient to lead to a robust policy under stronger attacks . The robustness of RL has also been investigated from other perspectives . For example , Tessler et al . ( 2019 ) study MDPs under action perturbations ; Tan et al . ( 2020 ) use adversarial training on action space to enhance agent robustness under action perturbations . Besides , policy teaching ( Zhang & Parkes , 2008 ; Zhang et al. , 2009 ; Ma et al. , 2019 ) and policy poisoning ( Rakhsha et al. 
, 2020 ; Huang & Zhu , 2019 ) manipulate the reward or cost signal during agent training time to induce a desired agent policy . Essentially , policy teaching is a training time “ attack ” with perturbed rewards from the environments ( which can be analogous to data poisoning attacks in supervised learning settings ) , while our goal is to obtain a robust agent against test time adversarial attacks . All these settings differ from the setting of perturbing state observations discussed in our paper . 3 METHODOLOGY . In this section , we first discuss the case where the agent policy is fixed , and then the case where the adversary is fixed in SA-MDPs . This allows us to propose an alternating training framework to improve robustness of RL agents under perturbations on state observations . Notations and Background We use S and A to represent the state space and the action space , respectively ; P ( S ) defines the set of all possible probability measures on S. We define a Markov decision process ( MDP ) as ( S , A , R , p , γ ) , where R : S × A × S → R and p : S × A → P ( S ) are two mappings represent the reward and transition probability . The transition probability at time step t can be written as p ( s′|s , a ) = Pr ( st+1 = s′|st = s , at = a ) . Reward function is defined as the expected reward R ( s , a , s′ ) : = E [ rt|st = s , at = a , st+1 = s′ ] . γ ∈ [ 0 , 1 ] is the discounting factor . We denote a stationary policy as π : S → P ( A ) which is independent of history . We denote history ht at time t as { s0 , a0 , · · · , st−1 , at−1 , st } and H as the set of all histories . A history-dependent policy is defined as π : H → P ( A ) . A partially observable Markov decision process ( Astrom , 1965 ) ( POMDP ) can be defined as a 7-tuple ( S , A , O , Ω , R , p , γ ) where O is a set of observations and Ω is a set of conditional observation probabilities p ( o|s ) . Unlike MDPs , POMDPs typically require history-dependent optimal policies . To study the decision problem under adversaries on state observations , we use state-adversarial Markov decision process ( SA-MDP ) framework ( Zhang et al. , 2020b ) . In SA-MDP , an adversary ν : S → P ( S ) is introduced to perturb the input state of an agent ; however , the true environment state s is unchanged ( Figure 2 ) . Formally , an SA-MDP is a 6-tuple ( S , A , B , R , p , γ ) where B is a mapping from a state s ∈ S to a set of states B ( s ) ∈ S. The agent sees the perturbed state ŝ ∼ ν ( ·|s ) and takes the action π ( a|ŝ ) accordingly . B limits the power of adver- sary : supp ( ν ( ·|s ) ) ∈ B ( s ) . The goal of SA-MDP is to solve an optimal policy π∗ under its optimal adversary ν∗ ( π∗ ) ; an optimal adversary is defined as ν∗ ( π ) such that π achieves the lowest possible expected discounted return ( or value ) on all states . Zhang et al . ( 2020b ) did not give an explicit algorithm to solve SA-MDP and found that a stationary optimal policy need not exist . | of the paper: The paper studies adversarial attacks in RL, focusing both on the design of optimal attack strategies on RL agents, as well as robust training RL procedures for mitigating attacks. Building on the results of (Zhang et al., 2020), the paper proposes a new learning framework (ATLA), that simultaneously trains a (strong) adversary and a (robust) deep RL agent. The paper showcases the importance of the new framework through extensive experimental evaluation. | SP:586149146ed5e74dd231b134fa6ba582f6e1f72b |
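As a concrete illustration of the SA-MDP interaction protocol described in the excerpt above — the adversary ν perturbs only the observation handed to the agent, while the true environment state and its transition dynamics are untouched — here is a minimal environment-wrapper sketch. The `env` and `adversary` interfaces and the L∞ choice of B(s) are assumptions made for illustration, not an API from any specific codebase.

```python
import numpy as np

class StateAdversaryWrapper:
    """Illustrative SA-MDP-style wrapper: the adversary perturbs only what the agent
    observes; the underlying environment state is never modified.

    `env` is assumed to expose reset()/step(action) returning numpy observations, and
    `adversary` maps a true observation to a proposed perturbation; both names are
    placeholders. Here B(s) is an L_inf ball of radius eps around s."""

    def __init__(self, env, adversary, eps=0.1):
        self.env = env
        self.adversary = adversary
        self.eps = eps

    def _perturb(self, obs):
        delta = self.adversary(obs)                    # adversary samples from nu(.|s)
        delta = np.clip(delta, -self.eps, self.eps)    # enforce supp(nu(.|s)) in B(s)
        return obs + delta                             # agent sees s_hat, env keeps s

    def reset(self):
        return self._perturb(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)   # transition uses the true state
        return self._perturb(obs), reward, done, info
```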
Learning Discrete Adaptive Receptive Fields for Graph Convolutional Networks | 1 INTRODUCTION . After a series of explorations and modifications ( Bruna et al. , 2014 ; Kipf & Welling , 2017 ; Velickovic et al. , 2017 ; Xu et al. , 2019 ; Li et al. , 2019 ; Abu-El-Haija et al. , 2019 ) , Graph Convolutional Networks ( GCNs ) 1 have gained considerable attention in the machine learning community . Typically , a graph convolutional model can be abstracted as a message-passing process ( Gilmer et al. , 2017 ) – nodes in the neighborhood of a central node are regarded as contexts , who individually pass their messages to the central node via convolutional layers . The central node then weighs and transforms these messages . This process is recursively conducted as the depth of network increases . 2 Neighborhood convolutions proved to be widely useful on various graph data . However , some inconveniences also exist in current GCNs . While different nodes may yield different importance in the neighborhood , early GCNs ( Kipf & Welling , 2017 ; Hamilton et al. , 2017 ) did not discriminate contexts in their receptive fields . These models either treated contexts equally , or used normalized edge weights as the weights of contexts . As a result , such implementations failed to capture critical contexts – contexts that pose greater influences on the central node , close friends among acquaintances , for example . Graph Attention Networks ( GATs ) ( Velickovic et al. , 2017 ) resolved this problem with attention mechanisms ( Bahdanau et al. , 2015 ; Vaswani et al. , 2017 ) . Soft attention weights were used to discriminate importance of contexts , which allowed the model to better focus on relevant contexts to make decisions . With impressive performances , GATs became widely used in later generations of GCNs including ( Li et al. , 2019 ; Liu et al. , 2019 ) . However , we observe that using soft attention weights in hierarchical convolutions does not fully solve the problem . Firstly , we will show as Proposition 1 that under common conditions , soft attention weights almost surely approach 0 as the neighborhood sizes increase . This smoothness 3 hinders the discrimination of context importance in large neighborhoods . Secondly , we will show by experiments 1We use the name GCN for a class of deep learning approaches where information is convolved among graph neighborhoods , including but not limited to the vanilla GCN ( Kipf & Welling , 2017 ) . 2We use the term contexts to denote the neighbor nodes , and receptive field to denote the set of contexts that the convolutions refer to . 3The smoothness discussed in our paper is different to that in ( Li et al. , 2018 ) , i.e . the phenomenon that representations of nodes converge in very deep GNNs . in Section 4.2 that GATs can not well distinguish true graph nodes from artificial noises : attention weights assigned to true nodes and noises are almost identical in distribution , which further leads to a dramatic drop of performance . Meanwhile , an ideal GCN architecture is often expected to exploit information on nodes with various distances . Most existing GCNs use hierarchical convolutional layers , in which only one-hop neighborhoods are convolved . As a result , one must increase the model depth to detect long-distance dependencies ( informative nodes that are distant from the central nodes ) . This is particularly an issue in large graphs , as the complexity of the graph convolutions is exponential to the model depth . 
4 In large graphs , the model depths are often set as 1 , 2 or 3 ( Hamilton et al. , 2017 ; Velickovic et al. , 2017 ) . Accordingly , no dependencies longer than 3 hops are exploited in these models . Motivated by the discussions above , we propose the idea of adaptive receptive fields ( ARFs ) . Figure 1 illustrates the differences between hierarchical convolutions and convolutions with ARFs . An ARF is defined as a subset of contexts that are most informative for a central node , and is constructed via selecting contexts among the neighborhood . Nodes in an ARF can be at various distances from the central node . The discrete selection process of contexts gets rid of the undesired smoothness of soft weights ( see Section 2 ) . In addition , by allowing ARFs to choose contexts on different hops from the central node , one can efficiently explore dependencies with longer distances . Experiments also show that ARFs are more robust to noises ( see Section 4 ) . We further propose GRARF ( GCNs with Reinforced Adaptive Receptive Fields ) as an instance for using ARFs in node-level tasks . In GRARF , an optimal policy of constructing ARFs is learned with reinforcement learning ( RL ) . An RL agent ( constructor ) successively expands the ARF via a two-stage process : a contact node in the intermediately-constructed ARF is firstly selected ; a context among the direct neighbors of the contact node is then added to the ARF . The reward of the constructor is defined as the performance of a trained GCN ( evaluator ) on the constructed ARF . GRARF is validated on datasets from different domains including three citation networks , one social network , and an inductive protein-protein interaction dataset . GRARF matches or improves performances on node classification tasks compared with strong baselines . 5 Moreover , we design two tasks to test the models ’ abilities in focusing on informative contexts and leveraging long-distance dependencies by injecting node noises in graphs with different strategies . 2 PRELIMINARIES AND THEORIES . Notations . In our paper , we consider node-level supervised learning tasks on attributed graphs . An attributed graph G is generally represented as G = ( V , A , X ) , where V = { v1 , · · · , vn } denotes the set of nodes , A ∈ { 0 , 1 } n×n denotes the ( binary ) adjacency matrix , and X ∈ Rn×d0 denotes the input node features , xv ∈ Rd0 the features of node v. E is used as the set of edges . We use N ( vi ) to denote the one-hop neighborhood of node vi , with vi itself included . We use H ( l ) ∈ Rn×dl as the matrix containing dl-dimensional hidden representations of nodes in the l-th layer , h ( l ) v that of node 4With sparse adjacency matrices , the average complexity of graph convolutions is O ( dL ) , where L is the model depth and d is the graph degree ( or the neighborhood-sampling sizes in ( Hamilton et al. , 2017 ) ) . 5We mainly show the results of node classification tasks in our paper , whereas GRARF is intrinsically adapted to all node-level supervised learning tasks . v.  denotes the symmetrically normalized adjacency matrix with  = D−1/2 ( A+ In ) D−1/2 and D = diag ( d ) , di = ∑ j ( A+ In ) ij . We use bold letters for neural network parameters . The smoothness of Graph Attention Networks . 
As a pioneering work of simplifying architectures of graph neural networks , the vanilla GCN layers in ( Kipf & Welling , 2017 ) were defined as H ( l+1 ) = σ ( ÂH ( l ) W ( l ) ) = σ ∑ j∈N ( vi ) Âijh ( l ) j W ( l ) , l = 0 , 1 , · · · ( 1 ) In each layer , the node representations in one-hop neighborhoods were transformed with W and averaged by normalized edge weights Âij . Graph Attention Networks ( GATs ) ( Velickovic et al. , 2017 ) elaborated the average scheme in Eq . ( 1 ) with attention mechanisms ( Bahdanau et al. , 2015 ; Vaswani et al. , 2017 ) . Instead of using Âij , an attention weight αij between node vi and vj was calculated in GAT layers as eij = fθ ( hi , hj ) , αij = softmaxj ( eij ) = exp ( eij ) ∑ k∈N ( vi ) exp ( eik ) , ( 2 ) where fθ ( · ) is often called the energy function with parameter θ. GATs implicitly enabled specifying different weights in a neighborhood . However , under some common assumptions and as the neighborhood size increases , these attention weights , normalized with the softmax function , suffer from over-smoothness : all attention weights approach 0 as the neighborhood size increases . We formally introduce and prove this claim as Lemma 1 and Proposition 1 : Lemma 1 ( the smoothness of softmax ) . If random variables X1 , X2 , · · · are uniformly bounded with probability 1 , that is , for any i and some C , P ( |Xi| > C ) = 0 , then the softmax values taking { Xi } ni=1 as inputs approach 0 almost surely when n→∞ , i.e . eXi / n∑ j=1 eXj → 0 a.s. ( 3 ) Proof . The proof is simple noting that eXi/ ∑n j=1 e Xj > 0 , and that with probability 1 , eXi / n∑ j=1 eXj < eC / ne−C → 0 , n→∞ . Proposition 1 ( the smoothness of attention weights ) . If the representation of nodes ( random vectors ) H1 , H2 , · · · ∈ Rd are uniformly bounded with probability 1 ( for any i and some C , P ( ‖H1‖ > C ) = 0 ) , and for any ( fixed ) node vi , the energy function fθ ( hi , · ) is continuous on any closed set D ∈ Rd , then the attention weights in the neighborhood of vi approach 0 almost surely when n→∞ , i.e . αij = exp ( fθ ( hi , Hj ) ) ∑ k∈N ( vi ) exp ( fθ ( hi , Hk ) ) → 0 a.s. ( 4 ) Proof . Following the a.s. boundedness { Hi } s and the continuity condition on fθ ( · ) , the random energies Eij = fθ ( hi , Hj ) are also bounded a.s .. The desired result then follows Lemma 1 . Note that the continuity condition on fθ ( hi , · ) in Proposition 1 can be satisfied with almost any commonly used non-linear functions and ( regularized ) parameters in deep learning , specifically , those in the official version of GATs ( eij = aT [ Whi‖Whj ] , where ‖ is the operator of concatenation ) . Also , the boundedness of inputs is trivial in deep learning . GCNs with ARFs overcome smoothness . What Proposition 1 shows is that in large neighborhoods , attention weights are smoothed to 0 , thus hindering the discrimination of context importance . In addition , such smoothness can be immediately generated to any other form of normalized weights as long as αij > 0 uniformly and ∑ j∈N ( vi ) αij = 1 . We alleviate the smoothness with ARFs by incorporating discreteness . Specifically , let us denote the convolution in the evaluator as h′i = σ ∑ j∈Na ( u ) ηijhjW = σ ∑ j∈Nk ( u ) η̃ijhjW , η̃ij = { ηij , j ∈ Na ( u ) ,0 , j /∈ Na ( u ) , ( 5 ) where Na ( u ) is an ARF , k is the maximum hop that the ARF explores , and Nk is the entire k-hop neighborhood . 
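Lemma 1 is easy to check numerically: with energies bounded in [−C, C], even the largest softmax attention weight in a neighborhood is driven towards 0 as the neighborhood size n grows, matching the e^{2C}/n bound used in the proof. The constants and sizes below are arbitrary and purely illustrative.

```python
import numpy as np

# Quick numerical check of Lemma 1: with energies bounded in [-C, C], even the
# largest attention weight in a neighborhood decays towards 0 as n grows.
rng = np.random.default_rng(0)
C = 3.0
for n in [5, 50, 500, 5000]:
    energies = rng.uniform(-C, C, size=n)            # e_ij = f_theta(h_i, h_j), bounded a.s.
    weights = np.exp(energies) / np.exp(energies).sum()
    print(f"n={n:5d}  max alpha_ij = {weights.max():.4f}  "
          f"bound e^C / (n e^-C) = {np.exp(2 * C) / n:.4f}")
```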
Accordingly , η̃ij is not subjected to smoothness : if ηij has a uniform lower bound D > 0 , then for p ∈ Na ( u ) and q /∈ Na ( u ) , we have η̃ip − η̃iq > D , regardless of the sizes of N ( k ) . Note that ηij > 0 uniformly can be guaranteed in most cases when the maximum ARF size is limited , for example , with uniform weights or softmax weights of bounded energies . It should be noted that the discrimination of nodes in ARFs is different to that in MixHop ( Abu-ElHaija et al. , 2019 ) , which directly takes multi-hop nodes as inputs . MixHop can specify different parameters 6 for contexts on different hops ( hop-level discrimination ) , while it CAN NOT specify different weights for contexts on the same hop ( node-level discrimination ) as they were uniformly treated ( averaged ) . The two levels of discrimination are orthogonal , and we focus on the latter . Deep reinforcement learning on graphs . As the discrete context selection process is nondifferentiable , we apply deep reinforcement learning approaches to learn the policy of construting ARFs in GRARF , specifically , the Deep Q-Learning ( DQN ) ( Mnih et al. , 2015 ) algorithm . DQN uses deep neural networks to approximate the action value function ( Q-function ) , and chooses the action that maximizes it in each step . The Q-function is defined iteratively as Q∗ ( st , at ) = R ( st , at , st+1 ) + γmax a∈A Q∗ ( st+1 , a ) , ( 6 ) where s is the state , R ( · ) is the reward function , A is the action space , and γ is a discount factor . A reward shaping technique ( Ng et al. , 1999 ) is also used in GRARF to alleviate the sparsity of rewards , which decorates the original reward R ( · ) with a potential energy F ( · ) , yielding an immediate reward R̂ ( · ) . Denoted in formula , F ( s , a , s′ ) = Φ ( s′ ) − Φ ( s ) , R̂ ( s , a , s′ ) = R ( s , a , s′ ) + F ( s , a , s′ ) , ( 7 ) where Φ ( · ) is a fixed potential function of states that does not change during training . ( Ng et al. , 1999 ) proved that the optimal policies of MDPs remain invariant if R ( · ) is replaced by R̂ ( · ) . There are other recent papers implementing reinforcement learning on graphs . For example , GCPN ( You et al. , 2018 ) proposed an RL agent for generating graph representations of biomedical molecules , and DGN ( Jiang et al. , 2020 ) introduced a multi-agent reinforcement learning approach where the agents in the system formed a dynamic network . The successive molecule generation process in GCPN inspired us in designing the ARF constructor in GRARF , whereas the two models are of different motivations and applications . ARFs and neighborhood sampling . It should be noted that GRARF can also be interpreted as a neighborhood sampling approach . Neighborhood sampling was proposed as a necessary process to apply GCNs to large graphs with arbitrarily large neighborhoods . GraphSAGE ( Hamilton et al. , 2017 ) proposed a general framework of neighborhood sampling and aggregation , where contexts were uniformly sampled . Later work improved the sampling strategy with importance sampling ( Chen et al. , 2018 ) and explicit variance reduction ( Huang et al. , 2018 ; Hamilton et al. , 2017 ) . Sub-graphs instead of subsets of neighborhoods were directly sampled in ( Zeng et al. , 2020 ) . Indeed , selecting ARF nodes takes a specific form of neighborhood sampling . 
However , the aim of constructing ARFs is to ignore trivial information and to focus on critical contexts , rather than to estimate the neighborhood average , which is the primary target of neighborhood sampling . Therefore , despite the similarity , the two approaches point in different directions . | The paper proposes a method for avoiding the over-smoothing that occurs in standard GNN methods. It defines the receptive field of a node as the set of nodes that send messages to that node, and proposes a method to create adaptive receptive fields specific to each node. Instead of using all the nodes in a multi-hop neighbourhood, a reinforcement learning method is proposed to select only a subset of these nodes. The RL problem is formalised such that each state represents a set of already selected nodes, while actions represent the next selected node. The goal of the RL agent (constructor) is to form an adaptive neighbourhood for each node, and the reward is given by the loss of a GNN method (evaluator) that uses this neighbourhood. | SP:4addc1c9c0f91be7fc176425cb41c22cb4e562ba |
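To illustrate the reward construction used by the ARF constructor described in the excerpt above, here is a small sketch combining the potential-based shaping of Eq. (7) with the one-step Q-learning target of Eq. (6). Ng et al. (1999) state the invariance result for F = γΦ(s′) − Φ(s), which reduces to Eq. (7) when γ = 1; the function and variable names below are illustrative assumptions, not the paper's code.

```python
import torch

def shaped_dqn_target(q_net, reward, phi_s, phi_s_next, s_next, gamma=0.99):
    """One-step target for the ARF constructor: the raw reward R is decorated with
    the potential difference F = gamma * Phi(s') - Phi(s) and bootstrapped with
    max_a Q(s', a) as in Eq. (6).  `q_net(s)` is assumed to return Q-values over the
    discrete set of candidate context nodes."""
    shaped_reward = reward + (gamma * phi_s_next - phi_s)   # R_hat = R + F
    with torch.no_grad():
        bootstrap = q_net(s_next).max(dim=-1).values        # max_a Q*(s_{t+1}, a)
    return shaped_reward + gamma * bootstrap
```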
Learning Discrete Adaptive Receptive Fields for Graph Convolutional Networks | 1 INTRODUCTION . After a series of explorations and modifications ( Bruna et al. , 2014 ; Kipf & Welling , 2017 ; Velickovic et al. , 2017 ; Xu et al. , 2019 ; Li et al. , 2019 ; Abu-El-Haija et al. , 2019 ) , Graph Convolutional Networks ( GCNs ) 1 have gained considerable attention in the machine learning community . Typically , a graph convolutional model can be abstracted as a message-passing process ( Gilmer et al. , 2017 ) – nodes in the neighborhood of a central node are regarded as contexts , who individually pass their messages to the central node via convolutional layers . The central node then weighs and transforms these messages . This process is recursively conducted as the depth of network increases . 2 Neighborhood convolutions proved to be widely useful on various graph data . However , some inconveniences also exist in current GCNs . While different nodes may yield different importance in the neighborhood , early GCNs ( Kipf & Welling , 2017 ; Hamilton et al. , 2017 ) did not discriminate contexts in their receptive fields . These models either treated contexts equally , or used normalized edge weights as the weights of contexts . As a result , such implementations failed to capture critical contexts – contexts that pose greater influences on the central node , close friends among acquaintances , for example . Graph Attention Networks ( GATs ) ( Velickovic et al. , 2017 ) resolved this problem with attention mechanisms ( Bahdanau et al. , 2015 ; Vaswani et al. , 2017 ) . Soft attention weights were used to discriminate importance of contexts , which allowed the model to better focus on relevant contexts to make decisions . With impressive performances , GATs became widely used in later generations of GCNs including ( Li et al. , 2019 ; Liu et al. , 2019 ) . However , we observe that using soft attention weights in hierarchical convolutions does not fully solve the problem . Firstly , we will show as Proposition 1 that under common conditions , soft attention weights almost surely approach 0 as the neighborhood sizes increase . This smoothness 3 hinders the discrimination of context importance in large neighborhoods . Secondly , we will show by experiments 1We use the name GCN for a class of deep learning approaches where information is convolved among graph neighborhoods , including but not limited to the vanilla GCN ( Kipf & Welling , 2017 ) . 2We use the term contexts to denote the neighbor nodes , and receptive field to denote the set of contexts that the convolutions refer to . 3The smoothness discussed in our paper is different to that in ( Li et al. , 2018 ) , i.e . the phenomenon that representations of nodes converge in very deep GNNs . in Section 4.2 that GATs can not well distinguish true graph nodes from artificial noises : attention weights assigned to true nodes and noises are almost identical in distribution , which further leads to a dramatic drop of performance . Meanwhile , an ideal GCN architecture is often expected to exploit information on nodes with various distances . Most existing GCNs use hierarchical convolutional layers , in which only one-hop neighborhoods are convolved . As a result , one must increase the model depth to detect long-distance dependencies ( informative nodes that are distant from the central nodes ) . This is particularly an issue in large graphs , as the complexity of the graph convolutions is exponential to the model depth . 
4 In large graphs , the model depths are often set as 1 , 2 or 3 ( Hamilton et al. , 2017 ; Velickovic et al. , 2017 ) . Accordingly , no dependencies longer than 3 hops are exploited in these models . Motivated by the discussions above , we propose the idea of adaptive receptive fields ( ARFs ) . Figure 1 illustrates the differences between hierarchical convolutions and convolutions with ARFs . An ARF is defined as a subset of contexts that are most informative for a central node , and is constructed via selecting contexts among the neighborhood . Nodes in an ARF can be at various distances from the central node . The discrete selection process of contexts gets rid of the undesired smoothness of soft weights ( see Section 2 ) . In addition , by allowing ARFs to choose contexts on different hops from the central node , one can efficiently explore dependencies with longer distances . Experiments also show that ARFs are more robust to noises ( see Section 4 ) . We further propose GRARF ( GCNs with Reinforced Adaptive Receptive Fields ) as an instance for using ARFs in node-level tasks . In GRARF , an optimal policy of constructing ARFs is learned with reinforcement learning ( RL ) . An RL agent ( constructor ) successively expands the ARF via a two-stage process : a contact node in the intermediately-constructed ARF is firstly selected ; a context among the direct neighbors of the contact node is then added to the ARF . The reward of the constructor is defined as the performance of a trained GCN ( evaluator ) on the constructed ARF . GRARF is validated on datasets from different domains including three citation networks , one social network , and an inductive protein-protein interaction dataset . GRARF matches or improves performances on node classification tasks compared with strong baselines . 5 Moreover , we design two tasks to test the models ’ abilities in focusing on informative contexts and leveraging long-distance dependencies by injecting node noises in graphs with different strategies . 2 PRELIMINARIES AND THEORIES . Notations . In our paper , we consider node-level supervised learning tasks on attributed graphs . An attributed graph G is generally represented as G = ( V , A , X ) , where V = { v1 , · · · , vn } denotes the set of nodes , A ∈ { 0 , 1 } n×n denotes the ( binary ) adjacency matrix , and X ∈ Rn×d0 denotes the input node features , xv ∈ Rd0 the features of node v. E is used as the set of edges . We use N ( vi ) to denote the one-hop neighborhood of node vi , with vi itself included . We use H ( l ) ∈ Rn×dl as the matrix containing dl-dimensional hidden representations of nodes in the l-th layer , h ( l ) v that of node 4With sparse adjacency matrices , the average complexity of graph convolutions is O ( dL ) , where L is the model depth and d is the graph degree ( or the neighborhood-sampling sizes in ( Hamilton et al. , 2017 ) ) . 5We mainly show the results of node classification tasks in our paper , whereas GRARF is intrinsically adapted to all node-level supervised learning tasks . v.  denotes the symmetrically normalized adjacency matrix with  = D−1/2 ( A+ In ) D−1/2 and D = diag ( d ) , di = ∑ j ( A+ In ) ij . We use bold letters for neural network parameters . The smoothness of Graph Attention Networks . 
As a pioneering work of simplifying architectures of graph neural networks , the vanilla GCN layers in ( Kipf & Welling , 2017 ) were defined as H ( l+1 ) = σ ( ÂH ( l ) W ( l ) ) = σ ∑ j∈N ( vi ) Âijh ( l ) j W ( l ) , l = 0 , 1 , · · · ( 1 ) In each layer , the node representations in one-hop neighborhoods were transformed with W and averaged by normalized edge weights Âij . Graph Attention Networks ( GATs ) ( Velickovic et al. , 2017 ) elaborated the average scheme in Eq . ( 1 ) with attention mechanisms ( Bahdanau et al. , 2015 ; Vaswani et al. , 2017 ) . Instead of using Âij , an attention weight αij between node vi and vj was calculated in GAT layers as eij = fθ ( hi , hj ) , αij = softmaxj ( eij ) = exp ( eij ) ∑ k∈N ( vi ) exp ( eik ) , ( 2 ) where fθ ( · ) is often called the energy function with parameter θ. GATs implicitly enabled specifying different weights in a neighborhood . However , under some common assumptions and as the neighborhood size increases , these attention weights , normalized with the softmax function , suffer from over-smoothness : all attention weights approach 0 as the neighborhood size increases . We formally introduce and prove this claim as Lemma 1 and Proposition 1 : Lemma 1 ( the smoothness of softmax ) . If random variables X1 , X2 , · · · are uniformly bounded with probability 1 , that is , for any i and some C , P ( |Xi| > C ) = 0 , then the softmax values taking { Xi } ni=1 as inputs approach 0 almost surely when n→∞ , i.e . eXi / n∑ j=1 eXj → 0 a.s. ( 3 ) Proof . The proof is simple noting that eXi/ ∑n j=1 e Xj > 0 , and that with probability 1 , eXi / n∑ j=1 eXj < eC / ne−C → 0 , n→∞ . Proposition 1 ( the smoothness of attention weights ) . If the representation of nodes ( random vectors ) H1 , H2 , · · · ∈ Rd are uniformly bounded with probability 1 ( for any i and some C , P ( ‖H1‖ > C ) = 0 ) , and for any ( fixed ) node vi , the energy function fθ ( hi , · ) is continuous on any closed set D ∈ Rd , then the attention weights in the neighborhood of vi approach 0 almost surely when n→∞ , i.e . αij = exp ( fθ ( hi , Hj ) ) ∑ k∈N ( vi ) exp ( fθ ( hi , Hk ) ) → 0 a.s. ( 4 ) Proof . Following the a.s. boundedness { Hi } s and the continuity condition on fθ ( · ) , the random energies Eij = fθ ( hi , Hj ) are also bounded a.s .. The desired result then follows Lemma 1 . Note that the continuity condition on fθ ( hi , · ) in Proposition 1 can be satisfied with almost any commonly used non-linear functions and ( regularized ) parameters in deep learning , specifically , those in the official version of GATs ( eij = aT [ Whi‖Whj ] , where ‖ is the operator of concatenation ) . Also , the boundedness of inputs is trivial in deep learning . GCNs with ARFs overcome smoothness . What Proposition 1 shows is that in large neighborhoods , attention weights are smoothed to 0 , thus hindering the discrimination of context importance . In addition , such smoothness can be immediately generated to any other form of normalized weights as long as αij > 0 uniformly and ∑ j∈N ( vi ) αij = 1 . We alleviate the smoothness with ARFs by incorporating discreteness . Specifically , let us denote the convolution in the evaluator as h′i = σ ∑ j∈Na ( u ) ηijhjW = σ ∑ j∈Nk ( u ) η̃ijhjW , η̃ij = { ηij , j ∈ Na ( u ) ,0 , j /∈ Na ( u ) , ( 5 ) where Na ( u ) is an ARF , k is the maximum hop that the ARF explores , and Nk is the entire k-hop neighborhood . 
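A minimal sketch of the masked convolution in Eq. (5): contexts outside the adaptive receptive field Na(u) receive a hard zero weight, so the effective coefficients η̃ are not squashed by normalizing over the whole k-hop neighborhood. Tensor shapes and names below are illustrative assumptions only.

```python
import torch

def arf_convolution(h, weights, arf_index, W, sigma=torch.relu):
    """Sketch of the ARF convolution in Eq. (5): only contexts selected into N_a(u)
    contribute; all other k-hop neighbours implicitly get weight 0.

    h         : (n, d)  hidden representations of all nodes
    weights   : (m,)    positive weights eta_ij for the m selected contexts
    arf_index : (m,)    indices of the contexts in N_a(u)
    W         : (d, d') layer parameters
    All names are illustrative; this is not the authors' released code."""
    selected = h[arf_index]                                  # gather contexts in the ARF
    aggregated = (weights.unsqueeze(-1) * selected).sum(dim=0)
    return sigma(aggregated @ W)
```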
Accordingly , η̃ij is not subjected to smoothness : if ηij has a uniform lower bound D > 0 , then for p ∈ Na ( u ) and q /∈ Na ( u ) , we have η̃ip − η̃iq > D , regardless of the sizes of N ( k ) . Note that ηij > 0 uniformly can be guaranteed in most cases when the maximum ARF size is limited , for example , with uniform weights or softmax weights of bounded energies . It should be noted that the discrimination of nodes in ARFs is different to that in MixHop ( Abu-ElHaija et al. , 2019 ) , which directly takes multi-hop nodes as inputs . MixHop can specify different parameters 6 for contexts on different hops ( hop-level discrimination ) , while it CAN NOT specify different weights for contexts on the same hop ( node-level discrimination ) as they were uniformly treated ( averaged ) . The two levels of discrimination are orthogonal , and we focus on the latter . Deep reinforcement learning on graphs . As the discrete context selection process is nondifferentiable , we apply deep reinforcement learning approaches to learn the policy of construting ARFs in GRARF , specifically , the Deep Q-Learning ( DQN ) ( Mnih et al. , 2015 ) algorithm . DQN uses deep neural networks to approximate the action value function ( Q-function ) , and chooses the action that maximizes it in each step . The Q-function is defined iteratively as Q∗ ( st , at ) = R ( st , at , st+1 ) + γmax a∈A Q∗ ( st+1 , a ) , ( 6 ) where s is the state , R ( · ) is the reward function , A is the action space , and γ is a discount factor . A reward shaping technique ( Ng et al. , 1999 ) is also used in GRARF to alleviate the sparsity of rewards , which decorates the original reward R ( · ) with a potential energy F ( · ) , yielding an immediate reward R̂ ( · ) . Denoted in formula , F ( s , a , s′ ) = Φ ( s′ ) − Φ ( s ) , R̂ ( s , a , s′ ) = R ( s , a , s′ ) + F ( s , a , s′ ) , ( 7 ) where Φ ( · ) is a fixed potential function of states that does not change during training . ( Ng et al. , 1999 ) proved that the optimal policies of MDPs remain invariant if R ( · ) is replaced by R̂ ( · ) . There are other recent papers implementing reinforcement learning on graphs . For example , GCPN ( You et al. , 2018 ) proposed an RL agent for generating graph representations of biomedical molecules , and DGN ( Jiang et al. , 2020 ) introduced a multi-agent reinforcement learning approach where the agents in the system formed a dynamic network . The successive molecule generation process in GCPN inspired us in designing the ARF constructor in GRARF , whereas the two models are of different motivations and applications . ARFs and neighborhood sampling . It should be noted that GRARF can also be interpreted as a neighborhood sampling approach . Neighborhood sampling was proposed as a necessary process to apply GCNs to large graphs with arbitrarily large neighborhoods . GraphSAGE ( Hamilton et al. , 2017 ) proposed a general framework of neighborhood sampling and aggregation , where contexts were uniformly sampled . Later work improved the sampling strategy with importance sampling ( Chen et al. , 2018 ) and explicit variance reduction ( Huang et al. , 2018 ; Hamilton et al. , 2017 ) . Sub-graphs instead of subsets of neighborhoods were directly sampled in ( Zeng et al. , 2020 ) . Indeed , selecting ARF nodes takes a specific form of neighborhood sampling . 
However , the aim of constructing ARFs is to ignore trivial information and to focus on critical contexts , rather than to estimate the neighborhood average , which is the primary target of neighborhood sampling . Therefore , despite the similarity , the two approaches point in different directions . | The authors theoretically and empirically show that the soft-attention mechanism used in GCNs suffers from over-smoothness in large neighborhoods. To address this shortcoming, they propose a neighborhood sampling approach called adaptive receptive fields (ARFs) which discretely selects nodes among the multi-hop neighborhood and allows efficient exploration of long-distance dependencies in graphs. The authors also propose GRARF (GCN with Reinforced ARF), which learns an optimal policy for constructing ARFs with reinforcement learning. For a given node, an RL agent successively expands the ARF via a two-stage process. Firstly, a contact node in an intermediately-constructed ARF is selected, and then a context among the direct neighbors of the contact node is added to the ARF. The reward is the performance of the trained GCN on the constructed ARF. Overall, the results demonstrate the effectiveness of the approach on benchmark datasets. The authors also demonstrate that the method is quite effective at handling noise in the graph compared to GCN and GAT by evaluating them on the Cora dataset with synthetically added noise. | SP:4addc1c9c0f91be7fc176425cb41c22cb4e562ba |
Model-centric data manifold: the data through the eyes of the model | 1 INTRODUCTION . In machine learning , models are categorized as discriminative models or generative models . From its inception , deep learning has focused on classification and discriminative models ( Krizhevsky et al. , 2012 ; Hinton et al. , 2012 ; Collobert et al. , 2011 ) . Another perspective came with the construction of generative models based on neural networks ( Kingma & Welling , 2014 ; Goodfellow et al. , 2014 ; Van den Oord et al. , 2016 ; Kingma & Dhariwal , 2018 ) . Both kinds of models give us information about the data and the similarity between examples . In particular , generative models introduce a geometric structure on generated data . Such models transform a random low-dimensional vector to an example sampled from a probability distribution approximating the one of the training dataset . As proved by Arjovsky & Bottou ( 2017 ) , generated data lie on a countable union of manifolds . This fact supports the human intuition that data have a low-dimensional manifold structure , but in generative models the dimension of such a manifold is usually a hyper-parameter fixed by the experimenter . A recent algorithm by Peebles et al . ( 2020 ) provides a way to find an approximation of the number of dimensions of the data manifold , deactivating irrelevant dimensions in a GAN . Similarly , here we try to understand if a discriminative model can be used to detect a manifold structure on the space containing data and to provide tools to navigate this manifold . The implicit definition of such a manifold and the possibility to trace paths between points on the manifold can open many possible applications . In particular , we could use paths to define a system of coordinates on the manifold ( more specifically on a chart of the manifold ) . Such coordinates would immediately give us a low-dimensional parametrization of our data , allowing us to do dimensionality reduction . In supervised learning , a model is trained on a labeled dataset to identify the correct label on unseen data . A trained neural network classifier builds a hierarchy of representations that encodes increasingly complex features of the input data ( Olah et al. , 2017 ) . Through the representation function , a distance ( e.g . euclidean or cosine ) on the representation space of a layer endows input data with a distance . This pyramid of distances on examples is increasingly class-aware : the deeper is the layer , the better the metric reflects the similarity of data according to the task at hand . This observation suggests that the model is implicitly organizing the data according to a suitable structure . Unfortunately , these intermediate representations and metrics are insufficient to understand the geometric structure of data . First of all , representation functions are not invertible , so we can not recover the original example from its intermediate representation or interpolate between data points . Moreover , the domain of representation functions is the entire data domain Rn . This domain is mostly composed of meaningless noise and data occupy only a thin region inside of it . So , even if representation functions provide us a distance , those metrics are incapable of distinguishing between meaningful data and noise . We find out that a ReLU neural network implicitly identifies a low-dimensional submanifold of the data domain that contains real data . We prove that if the activation function is piecewise-linear ( e.g . 
ReLU ) , the neural network decomposes the data domain Rn as the disjoint union of submanifolds ( the leaves of a foliation , using the terminology of differential geometry ) . The dimension of every submanifold ( every leaf of the foliation ) is bounded by the number of classes of our classification model , so it is much smaller than n , the dimension of the data domain Rn . Our main theoretical result , Theorem 3.1 , stems from the study of the properties of a variant of the Fisher Information matrix , the local data matrix . However , Theorem 3.1 can not tell us which leaves of this foliation are meaningful , i.e . what are the possible interesting practical applications of the submanifolds that compose the foliation . The interpretation of this geometric structure can only come from experiments . We report experiments performed on MNIST dataset . We choose to focus on MNIST because it is easily interpretable and because small networks are sufficient to reach a high accuracy . Our experiments suggest that all valid data points lie on only one leaf of the foliation , the data leaf . To observe this phenomenon we take an example from the dataset and we try to connect it with another random example following a path along the leaf containing the starting point . If such a path exists , it means that the destination example belongs to the same leaf of the foliation . Visualizing the intermediate points on these joining paths , we see that the low-dimensional data manifold defined by the model is not the anthropocentric data manifold composed of data meaningful for a human observer . The model-centric data manifold comprises images that do not belong to a precise class . The model needs those transition points to connect points with different labels . At the same time , it understands that such transition points represent an ambiguous digit : on such points , the model assigns a low probability to every class . The experiments also show that moving orthogonally to the data leaf we find noisy images . That means that the other leaves of the foliation contain images with a level of noise that increases with the distance from the data leaf . These noisy images become soon meaningless to the human eye , while the model still classifies them with high confidence . This fact is a consequence of the property of the local data matrix : equation ( 8 ) prescribes that the model output does not change if we move in a direction orthogonal to the tangent space of the leaf on which our data is located . This remark points us to other possible applications of the model-centric data manifold . We could project a noisy point on the data leaf to perform denoising , or we can use the distance from the data leaf to recognize out-of-distribution examples . The main contributions of the paper are : 1. the definition of the local data matrix G ( x , w ) at a point x of the data domain and for a given model w , and the study of its properties ; 2. the proof that the subspace spanned by the eigenvectors with non-zero eigenvalue of the local data matrix G ( x , w ) can be interpreted as the tangent space of a Riemannian manifold , whose dimension is bounded by the number of classes on which our model is trained ; 3. the identification and visualization of the model-centric data manifold through paths , obtained via experiments on MNIST . Organization of the paper . 
In Section 2 , we review the fundamentals of information geometry using a novel perspective that aims at facilitating the comprehension of the key concepts of the paper . We introduce the local data matrix G ( x , w ) and we summarize its properties in Prop . 2.1 . In Section 3 , we show that , through the local data matrix , under some mild hypotheses , the data domain foliates as a disjoint union of leaves , which are all Riemannian submanifolds of Rn , with metric given via G ( x , w ) . In Section 4 , we provide evidence that all our dataset lies on one leaf of the foliation and that moving along directions orthogonal to the data leaf amounts to adding noise to data . 2 INFORMATION GEOMETRY . Here we collect some results pertaining to information geometry ( Amari , 1998 ; Nielsen , 2018 ) , using a novel perspective adapted to our question , namely how to provide a manifold structure to the space containing data . Let p ( y|x , w ) be a discrete probability distribution on C classification labels , i.e . p ( y|x , w ) = ( pi ( y|x , w ) ) i=1 , ... , C , x ∈ Σ ⊂ Rn , w ∈ Rd . In the applications , x represent input data belonging to a certain dataset Σ , while w are the learning parameters , i.e . the parameters of the empirical model . As we are going to see in our discussion later on , it is fruitful to treat the two sets of variables x and w on equal grounds . This will naturally lead to a geometric structure on a low dimensional submanifold of Rn , that we can navigate through paths joining points in the dataset Σ ( see Section 4 ) . In order to give some context to our treatment , we define , following Amari ( 1998 ) Section 3 , the information loss I ( x , w ) = − log ( p ( y|x , w ) ) and the loss function L ( x , w ) = Ey∼q [ I ( x , w ) ] . Typically L ( x , w ) is used for practical optimizations , where we need to compare the model output distribution p ( y|x , w ) with a certain known true distribution q ( y|x ) . We may also view L ( x , w ) as the Kullback-Leibler divergence up to the constant − ∑ i qi ( y|x ) log qi ( y|x ) , irrelevant for any optimization problem : L ( x , w ) = Ey∼q [ − log ( p ( y|x , w ) ) ] = C∑ i=1 qi ( y|x ) log qi ( y|x ) pi ( y|x , w ) − C∑ i=1 qi ( y|x ) log qi ( y|x ) = = DKL ( q ( y|x ) ||p ( y|x , w ) ) − C∑ i=1 qi ( y|x ) log qi ( y|x ) ( 1 ) A popular choice for p ( y|x , w ) in deep learning classification algorithms is pi ( y|x , w ) = softmax ( s1 ( x , w ) , . . . , sC ( x , w ) ) i = esi ( x , w ) ∑C j=1 e sj ( x , w ) , ( 2 ) where s ( x , w ) ∈ RC is a score function determined by parameters w. From such p ( y|x , w ) we derive the cross-entropy with softmax loss function : L ( x , w ) = Ey∼q [ I ( x , w ) ] = Ey∼q [ − log p ( y|x , w ) ] = −syx ( x , w ) + log C∑ j=1 esj ( x , w ) , ( 3 ) where L ( x , w ) is computed with respect to the probability mass distribution q ( y|x ) assigning 1 to the correct label yx of our datum x and zero otherwise . Other approaches rely on label smoothing ( Szegedy et al. , 2016 ) , hence they take a different L ( x , w ) . However , since our treatment mainly relies on the expression of I ( x , w ) , our results will also apply to such loss functions , provided some hypotheses , that we list in Section 3 , are satisfied . Going back to the general setting , notice that : Ey∼p [ ∇wI ( x , w ) ] = 0 . In fact , Ey∼p [ ∇w ( log ( p ( y|x , w ) ) ] = C∑ i=1 pi ( y|x , w ) ∇w log pi ( y|x , w ) = C∑ i=1 ∇wpi ( y|x , w ) = ∇w1 = 0 . 
( 4 ) Let us now define the following two matrices : F ( x , w ) = Ey∼p [ ∇w ( log ( p ( y|x , w ) ) · ∇w ( log ( p ( y|x , w ) ) T ] ( 5 ) G ( x , w ) = Ey∼p [ ∇x ( log ( p ( y|x , w ) ) · ∇x ( log ( p ( y|x , w ) ) T ] ( 6 ) We call F ( x , w ) the local Fisher matrix at the datum x and G ( x , w ) the local data matrix given the model w. The Fisher matrix ( Amari , 1998 ) is obtained as F ( w ) = Ex∼Σ [ F ( x , w ) ] and it gives information on the metric structure of the space of parameters . Similarly , we can reverse our perspective and see how G ( x , w ) , allows us to recognize some structure in our dataset . The following observations apply to both F ( x , w ) and G ( x , w ) and provide the theoretical cornerstone of Section 3 . Proposition 2.1 . Let the notation be as above . Then : 1. kerF ( x , w ) = span { ∇w log pi ( y|x , w ) |1 ≤ i ≤ C } ⊥ ; kerG ( x , w ) = span { ∇x log pi ( y|x , w ) |1 ≤ i ≤ C } ⊥ . 2. rank F ( x , w ) < C , rank G ( x , w ) < C. Proof . See Appendix B . This result tells us that the rank of both F ( x , w ) andG ( x , w ) is bounded byC , the number of classes in our classification problem . The consequence of the bound on rank F ( x , w ) is the following : in SGD dynamics with a single example , the number of directions in which the change in our parameters modifies our loss is severely limited . The consequence on the bound on rank G ( x , w ) is even more striking : it will allow us to define a submanifold of Rn of dimension rankG ( x , w ) ( Section 3 ) , that our experiments show contains our dataset ( Section 4 ) . In practical situations , this dimension is much lower than the size of G ( x , w ) , i.e . the input size n , as shown in Table 1 . | The authors propose a so called data matrix that is induced in the input space of a deep neural network classifier. This matrix is similar to the Fisher-Rao metric, but for the input and not the parameters of the model. The analysis of this matrix shows that the classifier induces in the input space a specific structure, which the authors study. Constructive experiments are used in order to empirically verify the claims. | SP:11f0323635f0647b3407ac61faad5b149754b06c |
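The local data matrix of Eq. (6) in the excerpt above is straightforward to assemble for a small classifier with automatic differentiation, which also makes the rank bound of Proposition 2.1 easy to check empirically. The sketch below assumes a model returning logits over C classes and is illustrative rather than an optimized implementation.

```python
import torch

def local_data_matrix(model, x):
    """Local data matrix G(x, w) of Eq. (6) for a single input x:
    G = E_{y~p} [ grad_x log p(y|x,w) grad_x log p(y|x,w)^T ].
    `model` is any classifier returning logits over C classes; a sketch only."""
    x = x.clone().detach().requires_grad_(True)
    log_p = torch.log_softmax(model(x), dim=-1).squeeze(0)      # (C,)
    p = log_p.exp().detach()
    rows = []
    for i in range(log_p.shape[0]):
        g, = torch.autograd.grad(log_p[i], x, retain_graph=True)
        rows.append(g.flatten())
    J = torch.stack(rows)                                       # rows: grad_x log p_i
    return J.t() @ torch.diag(p) @ J                            # weighted sum = E_{y~p}[...]

# Proposition 2.1 predicts rank G(x, w) < C, far below the input dimension n, e.g.
# rank = torch.linalg.matrix_rank(local_data_matrix(net, x0), tol=1e-6)
```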
Model-centric data manifold: the data through the eyes of the model | 1 INTRODUCTION . In machine learning , models are categorized as discriminative models or generative models . From its inception , deep learning has focused on classification and discriminative models ( Krizhevsky et al. , 2012 ; Hinton et al. , 2012 ; Collobert et al. , 2011 ) . Another perspective came with the construction of generative models based on neural networks ( Kingma & Welling , 2014 ; Goodfellow et al. , 2014 ; Van den Oord et al. , 2016 ; Kingma & Dhariwal , 2018 ) . Both kinds of models give us information about the data and the similarity between examples . In particular , generative models introduce a geometric structure on generated data . Such models transform a random low-dimensional vector to an example sampled from a probability distribution approximating the one of the training dataset . As proved by Arjovsky & Bottou ( 2017 ) , generated data lie on a countable union of manifolds . This fact supports the human intuition that data have a low-dimensional manifold structure , but in generative models the dimension of such a manifold is usually a hyper-parameter fixed by the experimenter . A recent algorithm by Peebles et al . ( 2020 ) provides a way to find an approximation of the number of dimensions of the data manifold , deactivating irrelevant dimensions in a GAN . Similarly , here we try to understand if a discriminative model can be used to detect a manifold structure on the space containing data and to provide tools to navigate this manifold . The implicit definition of such a manifold and the possibility to trace paths between points on the manifold can open many possible applications . In particular , we could use paths to define a system of coordinates on the manifold ( more specifically on a chart of the manifold ) . Such coordinates would immediately give us a low-dimensional parametrization of our data , allowing us to do dimensionality reduction . In supervised learning , a model is trained on a labeled dataset to identify the correct label on unseen data . A trained neural network classifier builds a hierarchy of representations that encodes increasingly complex features of the input data ( Olah et al. , 2017 ) . Through the representation function , a distance ( e.g . euclidean or cosine ) on the representation space of a layer endows input data with a distance . This pyramid of distances on examples is increasingly class-aware : the deeper is the layer , the better the metric reflects the similarity of data according to the task at hand . This observation suggests that the model is implicitly organizing the data according to a suitable structure . Unfortunately , these intermediate representations and metrics are insufficient to understand the geometric structure of data . First of all , representation functions are not invertible , so we can not recover the original example from its intermediate representation or interpolate between data points . Moreover , the domain of representation functions is the entire data domain Rn . This domain is mostly composed of meaningless noise and data occupy only a thin region inside of it . So , even if representation functions provide us a distance , those metrics are incapable of distinguishing between meaningful data and noise . We find out that a ReLU neural network implicitly identifies a low-dimensional submanifold of the data domain that contains real data . We prove that if the activation function is piecewise-linear ( e.g . 
ReLU ) , the neural network decomposes the data domain Rn as the disjoint union of submanifolds ( the leaves of a foliation , using the terminology of differential geometry ) . The dimension of every submanifold ( every leaf of the foliation ) is bounded by the number of classes of our classification model , so it is much smaller than n , the dimension of the data domain Rn . Our main theoretical result , Theorem 3.1 , stems from the study of the properties of a variant of the Fisher Information matrix , the local data matrix . However , Theorem 3.1 can not tell us which leaves of this foliation are meaningful , i.e . what are the possible interesting practical applications of the submanifolds that compose the foliation . The interpretation of this geometric structure can only come from experiments . We report experiments performed on MNIST dataset . We choose to focus on MNIST because it is easily interpretable and because small networks are sufficient to reach a high accuracy . Our experiments suggest that all valid data points lie on only one leaf of the foliation , the data leaf . To observe this phenomenon we take an example from the dataset and we try to connect it with another random example following a path along the leaf containing the starting point . If such a path exists , it means that the destination example belongs to the same leaf of the foliation . Visualizing the intermediate points on these joining paths , we see that the low-dimensional data manifold defined by the model is not the anthropocentric data manifold composed of data meaningful for a human observer . The model-centric data manifold comprises images that do not belong to a precise class . The model needs those transition points to connect points with different labels . At the same time , it understands that such transition points represent an ambiguous digit : on such points , the model assigns a low probability to every class . The experiments also show that moving orthogonally to the data leaf we find noisy images . That means that the other leaves of the foliation contain images with a level of noise that increases with the distance from the data leaf . These noisy images become soon meaningless to the human eye , while the model still classifies them with high confidence . This fact is a consequence of the property of the local data matrix : equation ( 8 ) prescribes that the model output does not change if we move in a direction orthogonal to the tangent space of the leaf on which our data is located . This remark points us to other possible applications of the model-centric data manifold . We could project a noisy point on the data leaf to perform denoising , or we can use the distance from the data leaf to recognize out-of-distribution examples . The main contributions of the paper are : 1. the definition of the local data matrix G ( x , w ) at a point x of the data domain and for a given model w , and the study of its properties ; 2. the proof that the subspace spanned by the eigenvectors with non-zero eigenvalue of the local data matrix G ( x , w ) can be interpreted as the tangent space of a Riemannian manifold , whose dimension is bounded by the number of classes on which our model is trained ; 3. the identification and visualization of the model-centric data manifold through paths , obtained via experiments on MNIST . Organization of the paper . 
In Section 2 , we review the fundamentals of information geometry using a novel perspective that aims at facilitating the comprehension of the key concepts of the paper . We introduce the local data matrix G ( x , w ) and we summarize its properties in Prop . 2.1 . In Section 3 , we show that , through the local data matrix , under some mild hypotheses , the data domain foliates as a disjoint union of leaves , which are all Riemannian submanifolds of Rn , with metric given via G ( x , w ) . In Section 4 , we provide evidence that all our dataset lies on one leaf of the foliation and that moving along directions orthogonal to the data leaf amounts to adding noise to data . 2 INFORMATION GEOMETRY . Here we collect some results pertaining to information geometry ( Amari , 1998 ; Nielsen , 2018 ) , using a novel perspective adapted to our question , namely how to provide a manifold structure to the space containing data . Let p ( y|x , w ) be a discrete probability distribution on C classification labels , i.e . p ( y|x , w ) = ( pi ( y|x , w ) ) i=1 , ... , C , x ∈ Σ ⊂ Rn , w ∈ Rd . In the applications , x represent input data belonging to a certain dataset Σ , while w are the learning parameters , i.e . the parameters of the empirical model . As we are going to see in our discussion later on , it is fruitful to treat the two sets of variables x and w on equal grounds . This will naturally lead to a geometric structure on a low dimensional submanifold of Rn , that we can navigate through paths joining points in the dataset Σ ( see Section 4 ) . In order to give some context to our treatment , we define , following Amari ( 1998 ) Section 3 , the information loss I ( x , w ) = − log ( p ( y|x , w ) ) and the loss function L ( x , w ) = Ey∼q [ I ( x , w ) ] . Typically L ( x , w ) is used for practical optimizations , where we need to compare the model output distribution p ( y|x , w ) with a certain known true distribution q ( y|x ) . We may also view L ( x , w ) as the Kullback-Leibler divergence up to the constant − ∑ i qi ( y|x ) log qi ( y|x ) , irrelevant for any optimization problem : L ( x , w ) = Ey∼q [ − log ( p ( y|x , w ) ) ] = C∑ i=1 qi ( y|x ) log qi ( y|x ) pi ( y|x , w ) − C∑ i=1 qi ( y|x ) log qi ( y|x ) = = DKL ( q ( y|x ) ||p ( y|x , w ) ) − C∑ i=1 qi ( y|x ) log qi ( y|x ) ( 1 ) A popular choice for p ( y|x , w ) in deep learning classification algorithms is pi ( y|x , w ) = softmax ( s1 ( x , w ) , . . . , sC ( x , w ) ) i = esi ( x , w ) ∑C j=1 e sj ( x , w ) , ( 2 ) where s ( x , w ) ∈ RC is a score function determined by parameters w. From such p ( y|x , w ) we derive the cross-entropy with softmax loss function : L ( x , w ) = Ey∼q [ I ( x , w ) ] = Ey∼q [ − log p ( y|x , w ) ] = −syx ( x , w ) + log C∑ j=1 esj ( x , w ) , ( 3 ) where L ( x , w ) is computed with respect to the probability mass distribution q ( y|x ) assigning 1 to the correct label yx of our datum x and zero otherwise . Other approaches rely on label smoothing ( Szegedy et al. , 2016 ) , hence they take a different L ( x , w ) . However , since our treatment mainly relies on the expression of I ( x , w ) , our results will also apply to such loss functions , provided some hypotheses , that we list in Section 3 , are satisfied . Going back to the general setting , notice that : Ey∼p [ ∇wI ( x , w ) ] = 0 . In fact , Ey∼p [ ∇w ( log ( p ( y|x , w ) ) ] = C∑ i=1 pi ( y|x , w ) ∇w log pi ( y|x , w ) = C∑ i=1 ∇wpi ( y|x , w ) = ∇w1 = 0 . 
( 4 ) Let us now define the following two matrices : F ( x , w ) = Ey∼p [ ∇w ( log ( p ( y|x , w ) ) · ∇w ( log ( p ( y|x , w ) ) T ] ( 5 ) G ( x , w ) = Ey∼p [ ∇x ( log ( p ( y|x , w ) ) · ∇x ( log ( p ( y|x , w ) ) T ] ( 6 ) We call F ( x , w ) the local Fisher matrix at the datum x and G ( x , w ) the local data matrix given the model w. The Fisher matrix ( Amari , 1998 ) is obtained as F ( w ) = Ex∼Σ [ F ( x , w ) ] and it gives information on the metric structure of the space of parameters . Similarly , we can reverse our perspective and see how G ( x , w ) , allows us to recognize some structure in our dataset . The following observations apply to both F ( x , w ) and G ( x , w ) and provide the theoretical cornerstone of Section 3 . Proposition 2.1 . Let the notation be as above . Then : 1. kerF ( x , w ) = span { ∇w log pi ( y|x , w ) |1 ≤ i ≤ C } ⊥ ; kerG ( x , w ) = span { ∇x log pi ( y|x , w ) |1 ≤ i ≤ C } ⊥ . 2. rank F ( x , w ) < C , rank G ( x , w ) < C. Proof . See Appendix B . This result tells us that the rank of both F ( x , w ) andG ( x , w ) is bounded byC , the number of classes in our classification problem . The consequence of the bound on rank F ( x , w ) is the following : in SGD dynamics with a single example , the number of directions in which the change in our parameters modifies our loss is severely limited . The consequence on the bound on rank G ( x , w ) is even more striking : it will allow us to define a submanifold of Rn of dimension rankG ( x , w ) ( Section 3 ) , that our experiments show contains our dataset ( Section 4 ) . In practical situations , this dimension is much lower than the size of G ( x , w ) , i.e . the input size n , as shown in Table 1 . | In this work, the authors showed that deep ReLU networks can model the low dimensional manifold structure of the dataset. The authors first define a local data matrix G which is analogous to Fisher matrix. Then they proved that the tangent space of the data manifold is spanned by the eigen vectors of G corresponding to non-zero eigen values. The authors visualize this data manifold on MNIST data. | SP:11f0323635f0647b3407ac61faad5b149754b06c |
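Given the local data matrix G(x, w) of Eq. (6), the tangent space of the leaf through x can be approximated by the eigenvectors of G with non-negligible eigenvalue, and a path on the leaf can be traced by repeatedly projecting a desired direction onto that span. The eigenvalue cutoff, step size, and helper structure below are assumptions made for illustration; this is not the authors' path-construction procedure.

```python
import torch

def local_data_matrix(model, x):
    """G(x, w) of Eq. (6): sum_i p_i * grad_x log p_i * grad_x log p_i^T (sketch)."""
    x = x.clone().detach().requires_grad_(True)
    log_p = torch.log_softmax(model(x), dim=-1).squeeze(0)
    rows = [torch.autograd.grad(lp, x, retain_graph=True)[0].flatten() for lp in log_p]
    J = torch.stack(rows)
    return J.t() @ torch.diag(log_p.exp().detach()) @ J

def leaf_step(model, x, direction, step_size=0.1, tol=1e-8):
    """Moves x by a small step projected onto the tangent space of its leaf, i.e. onto
    the span of eigenvectors of G(x, w) with eigenvalue above `tol`.  The cutoff and
    step size are arbitrary illustrative choices."""
    eigvals, eigvecs = torch.linalg.eigh(local_data_matrix(model, x))
    tangent = eigvecs[:, eigvals > tol]                  # dimension < C by Prop. 2.1
    proj = tangent @ (tangent.t() @ direction.flatten())
    return x + step_size * proj.view_as(x)

# Iterating leaf_step with direction = (x_target - x) sketches the kind of on-leaf path
# described in the introduction above; stepping along the discarded near-kernel
# directions instead leaves the model's output (locally) unchanged.
```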
Impact of Representation Learning in Linear Bandits | √ kN + √ dkNT ) regret , where N is the number of rounds we play for each bandit . When T is sufficiently large , our algorithm significantly outperforms the naive algorithm ( playing T bandits independently ) that achieves Õ ( T √ dN ) regret . We also provide an Ω ( T √ kN + √ dkNT ) regret lower bound , showing that our algorithm is minimax-optimal up to poly-logarithmic factors . Furthermore , we extend our algorithm to the infinite-action setting and obtain a corresponding regret bound which demonstrates the benefit of representation learning in certain regimes . We also present experiments on synthetic and realworld data to illustrate our theoretical findings and demonstrate the effectiveness of our proposed algorithms . 1 INTRODUCTION . This paper investigates the benefit of using representation learning for sequential decision-making problems . Representation learning learns a joint low-dimensional embedding ( feature extractor ) from different but related tasks and then uses a simple function ( often a linear one ) on top of the embedding ( Baxter , 2000 ; Caruana , 1997 ; Li et al. , 2010 ) The mechanism behind is that since the tasks are related , we can extract the common information more efficiently than treating each task independently . Empirically , representation learning has become a popular approach for improving sample efficiency across various machine learning tasks ( Bengio et al. , 2013 ) . In particular , recently , representation learning has become increasingly more popular in sequential decision-making problems ( Teh et al. , 2017 ; Taylor & Stone , 2009 ; Lazaric & Restelli , 2011 ; Rusu et al. , 2015 ; Liu et al. , 2016 ; Parisotto et al. , 2015 ; Higgins et al. , 2017 ; Hessel et al. , 2019 ; Arora et al. , 2020 ; D ’ Eramo et al. , 2020 ) . For example , many sequential decision-making tasks share the same environment but have different reward functions . Thus a natural approach is to learn a succinct representation that describes the environment and then make decisions for different tasks on top of the learned representation . While representation learning is already widely applied in sequential decision-making problems empirically , its theoretical foundation is still limited . One important problem remains open : When does representation learning provably improve efficiency of sequential decision-making problems ? We take a step to characterize the benefit of representation learning in sequential decision-making problems . We tackle the above problem in the linear bandits setting , one of the most fundamental and popular settings in sequential decision-making problems . This model is widely used in applications as such clinical treatment , manufacturing process , job scheduling , recommendation systems , etc ( Dani et al. , 2008 ; Chu et al. , 2011 ) . We study the multi-task version of linear bandits , which naturally models the scenario where one needs to deal with multiple different but closely related sequential decision-making problems concurrently . We will mostly focus on the finite-action setting . Specifically , we have T tasks , each of which is governed by an unknown linear coefficient θt ∈ Rd . At the n-th round , for each task t ∈ [ T ] , the player chooses an action an , t that belongs to a finite set , and receive a reward rn , t with expectation E rn , t = 〈θt , xn , t , an , t〉 where xn , t , an , t represents the context of action an , t . 
For this problem , a straightforward approach is to treat each task independently , which leads to Õ ( T √ dN ) 1 total regret . Can we do better ? Clearly , if the tasks are independent , then by the classical Ω ( √ dN ) per task lower bound for linear bandit , it is impossible to do better . We investigate how representation learning can help if the tasks are related . Our main assumption is the existence of an unknown linear feature extractorB ∈ Rd×k with k d and a set of linear coefficients { wt } Tt=1 such that θt = Bwt . Under this assumption , the tasks are closely related as B is a shared linear feature extractor that maps the raw contexts xn , t , a ∈ Rd to a low-dimensional embedding B > xn , t , a ∈ Rk . In this paper , we focus on the regime where k d , N , T . This regime is common in real-world problems , e.g. , computer vision , where the input dimension is high , the number of data is large , many task are related , and there exists a low-dimension representation among these tasks that we can utilize . Problems with similar assumptions have been studied in the supervised learning setting ( Ando & Zhang , 2005 ) . However , to our knowledge , this formulation has not been studied in the bandit setting . Our Contributions We give the first rigorous characterization on the benefit of representation learning for multi-task linear bandits . Our contributions are summarized below . • We design a new algorithm for the aforementioned problem . Theoretically , we show our algorithm incurs Õ ( √ dkTN + T √ kN ) total regret in N rounds for all T tasks . Therefore , our algorithm outperforms the naive approach with O ( T √ dN ) regret . To our knowledge , this is the first theoretical result demonstrating the benefit of representation learning for bandits problems . • To complement our upper bound , we also provide an Ω ( √ dkTN+T √ kN ) lower bound , showing our regret bound is tight up to polylogarithmic factors . • We further design a new algorithm for the infinite-action setting , which has a regret Õ ( d1.5k √ TN + kT √ N ) , which outperforms the naive approach with O ( Td √ N ) regret in the regime where T = Ω̃ ( dk2 ) . • We provide simulations and an experiment on MNIST dataset to illustrate the effectiveness of our algorithms and the benefits of representation learning . Organization This paper is organized as follows . In Section 2 , we discuss related work . In Section 3 , we introduce necessary notation , formally set up our problem , and describe our assumptions . In Section 4 , we present our main algorithm for the finite-action setting and its performance guarantee . In Section 5 , we describe our algorithm and its theoretical guarantee for the infinite-action setting . In Section 6 , we provide simulation studies and real-world experiments to validate the effectiveness of our approach . We conclude in Section 7 and defer all proofs to the Appendix . 2 RELATED WORK . Here we mainly focus on related theoretical results . We refer readers to Bengio et al . ( 2013 ) for empirical results of using representation learning . For supervised learning , there is a long line of works on multi-task learning and representation learning with various assumptions ( Baxter , 2000 ; Ando & Zhang , 2005 ; Ben-David & Schuller , 2003 ; Maurer , 2006 ; Cavallanti et al. , 2010 ; Maurer et al. , 2016 ; Du et al. , 2020 ; Tripuraneni et al. , 2020 ) . 1Õ ( · ) omits logarithmic factors . All these results assumed the existence of a common representation shared among all tasks . 
However , this assumption alone is not sufficient . For example , Maurer et al . ( 2016 ) further assumed every task is i.i.d . drawn from an underlying distribution . Recently , Du et al . ( 2020 ) replaced the i.i.d . assumption with a deterministic assumption on the input distribution . Finally , it is worth mentioning that Tripuraneni et al . ( 2020 ) gave the method-of-moments estimator and built the confidence ball for the feature extractor , which inspired our algorithm for the infinite-action setting . The benefit of representation learning has been studied in sequential decision-making problems , especially in reinforcement learning domains . D ’ Eramo et al . ( 2020 ) showed that representation learning can improve the rate of approximate value iteration algorithm . Arora et al . ( 2020 ) proved that representation learning can reduce the sample complexity of imitation learning . Both works require a probabilistic assumption similar to that in ( Maurer et al. , 2016 ) and the statistical rates are of similar forms as those in ( Maurer et al. , 2016 ) . We remark that representation learning is also closely connected to meta-learning ( Schaul & Schmidhuber , 2010 ) . Raghu et al . ( 2019 ) empirically suggested that the effectiveness of metalearning is due to its ability to learn a useful representation . There is a line of works that analyzed the theoretical properties of meta-learning ( Denevi et al. , 2019 ; Finn et al. , 2019 ; Khodak et al. , 2019 ; Lee et al. , 2019 ; Bertinetto et al. , 2018 ) . We also note that there are analyses for other representation learning schemes ( Arora et al. , 2019 ; McNamara & Balcan , 2017 ; Galanti et al. , 2016 ; Alquier et al. , 2016 ; Denevi et al. , 2018 ) . Linear bandits ( stochastic linear bandits / linearly parameterized bandits / contextual linear bandits ) have been studied in recent years ( Auer , 2002 ; Dani et al. , 2008 ; Rusmevichientong & Tsitsiklis , 2010 ; Abbasi-Yadkori et al. , 2011 ; Chu et al. , 2011 ; Li et al. , 2019a ; b ) . The studies are divided into two branches according to whether the action set is finite or infinite . For the finite-action setting , Θ̃ ( √ dN ) has been shown to be the near-optimal regret bound ( Chu et al. , 2011 ; Li et al. , 2019a ) , and for the infinite-action setting , Θ̃ ( d √ N ) regret bound has been shown to be near-optimal ( Dani et al. , 2008 ; Rusmevichientong & Tsitsiklis , 2010 ; Li et al. , 2019b ) . Some previous work studied the impact of low-rank structure in linear bandit . Lale et al . ( 2019 ) studied a setting where the context vectors share a low-rank structure . Specifically , in their setting , the context vectors consist of two parts , i.e . x̂ = x+ψ , so that x is from a hidden low-rank subspace and ψ is i.i.d . drawn from an isotropic distribution . Jun et al . ( 2019 ) and Lu et al . ( 2020 ) studied the bilinear bandits with low-rank structure . In their setting , the player chooses two actions x , y and receives the stochastic reward with mean x > Θy , where Θ is an unknown low-rank bilinear form . The algorithms proposed in the aforementioned papers share some similarities with our Algorithm 2 for our infinite-action setting , in that both used Davis-Kahan theorem to recover and exploit the low-rank structure . Some previous work proposed multi-task bandits with different settings . Deshmukh et al . ( 2017 ) proposed a setting under the contextual bandit framework . They assumed similarities among arms . Bastani et al . 
( 2019 ) studied a setting where the coefficients of the tasks were drawn from a Gaussian distribution fixed across tasks and proposed an algorithm based on Thompson sampling . Soare et al . ( 2018 ) proposed a setting where tasks were played one by one sequentially and the coefficients of the tasks were close in ℓ2 distance . In our setting , the tasks are played simultaneously and the coefficients share a common linear feature extractor . | This paper studies the benefits of learning a low-rank feature extractor in multi-task linear bandits. Specifically, the paper studies the setting where an unknown common linear feature extractor $B \in R^{d \times k}$ maps the original $d$-dimensional contexts $x$ to a $k$-dimensional representation. Essentially, for the multi-task linear bandit problem $r_t = \theta_t^T x$, this paper assumes the matrix of model parameters $\Theta \in R^{d\times T}$ is low-rank and can be factorized as $\Theta = BW$ with rank $k$. The paper proposes algorithms to estimate both $B$ and $W$ in the finite-action and infinite-action settings. In the finite-action setting, the proposed solution is a greedy algorithm, while in the infinite-action setting, the proposed solution is an explore-then-commit method. The theoretical results show that the regret is $O(T\sqrt{kN} + \sqrt{dkTN})$ and matches the lower bound. Simulation results show that the algorithm outperforms baselines running independent linear bandit algorithms. | SP:0fa4963f1d57d48a6271fe726358d204b1e286e8 |
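To get a feel for when the shared representation pays off, the following back-of-the-envelope comparison plugs illustrative values of d, k, T, N into the two regret expressions quoted above; constants and logarithmic factors are ignored, so this is only a rough order-of-magnitude sketch.

```python
import numpy as np

d, k, T, N = 100, 5, 50, 10_000        # illustrative values with k << d, N, T

naive  = T * np.sqrt(d * N)                             # independent bandits: ~ T sqrt(dN)
shared = T * np.sqrt(k * N) + np.sqrt(d * k * T * N)    # representation:     ~ T sqrt(kN) + sqrt(dkTN)

print(f"naive  ~ {naive:.0f}")            # ~ 50000
print(f"shared ~ {shared:.0f}")           # ~ 27000
print(f"ratio  ~ {naive / shared:.2f}")   # ~ 1.85; the gap widens as d/k and T grow
```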
Impact of Representation Learning in Linear Bandits | √ kN + √ dkNT ) regret , where N is the number of rounds we play for each bandit . When T is sufficiently large , our algorithm significantly outperforms the naive algorithm ( playing T bandits independently ) that achieves Õ ( T √ dN ) regret . We also provide an Ω ( T √ kN + √ dkNT ) regret lower bound , showing that our algorithm is minimax-optimal up to poly-logarithmic factors . Furthermore , we extend our algorithm to the infinite-action setting and obtain a corresponding regret bound which demonstrates the benefit of representation learning in certain regimes . We also present experiments on synthetic and realworld data to illustrate our theoretical findings and demonstrate the effectiveness of our proposed algorithms . 1 INTRODUCTION . This paper investigates the benefit of using representation learning for sequential decision-making problems . Representation learning learns a joint low-dimensional embedding ( feature extractor ) from different but related tasks and then uses a simple function ( often a linear one ) on top of the embedding ( Baxter , 2000 ; Caruana , 1997 ; Li et al. , 2010 ) The mechanism behind is that since the tasks are related , we can extract the common information more efficiently than treating each task independently . Empirically , representation learning has become a popular approach for improving sample efficiency across various machine learning tasks ( Bengio et al. , 2013 ) . In particular , recently , representation learning has become increasingly more popular in sequential decision-making problems ( Teh et al. , 2017 ; Taylor & Stone , 2009 ; Lazaric & Restelli , 2011 ; Rusu et al. , 2015 ; Liu et al. , 2016 ; Parisotto et al. , 2015 ; Higgins et al. , 2017 ; Hessel et al. , 2019 ; Arora et al. , 2020 ; D ’ Eramo et al. , 2020 ) . For example , many sequential decision-making tasks share the same environment but have different reward functions . Thus a natural approach is to learn a succinct representation that describes the environment and then make decisions for different tasks on top of the learned representation . While representation learning is already widely applied in sequential decision-making problems empirically , its theoretical foundation is still limited . One important problem remains open : When does representation learning provably improve efficiency of sequential decision-making problems ? We take a step to characterize the benefit of representation learning in sequential decision-making problems . We tackle the above problem in the linear bandits setting , one of the most fundamental and popular settings in sequential decision-making problems . This model is widely used in applications as such clinical treatment , manufacturing process , job scheduling , recommendation systems , etc ( Dani et al. , 2008 ; Chu et al. , 2011 ) . We study the multi-task version of linear bandits , which naturally models the scenario where one needs to deal with multiple different but closely related sequential decision-making problems concurrently . We will mostly focus on the finite-action setting . Specifically , we have T tasks , each of which is governed by an unknown linear coefficient θt ∈ Rd . At the n-th round , for each task t ∈ [ T ] , the player chooses an action an , t that belongs to a finite set , and receive a reward rn , t with expectation E rn , t = 〈θt , xn , t , an , t〉 where xn , t , an , t represents the context of action an , t . 
For this problem , a straightforward approach is to treat each task independently , which leads to Õ ( T √ dN ) 1 total regret . Can we do better ? Clearly , if the tasks are independent , then by the classical Ω ( √ dN ) per task lower bound for linear bandit , it is impossible to do better . We investigate how representation learning can help if the tasks are related . Our main assumption is the existence of an unknown linear feature extractorB ∈ Rd×k with k d and a set of linear coefficients { wt } Tt=1 such that θt = Bwt . Under this assumption , the tasks are closely related as B is a shared linear feature extractor that maps the raw contexts xn , t , a ∈ Rd to a low-dimensional embedding B > xn , t , a ∈ Rk . In this paper , we focus on the regime where k d , N , T . This regime is common in real-world problems , e.g. , computer vision , where the input dimension is high , the number of data is large , many task are related , and there exists a low-dimension representation among these tasks that we can utilize . Problems with similar assumptions have been studied in the supervised learning setting ( Ando & Zhang , 2005 ) . However , to our knowledge , this formulation has not been studied in the bandit setting . Our Contributions We give the first rigorous characterization on the benefit of representation learning for multi-task linear bandits . Our contributions are summarized below . • We design a new algorithm for the aforementioned problem . Theoretically , we show our algorithm incurs Õ ( √ dkTN + T √ kN ) total regret in N rounds for all T tasks . Therefore , our algorithm outperforms the naive approach with O ( T √ dN ) regret . To our knowledge , this is the first theoretical result demonstrating the benefit of representation learning for bandits problems . • To complement our upper bound , we also provide an Ω ( √ dkTN+T √ kN ) lower bound , showing our regret bound is tight up to polylogarithmic factors . • We further design a new algorithm for the infinite-action setting , which has a regret Õ ( d1.5k √ TN + kT √ N ) , which outperforms the naive approach with O ( Td √ N ) regret in the regime where T = Ω̃ ( dk2 ) . • We provide simulations and an experiment on MNIST dataset to illustrate the effectiveness of our algorithms and the benefits of representation learning . Organization This paper is organized as follows . In Section 2 , we discuss related work . In Section 3 , we introduce necessary notation , formally set up our problem , and describe our assumptions . In Section 4 , we present our main algorithm for the finite-action setting and its performance guarantee . In Section 5 , we describe our algorithm and its theoretical guarantee for the infinite-action setting . In Section 6 , we provide simulation studies and real-world experiments to validate the effectiveness of our approach . We conclude in Section 7 and defer all proofs to the Appendix . 2 RELATED WORK . Here we mainly focus on related theoretical results . We refer readers to Bengio et al . ( 2013 ) for empirical results of using representation learning . For supervised learning , there is a long line of works on multi-task learning and representation learning with various assumptions ( Baxter , 2000 ; Ando & Zhang , 2005 ; Ben-David & Schuller , 2003 ; Maurer , 2006 ; Cavallanti et al. , 2010 ; Maurer et al. , 2016 ; Du et al. , 2020 ; Tripuraneni et al. , 2020 ) . 1Õ ( · ) omits logarithmic factors . All these results assumed the existence of a common representation shared among all tasks . 
However , this assumption alone is not sufficient . For example , Maurer et al . ( 2016 ) further assumed every task is i.i.d . drawn from an underlying distribution . Recently , Du et al . ( 2020 ) replaced the i.i.d . assumption with a deterministic assumption on the input distribution . Finally , it is worth mentioning that Tripuraneni et al . ( 2020 ) gave the method-of-moments estimator and built the confidence ball for the feature extractor , which inspired our algorithm for the infinite-action setting . The benefit of representation learning has been studied in sequential decision-making problems , especially in reinforcement learning domains . D ’ Eramo et al . ( 2020 ) showed that representation learning can improve the rate of approximate value iteration algorithm . Arora et al . ( 2020 ) proved that representation learning can reduce the sample complexity of imitation learning . Both works require a probabilistic assumption similar to that in ( Maurer et al. , 2016 ) and the statistical rates are of similar forms as those in ( Maurer et al. , 2016 ) . We remark that representation learning is also closely connected to meta-learning ( Schaul & Schmidhuber , 2010 ) . Raghu et al . ( 2019 ) empirically suggested that the effectiveness of metalearning is due to its ability to learn a useful representation . There is a line of works that analyzed the theoretical properties of meta-learning ( Denevi et al. , 2019 ; Finn et al. , 2019 ; Khodak et al. , 2019 ; Lee et al. , 2019 ; Bertinetto et al. , 2018 ) . We also note that there are analyses for other representation learning schemes ( Arora et al. , 2019 ; McNamara & Balcan , 2017 ; Galanti et al. , 2016 ; Alquier et al. , 2016 ; Denevi et al. , 2018 ) . Linear bandits ( stochastic linear bandits / linearly parameterized bandits / contextual linear bandits ) have been studied in recent years ( Auer , 2002 ; Dani et al. , 2008 ; Rusmevichientong & Tsitsiklis , 2010 ; Abbasi-Yadkori et al. , 2011 ; Chu et al. , 2011 ; Li et al. , 2019a ; b ) . The studies are divided into two branches according to whether the action set is finite or infinite . For the finite-action setting , Θ̃ ( √ dN ) has been shown to be the near-optimal regret bound ( Chu et al. , 2011 ; Li et al. , 2019a ) , and for the infinite-action setting , Θ̃ ( d √ N ) regret bound has been shown to be near-optimal ( Dani et al. , 2008 ; Rusmevichientong & Tsitsiklis , 2010 ; Li et al. , 2019b ) . Some previous work studied the impact of low-rank structure in linear bandit . Lale et al . ( 2019 ) studied a setting where the context vectors share a low-rank structure . Specifically , in their setting , the context vectors consist of two parts , i.e . x̂ = x+ψ , so that x is from a hidden low-rank subspace and ψ is i.i.d . drawn from an isotropic distribution . Jun et al . ( 2019 ) and Lu et al . ( 2020 ) studied the bilinear bandits with low-rank structure . In their setting , the player chooses two actions x , y and receives the stochastic reward with mean x > Θy , where Θ is an unknown low-rank bilinear form . The algorithms proposed in the aforementioned papers share some similarities with our Algorithm 2 for our infinite-action setting , in that both used Davis-Kahan theorem to recover and exploit the low-rank structure . Some previous work proposed multi-task bandits with different settings . Deshmukh et al . ( 2017 ) proposed a setting under the contextual bandit framework . They assumed similarities among arms . Bastani et al . 
( 2019 ) studied a setting where the coefficients of the tasks were drawn from a Gaussian distribution fixed across tasks and proposed an algorithm based on Thompson sampling . Soare et al . ( 2018 ) proposed a setting where tasks were played one by one sequentially and the coefficients of the tasks were close in ℓ2 distance . In our setting , the tasks are played simultaneously and the coefficients share a common linear feature extractor . | This paper theoretically studies the benefits of representation learning in linear bandit problems. The key assumption is the existence of a common linear feature extractor. Two different settings are studied. In the finite-action setting, the authors provide the MLinGreedy algorithm that achieves matching upper and lower bounds (up to polylog factors). In the infinite-action setting, the authors provide the $E^2TC$ algorithm that can achieve lower regret than the naive method when the number of tasks is large. Experiments on both synthetic and real-world data are conducted, which confirm the theoretical results. | SP:0fa4963f1d57d48a6271fe726358d204b1e286e8 |
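The way the low-rank structure is exploited in the infinite-action algorithms referenced above can be illustrated with a generic recovery sketch. The code below is purely hypothetical: it estimates the parameter matrix, takes its top-k left singular subspace as an estimate of the column span of B, and measures the subspace error, in the spirit of the Davis-Kahan argument mentioned earlier; it is not the paper's E^2TC procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, T = 60, 3, 40
B, _ = np.linalg.qr(rng.normal(size=(d, k)))
Theta = B @ rng.normal(size=(k, T))                    # true low-rank parameter matrix

Theta_hat = Theta + 0.05 * rng.normal(size=(d, T))     # noisy per-task estimates

U, _, _ = np.linalg.svd(Theta_hat)
B_hat = U[:, :k]                                       # top-k left singular vectors

# sin-theta (subspace) distance between span(B) and span(B_hat):
print(np.linalg.norm(B @ B.T - B_hat @ B_hat.T, 2))    # small when the estimation noise is small
```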
DOP: Off-Policy Multi-Agent Decomposed Policy Gradients | 1 INTRODUCTION . Cooperative multi-agent reinforcement learning ( MARL ) has achieved great progress in recent years ( Hughes et al. , 2018 ; Jaques et al. , 2019 ; Vinyals et al. , 2019 ; Zhang et al. , 2019 ; Baker et al. , 2020 ; Wang et al. , 2020c ) . Advances in valued-based MARL ( Sunehag et al. , 2018 ; Rashid et al. , 2018 ; Son et al. , 2019 ; Wang et al. , 2020e ) contribute significantly to the progress , achieving state-ofthe-art performance on challenging tasks , such as StarCraft II micromanagement ( Samvelyan et al. , 2019 ) . However , these value-based methods present a major challenge for stability and convergence in multi-agent settings ( Wang et al. , 2020a ) , which is further exacerbated in continuous action spaces . Policy gradient methods hold great promise to resolve these challenges . MADDPG ( Lowe et al. , 2017 ) and COMA ( Foerster et al. , 2018 ) are two representative methods that adopt the paradigm of centralized critic with decentralized actors ( CCDA ) , which not only deals with the issue of nonstationarity ( Foerster et al. , 2017 ; Hernandez-Leal et al. , 2017 ) by conditioning the centralized critic on global history and actions but also maintains scalable decentralized execution via conditioning policies on local history . Several subsequent works make improvements to the CCDA framework by introducing the mechanism of recursive reasoning ( Wen et al. , 2019 ) or attention ( Iqbal & Sha , 2019 ) . Despite the progress , most of the multi-agent policy gradient ( MAPG ) methods do not provide satisfying performance , e.g. , significantly underperforming value-based methods on benchmark tasks ( Samvelyan et al. , 2019 ) . In this paper , we analyze this discrepancy and pinpoint three major issues that hinder the performance of MAPG methods . ( 1 ) Current stochastic MAPG methods do not support off-policy learning , partly because using common off-policy learning techniques is computationally expensive in multi-agent settings . ( 2 ) In the CCDA paradigm , the suboptimality of one agent ’ s policy can propagate through the centralized joint critic and negatively affect policy learning of other agents , causing catastrophic miscoordination , which we call centralized-decentralized mismatch ( CDM ) . ( 3 ) For deterministic MAPG methods , realizing efficient credit assignment ( Tumer et al. , 2002 ; Agogino & Tumer , 2004 ) with a single global reward signal largely remains challenging . ∗Equal Contribution . Listing order is random . In this paper , we find that these problems can be addressed by introducing the idea of value decomposition into the multi-agent actor-critic framework and learning a centralized but factorized critic . This framework decomposes the centralized critic as a weighted linear summation of individual critics that condition on local actions . This decomposition structure not only enables scalable learning on the critic , but also brings several benefits . It enables tractable off-policy evaluations of stochastic policies , attenuates the CDM issues , and also implicitly learns an efficient multi-agent credit assignment . Based on this decomposition , we develop efficient off-policy multi-agent decomposed policy gradient methods for both discrete and continuous action spaces . A drawback of an linearly decomposed critic is its limited representational capacity ( Wang et al. , 2020b ) , which may induce bias in value estimations . 
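To make the proposed decomposition concrete, below is a minimal PyTorch sketch of a critic of the form Qtot ( τ , a ) = Σi ki ( τ ) Qi ( τi , ai ) + b ( τ ) with non-negative weights ki. The module names, layer sizes, the use of local observations in place of full histories, and the way ki and b are produced are illustrative assumptions, not the exact architecture of the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearlyDecomposedCritic(nn.Module):
    """Q_tot(tau, a) = sum_i k_i(tau) * Q_i(tau_i, a_i) + b(tau), with k_i >= 0."""

    def __init__(self, n_agents, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.n_agents = n_agents
        # Per-agent critics over local inputs, one Q value per local action.
        self.agent_qs = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))
            for _ in range(n_agents)
        ])
        # Mixing coefficients k_i(tau) >= 0 and bias b(tau) from the joint information.
        self.mix = nn.Sequential(nn.Linear(n_agents * obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_agents + 1))

    def forward(self, obs, actions):
        # obs: (batch, n_agents, obs_dim); actions: (batch, n_agents) integer indices.
        q_i = torch.stack(
            [self.agent_qs[i](obs[:, i]).gather(1, actions[:, i:i + 1]).squeeze(1)
             for i in range(self.n_agents)], dim=1)                 # (batch, n_agents)
        mix_out = self.mix(obs.flatten(1))
        k = F.softplus(mix_out[:, :self.n_agents])                  # non-negative weights k_i(tau)
        b = mix_out[:, self.n_agents]                               # bias b(tau)
        return (k * q_i).sum(dim=1) + b                             # Q_tot, shape (batch,)

critic = LinearlyDecomposedCritic(n_agents=3, obs_dim=10, n_actions=5)
obs = torch.randn(4, 3, 10)
actions = torch.randint(0, 5, (4, 3))
print(critic(obs, actions).shape)                                   # torch.Size([4])
```

Because Qtot is linear in each Qi, each agent's contribution to the joint value can be read off locally, which is the structural property the paper exploits for scalable critic learning, tractable off-policy evaluation, and attenuating CDM.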
However , we show that this bias does not violate the policy improvement guarantee of policy gradient methods and that using decomposed critics can largely reduce the variance in policy updates . In this way , a decomposed critic achieves a great bias-variance trade-off . We evaluate our methods on both the StarCraft II micromanagement benchmark ( Samvelyan et al. , 2019 ) ( discrete action spaces ) and multi-agent particle environments ( Lowe et al. , 2017 ; Mordatch & Abbeel , 2018 ) ( continuous action spaces ) . Empirical results show that DOP is very stable across different runs and outperforms other MAPG algorithms by a wide margin . Moreover , to our best knowledge , stochastic DOP provides the first MAPG method that outperforms state-of-the-art valuedbased methods in discrete-action benchmark tasks . Related works on value decomposition methods . In value-based MARL , value decomposition ( Guestrin et al. , 2002b ; Castellini et al. , 2019 ) is widely used . These methods learn local Q-value functions for each agent , which are combined with a learnable mixing function to produce global action values . In VDN ( Sunehag et al. , 2018 ) , the mixing function is an arithmetic summation . QMIX ( Rashid et al. , 2018 ; 2020 ) proposes a non-linear monotonic factorization structure . QTRAN ( Son et al. , 2019 ) and QPLEX ( Wang et al. , 2020b ) further extend the class of value functions that can be represented . NDQ ( Wang et al. , 2020e ) addresses the miscoordination problem by learning nearly decomposable architectures . A concurrent work ( de Witt et al. , 2020 ) finds that a decomposed centralized critic in QMIX style can improve the performance of MADDPG for learning in continuous action spaces . In this paper , we study how and why linear value decomposition can enable efficient policy-based learning in both discrete and continuous action spaces . In Appendix F , we discuss how DOP is related to recent progress in multi-agent reinforcement learning and provide detailed comparisons with existing multi-agent policy gradient methods . 2 BACKGROUND . We consider fully cooperative multi-agent tasks that can be modelled as a Dec-POMDP ( Oliehoek et al. , 2016 ) G=〈I , S , A , P , R , Ω , O , n , γ〉 , where I is the finite set of agents , γ ∈ [ 0 , 1 ) is the discount factor , and s ∈ S is the true state of the environment . At each timestep , each agent i receives an observation oi ∈ Ω drawn according to the observation function O ( s , i ) and selects an action ai ∈ A , forming a joint action a ∈ An , leading to a next state s′ according to the transition function P ( s′|s , a ) and a reward r = R ( s , a ) shared by all agents . Each agent learns a policy πi ( ai|τi ; θi ) , which is parameterized by θi and conditioned on the local history τi ∈ T ≡ ( Ω × A ) ∗ . The joint policy π , with parameters θ = 〈θ1 , · · · , θn〉 , induces a joint action-value function : Qπtot ( τ , a ) =Es0 : ∞ , a0 : ∞ [ ∑∞ t=0 γ tR ( st , at ) | s0=s , a0=a , π ] . We consider both discrete and continuous action spaces , for which stochastic and deterministic policies are learned , respectively . To distinguish deterministic policies , we denote them by µ = 〈µ1 , · · · , µn〉 . Multi-Agent Policy Gradients The centralized training with decentralized execution ( CTDE ) paradigm ( Foerster et al. , 2016 ; Wang et al. , 2020d ) has recently attracted attention for its ability to address non-stationarity while maintaining decentralized execution . 
Learning a centralized critic with decentralized actors ( CCDA ) is an efficient approach that exploits the CTDE paradigm . MADDPG and COMA are two representative examples . MADDPG ( Lowe et al. , 2017 ) learns deterministic policies in continuous action spaces and uses the following gradients to update policies : g = Eτ , a∼D [ ∑ i ∇θiµi ( τi ) ∇aiQ µ tot ( τ , a ) |ai=µi ( τi ) ] , ( 1 ) and D is a replay buffer . COMA ( Foerster et al. , 2018 ) updates stochastic policies using the gradients : g = Eπ [ ∑ i ∇θi log πi ( ai|τi ) Aπi ( τ , a ) ] , ( 2 ) where Aπi ( τ , a ) = Q π tot ( τ , a ) − ∑ a′i Qπtot ( τ , ( a-i , a ′ i ) ) is a counterfactual advantage ( a-i is the joint action other than agent i ) that deals with the issue of credit assignment and reduces variance . 3 ANALYSIS . In this section , we investigate challenges that limit the performance of state-of-the-art multi-agent policy gradient methods . 3.1 OFF-POLICY LEARNING FOR MULTI-AGENT STOCHASTIC POLICY GRADIENTS . Efficient stochastic policy learning in single-agent settings relies heavily on using off-policy data ( Lillicrap et al. , 2015 ; Wang et al. , 2016 ; Fujimoto et al. , 2018 ; Haarnoja et al. , 2018 ) , which is not supported by existing stochastic MAPG methods ( Foerster et al. , 2018 ) . In the CCDA framework , off-policy policy evaluation—estimating Qπtot from data drawn from behavior policies β = 〈β1 , . . . , βn〉—encounters major challenges . Importance sampling ( Meuleau et al. , 2000 ; Jie & Abbeel , 2010 ; Levine & Koltun , 2013 ) is a simple way to correct for the discrepancy between π and β , but , it requires computing ∏ i πi ( ai|τi ) βi ( ai|τi ) , whose variance grows exponentially with the number of agents in multi-agent settings . An alternative is to extend the tree backup technique ( Precup et al. , 2000 ; Munos et al. , 2016 ) to multi-agent settings and use the k-step tree backup update target for training the critic : yTB = Qπtot ( τ , a ) + k−1∑ t=0 γt ( t∏ l=1 λπ ( al|τl ) ) [ rt + γEπ [ Qπtot ( τt+1 , · ) ] −Qπtot ( τt , at ) ] , ( 3 ) where τ0 = τ , a0 = a . However , the complexity of computing Eπ [ Qπtot ( τt+1 , · ) ] is O ( |A|n ) , which becomes intractable when the number of agents is large . Therefore , it is challenging to develop off-policy stochastic MAPG methods . 3.2 THE CENTRALIZED-DECENTRALIZED MISMATCH ISSUE . In the centralized critic with decentralized actors ( CCDA ) framework , agents learn individual policies , πi ( ai|τi ; θi ) , conditioned on the local observation-action history . However , the gradients for updating these policies are dependent on the centralized joint critic , Qπtot ( τ , a ) ( see Eq . 1 and 2 ) , which introduces the influence of actions of other agents . Intuitively , gradient updates will move an agent in the direction that can increase the global Q value , but the presence of other agents ’ actions incurs large variance in the estimates of such directions . Formally , the variance of policy gradients for agent i at ( τi , ai ) is dependent on other agents ’ actions : Vara-i∼π-i [ Q π tot ( τ , ( ai , a-i ) ) ∇θi log πi ( ai|τi ) ] =Vara-i∼π-i [ Q π tot ( τ , ( ai , a-i ) ) ] ( ∇θi log πi ( ai|τi ) ) ( ∇θi log πi ( ai|τi ) ) T , ( 4 ) where Vara-i [ Q π tot ( τ , ( ai , a-i ) ) ] can be very large due to the exploration or suboptimality of other agents ’ policies , which may cause suboptimality in individual policies . For example , suppose that the optimal joint action under τ is a∗= 〈a∗1 , . . . , a∗n〉 . 
When Ea-i∼π-i [ Qπtot ( τ , ( a∗i , a-i ) ) ] < 0 , πi ( a∗i |τi ) will decrease , possibly resulting in a suboptimal πi . This becomes problematic because a negative feedback loop is created , in which the joint critic is affected by the suboptimality of agent i , which in turn disturbs the policy updates of the other agents . We call this issue centralized-decentralized mismatch ( CDM ) . Does CDM occur in practice for state-of-the-art algorithms ? To answer this question , we carry out a case study in Sec . 5.1 . We can see that the variance of DOP gradients is significantly smaller than that of COMA and MADDPG ( Fig . 2 left ) . This smaller variance enables DOP to outperform the other algorithms ( Fig . 2 middle ) . We will explain this didactic example in detail in Sec . 5.1 . In Sec . 5.2 and 5.3 , we further show that CDM is exacerbated in sequential decision-making settings , causing divergence even after a near-optimal strategy has been learned . | In the context of centralized training with distributed execution in cooperative multi-agent reinforcement learning (MARL), the paper proposes an architecture to learn a decomposed action-value function expressed as a weighted sum of the agents' individual functions (plus an additional weight). Those weights are themselves learned and depend on the observed history. Thanks to this decomposition, gradients can be decomposed over each agent. The authors propose to use a combination of off-policy (using tree backup) and on-policy (using TD(\lambda)) methods for estimating the decomposed critic. They formulate both deterministic and stochastic decomposed policy gradients, which are analyzed theoretically to some extent and evaluated experimentally. | SP:a0cbc9dde2539645b847f40af560afe953f001ee |
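The variance term in Eq. ( 4 ) and the negative-expectation failure mode described above can be reproduced in a toy one-step matrix game; the payoff matrix and the exploring policy of the second agent below are purely illustrative assumptions.

```python
import numpy as np

# A 2-agent, one-step cooperative game with joint value Q_tot(a1, a2) (illustrative payoffs).
Q = np.array([[  8., -12., -12.],
              [-12.,   0.,   0.],
              [-12.,   0.,   6.]])
a1_star = 0                                  # the optimal joint action is (0, 0)

pi2 = np.array([0.4, 0.3, 0.3])              # agent 2 still explores / is suboptimal

# Quantities entering agent 1's gradient at its optimal action a1* (cf. Eq. (4)):
mean = pi2 @ Q[a1_star]                      # E_{a2 ~ pi2}[Q_tot(a1*, a2)]
var  = pi2 @ (Q[a1_star] - mean) ** 2        # Var_{a2 ~ pi2}[Q_tot(a1*, a2)]
print(mean, var)                             # mean = -4.0 < 0 although a1* is optimal; var = 96.0
```

With this negative mean, a vanilla centralized-critic policy gradient pushes π1 ( a1∗ ) down, which is exactly the feedback loop described above.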
DOP: Off-Policy Multi-Agent Decomposed Policy Gradients | 1 INTRODUCTION . Cooperative multi-agent reinforcement learning ( MARL ) has achieved great progress in recent years ( Hughes et al. , 2018 ; Jaques et al. , 2019 ; Vinyals et al. , 2019 ; Zhang et al. , 2019 ; Baker et al. , 2020 ; Wang et al. , 2020c ) . Advances in valued-based MARL ( Sunehag et al. , 2018 ; Rashid et al. , 2018 ; Son et al. , 2019 ; Wang et al. , 2020e ) contribute significantly to the progress , achieving state-ofthe-art performance on challenging tasks , such as StarCraft II micromanagement ( Samvelyan et al. , 2019 ) . However , these value-based methods present a major challenge for stability and convergence in multi-agent settings ( Wang et al. , 2020a ) , which is further exacerbated in continuous action spaces . Policy gradient methods hold great promise to resolve these challenges . MADDPG ( Lowe et al. , 2017 ) and COMA ( Foerster et al. , 2018 ) are two representative methods that adopt the paradigm of centralized critic with decentralized actors ( CCDA ) , which not only deals with the issue of nonstationarity ( Foerster et al. , 2017 ; Hernandez-Leal et al. , 2017 ) by conditioning the centralized critic on global history and actions but also maintains scalable decentralized execution via conditioning policies on local history . Several subsequent works make improvements to the CCDA framework by introducing the mechanism of recursive reasoning ( Wen et al. , 2019 ) or attention ( Iqbal & Sha , 2019 ) . Despite the progress , most of the multi-agent policy gradient ( MAPG ) methods do not provide satisfying performance , e.g. , significantly underperforming value-based methods on benchmark tasks ( Samvelyan et al. , 2019 ) . In this paper , we analyze this discrepancy and pinpoint three major issues that hinder the performance of MAPG methods . ( 1 ) Current stochastic MAPG methods do not support off-policy learning , partly because using common off-policy learning techniques is computationally expensive in multi-agent settings . ( 2 ) In the CCDA paradigm , the suboptimality of one agent ’ s policy can propagate through the centralized joint critic and negatively affect policy learning of other agents , causing catastrophic miscoordination , which we call centralized-decentralized mismatch ( CDM ) . ( 3 ) For deterministic MAPG methods , realizing efficient credit assignment ( Tumer et al. , 2002 ; Agogino & Tumer , 2004 ) with a single global reward signal largely remains challenging . ∗Equal Contribution . Listing order is random . In this paper , we find that these problems can be addressed by introducing the idea of value decomposition into the multi-agent actor-critic framework and learning a centralized but factorized critic . This framework decomposes the centralized critic as a weighted linear summation of individual critics that condition on local actions . This decomposition structure not only enables scalable learning on the critic , but also brings several benefits . It enables tractable off-policy evaluations of stochastic policies , attenuates the CDM issues , and also implicitly learns an efficient multi-agent credit assignment . Based on this decomposition , we develop efficient off-policy multi-agent decomposed policy gradient methods for both discrete and continuous action spaces . A drawback of an linearly decomposed critic is its limited representational capacity ( Wang et al. , 2020b ) , which may induce bias in value estimations . 
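One way to see why a linearly decomposed critic localizes the stochastic policy gradient is the standard score-function argument below. It is a sketch under the assumption Qπtot ( τ , a ) = Σj kj ( τ ) Qj ( τ , aj ) + b ( τ ), with kj and b independent of the actions and with policies that factorize across agents given τ; it is meant only to illustrate the mechanism, not to restate the paper's exact derivation.

```latex
\begin{align*}
g_i &= \mathbb{E}_{\pi}\Big[\nabla_{\theta_i} \log \pi_i(a_i \mid \tau_i)\, Q^{\pi}_{tot}(\tau, a)\Big] \\
    &= \mathbb{E}_{\pi}\Big[\nabla_{\theta_i} \log \pi_i(a_i \mid \tau_i)\Big(\sum\nolimits_j k_j(\tau)\, Q_j(\tau, a_j) + b(\tau)\Big)\Big] \\
    &= \mathbb{E}_{\pi}\Big[\nabla_{\theta_i} \log \pi_i(a_i \mid \tau_i)\, k_i(\tau)\, Q_i(\tau, a_i)\Big],
\end{align*}
```

because every term with j ≠ i (and the bias b ( τ )) does not depend on ai and therefore multiplies E_{ai∼πi} [ ∇θi log πi ( ai|τi ) ] = 0. Each agent's update then involves the joint behaviour only through the scalar weight ki ( τ ) and its own local critic Qi.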
However , we show that this bias does not violate the policy improvement guarantee of policy gradient methods and that using decomposed critics can largely reduce the variance in policy updates . In this way , a decomposed critic achieves a great bias-variance trade-off . We evaluate our methods on both the StarCraft II micromanagement benchmark ( Samvelyan et al. , 2019 ) ( discrete action spaces ) and multi-agent particle environments ( Lowe et al. , 2017 ; Mordatch & Abbeel , 2018 ) ( continuous action spaces ) . Empirical results show that DOP is very stable across different runs and outperforms other MAPG algorithms by a wide margin . Moreover , to our best knowledge , stochastic DOP provides the first MAPG method that outperforms state-of-the-art valuedbased methods in discrete-action benchmark tasks . Related works on value decomposition methods . In value-based MARL , value decomposition ( Guestrin et al. , 2002b ; Castellini et al. , 2019 ) is widely used . These methods learn local Q-value functions for each agent , which are combined with a learnable mixing function to produce global action values . In VDN ( Sunehag et al. , 2018 ) , the mixing function is an arithmetic summation . QMIX ( Rashid et al. , 2018 ; 2020 ) proposes a non-linear monotonic factorization structure . QTRAN ( Son et al. , 2019 ) and QPLEX ( Wang et al. , 2020b ) further extend the class of value functions that can be represented . NDQ ( Wang et al. , 2020e ) addresses the miscoordination problem by learning nearly decomposable architectures . A concurrent work ( de Witt et al. , 2020 ) finds that a decomposed centralized critic in QMIX style can improve the performance of MADDPG for learning in continuous action spaces . In this paper , we study how and why linear value decomposition can enable efficient policy-based learning in both discrete and continuous action spaces . In Appendix F , we discuss how DOP is related to recent progress in multi-agent reinforcement learning and provide detailed comparisons with existing multi-agent policy gradient methods . 2 BACKGROUND . We consider fully cooperative multi-agent tasks that can be modelled as a Dec-POMDP ( Oliehoek et al. , 2016 ) G=〈I , S , A , P , R , Ω , O , n , γ〉 , where I is the finite set of agents , γ ∈ [ 0 , 1 ) is the discount factor , and s ∈ S is the true state of the environment . At each timestep , each agent i receives an observation oi ∈ Ω drawn according to the observation function O ( s , i ) and selects an action ai ∈ A , forming a joint action a ∈ An , leading to a next state s′ according to the transition function P ( s′|s , a ) and a reward r = R ( s , a ) shared by all agents . Each agent learns a policy πi ( ai|τi ; θi ) , which is parameterized by θi and conditioned on the local history τi ∈ T ≡ ( Ω × A ) ∗ . The joint policy π , with parameters θ = 〈θ1 , · · · , θn〉 , induces a joint action-value function : Qπtot ( τ , a ) =Es0 : ∞ , a0 : ∞ [ ∑∞ t=0 γ tR ( st , at ) | s0=s , a0=a , π ] . We consider both discrete and continuous action spaces , for which stochastic and deterministic policies are learned , respectively . To distinguish deterministic policies , we denote them by µ = 〈µ1 , · · · , µn〉 . Multi-Agent Policy Gradients The centralized training with decentralized execution ( CTDE ) paradigm ( Foerster et al. , 2016 ; Wang et al. , 2020d ) has recently attracted attention for its ability to address non-stationarity while maintaining decentralized execution . 
Learning a centralized critic with decentralized actors ( CCDA ) is an efficient approach that exploits the CTDE paradigm . MADDPG and COMA are two representative examples . MADDPG ( Lowe et al. , 2017 ) learns deterministic policies in continuous action spaces and uses the following gradients to update policies : g = Eτ , a∼D [ ∑ i ∇θiµi ( τi ) ∇aiQ µ tot ( τ , a ) |ai=µi ( τi ) ] , ( 1 ) and D is a replay buffer . COMA ( Foerster et al. , 2018 ) updates stochastic policies using the gradients : g = Eπ [ ∑ i ∇θi log πi ( ai|τi ) Aπi ( τ , a ) ] , ( 2 ) where Aπi ( τ , a ) = Q π tot ( τ , a ) − ∑ a′i Qπtot ( τ , ( a-i , a ′ i ) ) is a counterfactual advantage ( a-i is the joint action other than agent i ) that deals with the issue of credit assignment and reduces variance . 3 ANALYSIS . In this section , we investigate challenges that limit the performance of state-of-the-art multi-agent policy gradient methods . 3.1 OFF-POLICY LEARNING FOR MULTI-AGENT STOCHASTIC POLICY GRADIENTS . Efficient stochastic policy learning in single-agent settings relies heavily on using off-policy data ( Lillicrap et al. , 2015 ; Wang et al. , 2016 ; Fujimoto et al. , 2018 ; Haarnoja et al. , 2018 ) , which is not supported by existing stochastic MAPG methods ( Foerster et al. , 2018 ) . In the CCDA framework , off-policy policy evaluation—estimating Qπtot from data drawn from behavior policies β = 〈β1 , . . . , βn〉—encounters major challenges . Importance sampling ( Meuleau et al. , 2000 ; Jie & Abbeel , 2010 ; Levine & Koltun , 2013 ) is a simple way to correct for the discrepancy between π and β , but , it requires computing ∏ i πi ( ai|τi ) βi ( ai|τi ) , whose variance grows exponentially with the number of agents in multi-agent settings . An alternative is to extend the tree backup technique ( Precup et al. , 2000 ; Munos et al. , 2016 ) to multi-agent settings and use the k-step tree backup update target for training the critic : yTB = Qπtot ( τ , a ) + k−1∑ t=0 γt ( t∏ l=1 λπ ( al|τl ) ) [ rt + γEπ [ Qπtot ( τt+1 , · ) ] −Qπtot ( τt , at ) ] , ( 3 ) where τ0 = τ , a0 = a . However , the complexity of computing Eπ [ Qπtot ( τt+1 , · ) ] is O ( |A|n ) , which becomes intractable when the number of agents is large . Therefore , it is challenging to develop off-policy stochastic MAPG methods . 3.2 THE CENTRALIZED-DECENTRALIZED MISMATCH ISSUE . In the centralized critic with decentralized actors ( CCDA ) framework , agents learn individual policies , πi ( ai|τi ; θi ) , conditioned on the local observation-action history . However , the gradients for updating these policies are dependent on the centralized joint critic , Qπtot ( τ , a ) ( see Eq . 1 and 2 ) , which introduces the influence of actions of other agents . Intuitively , gradient updates will move an agent in the direction that can increase the global Q value , but the presence of other agents ’ actions incurs large variance in the estimates of such directions . Formally , the variance of policy gradients for agent i at ( τi , ai ) is dependent on other agents ’ actions : Vara-i∼π-i [ Q π tot ( τ , ( ai , a-i ) ) ∇θi log πi ( ai|τi ) ] =Vara-i∼π-i [ Q π tot ( τ , ( ai , a-i ) ) ] ( ∇θi log πi ( ai|τi ) ) ( ∇θi log πi ( ai|τi ) ) T , ( 4 ) where Vara-i [ Q π tot ( τ , ( ai , a-i ) ) ] can be very large due to the exploration or suboptimality of other agents ’ policies , which may cause suboptimality in individual policies . For example , suppose that the optimal joint action under τ is a∗= 〈a∗1 , . . . , a∗n〉 . 
When Ea-i∼π-i [ Qπtot ( τ , ( a∗i , a-i ) ) ] < 0 , πi ( a∗i |τi ) will decrease , possibly resulting in a suboptimal πi . This becomes problematic because a negative feedback loop is created , in which the joint critic is affected by the suboptimality of agent i , which in turn disturbs the policy updates of the other agents . We call this issue centralized-decentralized mismatch ( CDM ) . Does CDM occur in practice for state-of-the-art algorithms ? To answer this question , we carry out a case study in Sec . 5.1 . We can see that the variance of DOP gradients is significantly smaller than that of COMA and MADDPG ( Fig . 2 left ) . This smaller variance enables DOP to outperform the other algorithms ( Fig . 2 middle ) . We will explain this didactic example in detail in Sec . 5.1 . In Sec . 5.2 and 5.3 , we further show that CDM is exacerbated in sequential decision-making settings , causing divergence even after a near-optimal strategy has been learned . | This paper focuses on the problem of multi-agent reinforcement learning (MARL) for the CTDE scenario, which is well studied in the recent literature. The work discusses shortcomings of actor-critic methods for MARL and proposes a solution using a linearly factored critic. The paper is somewhat difficult to read and can be made better by deferring the details about previous methods to the appendix. However, my main concern is with the problem of centralized-decentralized mismatch (CDM) motivated in the paper and with its proposed solution itself. | SP:a0cbc9dde2539645b847f40af560afe953f001ee |
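The claim above that the variance of the joint importance ratio ∏i πi ( ai|τi ) / βi ( ai|τi ) grows exponentially with the number of agents can be checked with a small simulation; the two-action per-agent policies below are illustrative choices made only to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_ratio_variance(n_agents, n_samples=200_000):
    """Monte-Carlo variance of prod_i pi_i(a_i)/beta_i(a_i), agents acting independently."""
    pi, beta = np.array([0.8, 0.2]), np.array([0.5, 0.5])    # target and behaviour policies
    a = rng.integers(2, size=(n_samples, n_agents))          # actions drawn from beta (uniform)
    ratios = (pi[a] / beta[a]).prod(axis=1)                  # joint importance weights
    return ratios.var()

for n in (1, 2, 4, 8, 16):
    print(n, round(joint_ratio_variance(n), 2))
# The exact variance for this construction is 1.36**n - 1, i.e. exponential in the number of agents.
```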
Shapley Explanation Networks | Shapley values have become one of the most popular feature attribution explanation methods . However , most prior work has focused on post-hoc Shapley explanations , which can be computationally demanding due to its exponential time complexity and preclude model regularization based on Shapley explanations during training . Thus , we propose to incorporate Shapley values themselves as latent representations in deep models—thereby making Shapley explanations first-class citizens in the modeling paradigm . This intrinsic explanation approach enables layer-wise explanations , explanation regularization of the model during training , and fast explanation computation at test time . We define the Shapley transform that transforms the input into a Shapley representation given a specific function . We operationalize the Shapley transform as a neural network module and construct both shallow and deep networks , called SHAPNETs , by composing Shapley modules . We prove that our Shallow SHAPNETs compute the exact Shapley values and our Deep SHAPNETs maintain the missingness and accuracy properties of Shapley values . We demonstrate on synthetic and real-world datasets that our SHAPNETs enable layer-wise Shapley explanations , novel Shapley regularizations during training , and fast computation while maintaining reasonable performance . Code is available at https : //github.com/inouye-lab/ShapleyExplanationNetworks . 1 INTRODUCTION . Explaining the predictions of machine learning models has become increasingly important for many crucial applications such as healthcare , recidivism prediction , or loan assessment . Explanations based on feature importance are one key approach to explaining a model prediction . More specifically , additive feature importance explanations have become popular , and in Lundberg & Lee ( 2017 ) , the authors argue for theoretically-grounded additive explanation method called SHAP based on Shapley values—a way to assign credit to members of a group developed in cooperative game theory ( Shapley , 1953 ) . Lundberg & Lee ( 2017 ) defined three intuitive theoretical properties called local accuracy , missingness , and consistency , and proved that only SHAP explanations satisfy all three properties . Despite these elegant theoretically-grounded properties , exact Shapley value computation has exponential time complexity in the general case . To alleviate the computational issue , several methods have been proposed to approximate Shapley values via sampling ( Strumbelj & Kononenko , 2010 ) and weighted regression ( Kernel SHAP ) , a modified backpropagation step ( Deep SHAP ) ( Lundberg & Lee , 2017 ) , utilization of the expectation of summations ( Ancona et al. , 2019 ) , or making assumptions on underlying data structures ( Chen et al. , 2019 ) . To avoid approximation , the model class could be restricted to allow for simpler computation . Along this line , Lundberg et al . ( 2020 ) propose a method for computing exact Shapley values for tree-based models such as random forests or gradient boosted trees . However , even if this drawback is overcome , prior Shapley work has focused on post-hoc explanations , and thus , the explanation approach can not aid in model design or training . On the other hand , Generalized Additive Models ( GAM ) as explored in Lou et al . ( 2012 ; 2013 ) ; Caruana et al . ( 2015 ) ( via tree boosting ) , Chen et al . ( 2017a ) ( via kernel methods ) , and Wang et al . ( 2018 ) ; Agarwal et al . 
( 2020 ) ( via neural networks ) can be seen as an interpretable model class that exposes the exact Shapley explanation directly . In particular , the output of a GAM model is simply the summation of interaction-free functions : f_GAM ( x ) = ∑s fs ( xs ) , where fs ( · ) are univariate functions . [ Figure : illustration of the Shapley representation for two toy functions , f ( x1 , x2 ) = √ ( x1² + x2² ) − 1 ( "circle" ) and f ( x1 , x2 ) = x1 × sin ( 4 x2 ) ( "Linear × sin" ) ; for each function , the left panels show the input space ( x1 , x2 ) and the right panels the corresponding Shapley representation ( φ1 ( f , x ) , φ2 ( f , x ) ) . ]
values : ( 1 , 1 ) < latexit sha1_base64= '' Fi2qoCAxSadKqu9LvFFLL5Ggwxg= '' > AAAGhnicpVTLTttQEB1om6b0QWiX3VhESFQC10FFoK6idtNFFxQRQAKEbOcarPhV+5qCouz7JV122/5K/6D9i54ZOyQ0iUGqr+KM5845c+d1nSTwM21Zv+bm791/UHtYf7Tw+MnTZ4uNpef7WZynruq4cRCnh46dqcCPVEf7OlCHSars0AnUgdN7z/sHFyrN/Dja01eJOgnts8j3fNfWUJ02lo+1utR9eTtef1d5pnFhB7nKBm+Nwaq11np12mi2TEsew5oQhltNKp+deGn+Gx1Tl2JyKaeQFEWkIQdkU4Z1RC2yKIHuhPrQpZB82Vc0oAVgc1gpWNjQ9vA+w9dRqY3wzZyZoF14CfBLgTRoBZgYdilk9mbIfi7MrJ3F3RdOPtsV/p2SK4RW0zm0t+GGlnfHObCqjlWTR9sSo4+YE9Fw9G7pJZescWTGWNQaDAl0LHexn0J2BTmsgyGYTHLDubdl/7dYspa/3dI2pz+VUbDPQOL4n0jYZyBV/AK5sF8Hwhizn82/h25ibg/46AZzsVaEM5M8bYPXAStngXHGNaoqynN0RZFTzpE3xj3yMLIJ5NS9O+XNmLGYMQNPCGRR4Q7t0kepb+GHWbV0QATOqj6LsXrSFw4spp09wWlixKVkAnzJCFdknT6D1ZaI2K9RshT9c9tMjDI7zacj88koPiH3YRfa4eQY15PI013lK5Kbg6tSdPI0X3wHhCVST3AXPcDz0qLXtEFrkJTcV2aF31D8ct6GFRlM+P3XJgY2lfnkbpnNzXPLMp/wkgbj9+9sYX/DbG2a1qc3zfa78iau00taplXEtUVt+kA76CKXvtJ3+kE/a/WaWdusbRWm83Ml5gXdeGrtvwisPfw= < /latexit > Ref . values : ( 0 , 1 ) Figure 1 : Shapley representations span beyond one-dimensional manifolds and depend on both the inner function and the reference values . In both groups , the gray-scale background indicates the respective function value while the rainbow-scale color indicates correspondence between input ( left ) a d Shapley representation ( right ) along with function values—red means highest and purple means lowest function values . The red cross represents the reference values . More details in subsection 2.1. that can be arbitrarily complex . Interestingly , the Shapley explanation values , often denoted φs ( x , f ) , for a GAM are exactly the values of the independent function , i.e. , ∀s , φs ( x , fGAM ) = fs ( xs ) . Hence , the prediction and the corresponding exact SHAP explanation can be computed simultaneously for GAM models . However , GAM models are inherently limited in their representational power , particularly for perceptual data such as images in which deep networks are state-of-the-art . Thus , prior work is either post-hoc ( which precludes leveraging the method during training ) or limited in its representational power ( e.g. , GAMs ) . To overcome these drawbacks , we propose to incorporate Shapley values themselves as learned latent representations ( as opposed to post-hoc ) in deep models—thereby making Shapley explanations first-class citizens in the modeling paradigm . Intuitive illustrations of such representation are provided in Fig . 1 and detailed discussion in subsection 2.1 . We summarize our core contributions as follows : • We formally define the Shapley transform and prove a simple but useful linearity property for constructing networks . • We develop a novel network architecture , the SHAPNETs , that includes Shallow and Deep SHAPNETs and that intrinsically provides layer-wise explanations ( i.e. , explanations at every layer of the network ) in the same forward pass as the prediction . • We prove that Shallow SHAPNET explanations are the exact Shapley values—thus satisfying all three SHAP properties—and prove that Deep SHAPNET explanations maintain the missingness and local accuracy properties . • To reduce computation , we propose an instance-specific dynamic pruning method for Deep SHAPNETs that can skip unnecessary computation . • We enable explanation regularization based on Shapley values during training because the explanation is a latent representation in our model . 
• We demonstrate empirically that our SHAPNETs can provide these new capabilities while maintaining comparable performance to other deep models . Dedicated Related Works Section We present related works above and in the text where appropriate . Due to space limits , we refer to Appendix D for a dedicated literature review section . 2 SHAPLEY EXPLANATION NETWORKS . Background We give a short introduction to SHAP explanations and their properties as originally introduced in Lundberg & Lee ( 2017 ) . Given a model $f : \mathbb{R}^d \mapsto \mathbb{R}$ that is not inherently interpretable ( e.g. , neural nets ) , additive feature-attribution methods form a linear approximation of the function over simplified binary inputs , denoted $z \in \{ 0 , 1 \}^d$ , indicating the “ presence ” and “ absence ” of each feature , respectively : i.e. , a local linear approximation $\eta(z) = a_0 + \sum_{i=1}^{d} a_i z_i$ . While there are different ways to model “ absence ” and “ presence ” , in this work , we take a simplified viewpoint : “ presence ” means that we keep the original value whereas “ absence ” means we replace the original value with a reference value , which has been validated in Sundararajan & Najmi ( 2020 ) as Baseline Shapley . If we denote the reference vector for all features by r , then we can define a simple mapping function between z and x as $\Psi_{x , r}(z) = z \odot x + ( 1 - z ) \odot r$ , where $\odot$ denotes the element-wise product ( e.g. , $\Psi_{x , r}( [ 0 , 1 , 0 , 1 , 1 ] ) = [ r_1 , x_2 , r_3 , x_4 , x_5 ]$ ) . A simple generalization is to group certain features together and consider including or removing all features in the group . Lundberg & Lee ( 2017 ) propose three properties that additive feature attribution methods should intuitively satisfy . The first property called local accuracy states that the approximate model η at $z = \mathbf{1}$ should match the output of the model f at the corresponding x , i.e. , $f(x) = \eta(\mathbf{1}) = \sum_{i=0}^{d} a_i$ . The second property called missingness formalizes the idea that features that are “ missing ” from the input x ( or correspondingly the zeros in z ) should have zero attributed effect on the output of the approximation , i.e. , $z_i = 0 \Rightarrow a_i = 0$ . Finally , the third property called consistency formalizes the idea that if one model always sees a larger effect when removing a feature , the attribution value should be larger ( see ( Lundberg & Lee , 2017 ) for full definitions of the three properties and their discussions ) . Definition 1 ( SHAP Values ( Lundberg & Lee , 2017 ) ) . SHAP values are defined as : $\phi_i(f , x) \triangleq \frac{1}{d} \sum_{z \in Z_i} \binom{d - 1}{\| z \|_1}^{-1} \left[ f( \Psi_{x , r}( z \cup \{ i \} ) ) - f( \Psi_{x , r}( z ) ) \right]$ , ( 1 ) where $\| z \|_1$ is the $\ell_1$ norm of z given a set $Z_i = \{ z \in \{ 0 , 1 \}^d : z_i = 0 \}$ and $z \cup \{ i \}$ represents setting the i-th element in z to be 1 instead of 0 . 2.1 SHAPLEY TRANSFORM AND SHAPLEY REPRESENTATION . For the sake of generality , we will define the Shapley transform in terms of tensors ( or multidimensional arrays ) . Below in Def . 2 we make a distinction between tensor dimensions that need to be explained and other tensor dimensions . For example , in image classification , usually an explanation ( or attribution ) value is required for each pixel ( i.e. , the spatial dimensions ) but all channels are grouped together ( i.e. , an attribution is provided for each pixel but not for each channel of each pixel ) . We generalize this idea in the following definitions of explainable and channel dimensions . Definition 2 ( Explainable Dimensions and Channel Dimensions ) .
Given a tensor representation x ∈ RD×C ≡ R ( d1×d2×··· ) × ( c1×c2×··· ) , the tensor dimensions D ≡ d1 × d2 × · · · that require attribution will be called explainable dimensions and the other tensor dimensions C ≡ c1 × c2 × · · · will be called the channel dimensions . As a simplest example , if the input is a vector x ∈ Rd×1 , then D = d and C = 1 , i.e. , we have one ( since C = 1 ) importance value assigned to each feature . For images in the space Rh×w×c where h , w denote the height and width , respectively , the explainable dimensions correspond to the spatial dimensions ( i.e. , D = h× w ) and the channel dimensions correspond to the single channel dimension ( i.e. , C = c ) . While the previous examples discuss tensor dimensions of the input , our Shapley transforms can also operate on latent representations ( e.g. , in a neural net ) . For example , our latent representations for image models could be in the space Rw×h×c1×c2 , where the explainable dimensions correspond to spatial dimensions ( i.e. , D = h×w ) and there are two channel dimensions ( i.e. , C = c1 × c2 ) . Given this distinction between explainable and channel dimensions , we can now define the Shapley transform and Shapley representation . Definition 3 ( Shapley Transform ) . Given an arbitrary function f ∈ F : RD×C 7→ RC′ the Shapley transform Ω : RD×C ×F 7→ RD×C′ is defined as : [ Ω ( x , f ) ] i , j = φi ( x , fj ) , ∀i ∈ ID , j ∈ IC′ , ( 2 ) where ID denotes the set of all possible indices for values captured by the explainable dimensions D ( and similarly for IC′ ) and fj denotes the scalar function corresponding to the j-th output of f . Definition 4 ( Shapley Representation ) . Given a function f ∈ F as in Def . 3 and a tensor x ∈ RD×C , we simply define the Shapley representation to be : Z , Ω ( x , f ) ∈ RD×C′ . Notice that in the Shapley transform , we always keep the explainable dimensions D unchanged . However , the channel dimensions of the output representation space , i.e. , C ′ , are determined by the co-domain of the function f . For example , if f is a scalar function ( i.e. , C ′ = 1 ) , then the attribution for the explainable dimensions is a scalar . However , if f is a vector-valued function , then the attribution is a vector ( corresponding to the vector output of f ) . A multi-class classifier is a simple example of an f that is a vector-valued function ( e.g. , C ′ = 10 for 10-class classification tasks ) . In summary , the Shapley transform maintains the explainable tensor dimensions , but each explainable element of D may be associated to a tensor of attribution values corresponding to each output of f . | This work proposes a new model class designed to make SHAP value calculations more efficient. The proposed method exploits sparsity and additivity among intermediate values to provide fast exact SHAP values for shallow ShapNets, and fast approximate SHAP values for Deep ShapNets. This approach enables SHAP-based regularization during training, layer-wise explanations, and faster SHAP-based explanations with minimal loss in quality. | SP:585ea7586283caf39965101656d1dc17abe1b331 |
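A minimal numerical sketch of Definition 1 above , assuming a NumPy implementation ( the names baseline_shapley and masked_input and the toy two-feature model are illustrative and not taken from the released code ) : it brute-forces the Baseline Shapley values by enumerating every coalition z with z_i = 0 and applying the mapping Ψ_{x,r}(z) = z ⊙ x + ( 1 − z ) ⊙ r , which is exponential in d and is exactly the cost that SHAPNETs are built to avoid .

import itertools
from math import comb
import numpy as np

def masked_input(z, x, r):
    # Psi_{x,r}(z): keep x_j where z_j = 1 and fall back to the reference r_j where z_j = 0.
    return z * x + (1 - z) * r

def baseline_shapley(f, x, r):
    # Brute-force SHAP values of Definition 1 (Equation 1); exponential in d, illustration only.
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):  # coalition size ||z||_1 without feature i
            for subset in itertools.combinations(others, k):
                z = np.zeros(d)
                z[list(subset)] = 1.0
                z_with_i = z.copy()
                z_with_i[i] = 1.0
                weight = 1.0 / (d * comb(d - 1, k))
                phi[i] += weight * (f(masked_input(z_with_i, x, r)) - f(masked_input(z, x, r)))
    return phi

f = lambda v: v[0] * np.sin(4 * v[1])             # the "Linear x sin" toy function of Figure 1
x, r = np.array([1.0, 0.5]), np.array([0.0, 1.0])
phi = baseline_shapley(f, x, r)
assert np.isclose(phi.sum(), f(x) - f(r))         # local accuracy: attributions sum to f(x) - f(r)

The closing assertion restates the local accuracy property for Baseline Shapley : the d attributions account for the full change of the prediction relative to the reference .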
Shapley Explanation Networks | Shapley values have become one of the most popular feature attribution explanation methods . However , most prior work has focused on post-hoc Shapley explanations , which can be computationally demanding due to its exponential time complexity and preclude model regularization based on Shapley explanations during training . Thus , we propose to incorporate Shapley values themselves as latent representations in deep models—thereby making Shapley explanations first-class citizens in the modeling paradigm . This intrinsic explanation approach enables layer-wise explanations , explanation regularization of the model during training , and fast explanation computation at test time . We define the Shapley transform that transforms the input into a Shapley representation given a specific function . We operationalize the Shapley transform as a neural network module and construct both shallow and deep networks , called SHAPNETs , by composing Shapley modules . We prove that our Shallow SHAPNETs compute the exact Shapley values and our Deep SHAPNETs maintain the missingness and accuracy properties of Shapley values . We demonstrate on synthetic and real-world datasets that our SHAPNETs enable layer-wise Shapley explanations , novel Shapley regularizations during training , and fast computation while maintaining reasonable performance . Code is available at https : //github.com/inouye-lab/ShapleyExplanationNetworks . 1 INTRODUCTION . Explaining the predictions of machine learning models has become increasingly important for many crucial applications such as healthcare , recidivism prediction , or loan assessment . Explanations based on feature importance are one key approach to explaining a model prediction . More specifically , additive feature importance explanations have become popular , and in Lundberg & Lee ( 2017 ) , the authors argue for theoretically-grounded additive explanation method called SHAP based on Shapley values—a way to assign credit to members of a group developed in cooperative game theory ( Shapley , 1953 ) . Lundberg & Lee ( 2017 ) defined three intuitive theoretical properties called local accuracy , missingness , and consistency , and proved that only SHAP explanations satisfy all three properties . Despite these elegant theoretically-grounded properties , exact Shapley value computation has exponential time complexity in the general case . To alleviate the computational issue , several methods have been proposed to approximate Shapley values via sampling ( Strumbelj & Kononenko , 2010 ) and weighted regression ( Kernel SHAP ) , a modified backpropagation step ( Deep SHAP ) ( Lundberg & Lee , 2017 ) , utilization of the expectation of summations ( Ancona et al. , 2019 ) , or making assumptions on underlying data structures ( Chen et al. , 2019 ) . To avoid approximation , the model class could be restricted to allow for simpler computation . Along this line , Lundberg et al . ( 2020 ) propose a method for computing exact Shapley values for tree-based models such as random forests or gradient boosted trees . However , even if this drawback is overcome , prior Shapley work has focused on post-hoc explanations , and thus , the explanation approach can not aid in model design or training . On the other hand , Generalized Additive Models ( GAM ) as explored in Lou et al . ( 2012 ; 2013 ) ; Caruana et al . ( 2015 ) ( via tree boosting ) , Chen et al . ( 2017a ) ( via kernel methods ) , and Wang et al . ( 2018 ) ; Agarwal et al . 
( 2020 ) ( via neural networks ) can be seen as interpretable model class that exposes the exact Shapley explanation directly . In particular , the output of a GAM model is simply the summation of interaction-free functions : $f_{\mathrm{GAM}}(x) = \sum_s f_s(x_s)$ , where $f_s(\cdot)$ are univariate functions that can be arbitrarily complex . Figure 1 : Shapley representations span beyond one-dimensional manifolds and depend on both the inner function and the reference values . In both groups , the gray-scale background indicates the respective function value while the rainbow-scale color indicates correspondence between input ( left ) and Shapley representation ( right ) along with function values—red means highest and purple means lowest function values . The red cross represents the reference values . More details in subsection 2.1 . ( The omitted plot panels show the two toy functions $f(x_1 , x_2) = \sqrt{x_1^2 + x_2^2} - 1$ ( circle ) and $f(x_1 , x_2) = x_1 \times \sin 4 x_2$ ( Linear $\times$ sin ) , each as an Input panel over $(x_1 , x_2)$ and a Shapley Representation panel over $(\phi_1(f , x) , \phi_2(f , x))$ , for the reference values $(1 , 1)$ and $(0 , 1)$ . ) Interestingly , the Shapley explanation values , often denoted $\phi_s(x , f)$ , for a GAM are exactly the values of the independent function , i.e. , $\forall s , \phi_s(x , f_{\mathrm{GAM}}) = f_s(x_s)$ . Hence , the prediction and the corresponding exact SHAP explanation can be computed simultaneously for GAM models . However , GAM models are inherently limited in their representational power , particularly for perceptual data such as images in which deep networks are state-of-the-art . Thus , prior work is either post-hoc ( which precludes leveraging the method during training ) or limited in its representational power ( e.g. , GAMs ) . To overcome these drawbacks , we propose to incorporate Shapley values themselves as learned latent representations ( as opposed to post-hoc ) in deep models—thereby making Shapley explanations first-class citizens in the modeling paradigm . Intuitive illustrations of such representation are provided in Fig . 1 and detailed discussion in subsection 2.1 . We summarize our core contributions as follows : • We formally define the Shapley transform and prove a simple but useful linearity property for constructing networks . • We develop a novel network architecture , the SHAPNETs , that includes Shallow and Deep SHAPNETs and that intrinsically provides layer-wise explanations ( i.e. , explanations at every layer of the network ) in the same forward pass as the prediction . • We prove that Shallow SHAPNET explanations are the exact Shapley values—thus satisfying all three SHAP properties—and prove that Deep SHAPNET explanations maintain the missingness and local accuracy properties . • To reduce computation , we propose an instance-specific dynamic pruning method for Deep SHAPNETs that can skip unnecessary computation . • We enable explanation regularization based on Shapley values during training because the explanation is a latent representation in our model .
• We demonstrate empirically that our SHAPNETs can provide these new capabilities while maintaining comparable performance to other deep models . Dedicated Related Works Section We present related works above and in the text where appropriate . Due to space limits , we refer to Appendix D for a dedicated literature review section . 2 SHAPLEY EXPLANATION NETWORKS . Background We give a short introduction to SHAP explanations and their properties as originally introduced in Lundberg & Lee ( 2017 ) . Given a model $f : \mathbb{R}^d \mapsto \mathbb{R}$ that is not inherently interpretable ( e.g. , neural nets ) , additive feature-attribution methods form a linear approximation of the function over simplified binary inputs , denoted $z \in \{ 0 , 1 \}^d$ , indicating the “ presence ” and “ absence ” of each feature , respectively : i.e. , a local linear approximation $\eta(z) = a_0 + \sum_{i=1}^{d} a_i z_i$ . While there are different ways to model “ absence ” and “ presence ” , in this work , we take a simplified viewpoint : “ presence ” means that we keep the original value whereas “ absence ” means we replace the original value with a reference value , which has been validated in Sundararajan & Najmi ( 2020 ) as Baseline Shapley . If we denote the reference vector for all features by r , then we can define a simple mapping function between z and x as $\Psi_{x , r}(z) = z \odot x + ( 1 - z ) \odot r$ , where $\odot$ denotes the element-wise product ( e.g. , $\Psi_{x , r}( [ 0 , 1 , 0 , 1 , 1 ] ) = [ r_1 , x_2 , r_3 , x_4 , x_5 ]$ ) . A simple generalization is to group certain features together and consider including or removing all features in the group . Lundberg & Lee ( 2017 ) propose three properties that additive feature attribution methods should intuitively satisfy . The first property called local accuracy states that the approximate model η at $z = \mathbf{1}$ should match the output of the model f at the corresponding x , i.e. , $f(x) = \eta(\mathbf{1}) = \sum_{i=0}^{d} a_i$ . The second property called missingness formalizes the idea that features that are “ missing ” from the input x ( or correspondingly the zeros in z ) should have zero attributed effect on the output of the approximation , i.e. , $z_i = 0 \Rightarrow a_i = 0$ . Finally , the third property called consistency formalizes the idea that if one model always sees a larger effect when removing a feature , the attribution value should be larger ( see ( Lundberg & Lee , 2017 ) for full definitions of the three properties and their discussions ) . Definition 1 ( SHAP Values ( Lundberg & Lee , 2017 ) ) . SHAP values are defined as : $\phi_i(f , x) \triangleq \frac{1}{d} \sum_{z \in Z_i} \binom{d - 1}{\| z \|_1}^{-1} \left[ f( \Psi_{x , r}( z \cup \{ i \} ) ) - f( \Psi_{x , r}( z ) ) \right]$ , ( 1 ) where $\| z \|_1$ is the $\ell_1$ norm of z given a set $Z_i = \{ z \in \{ 0 , 1 \}^d : z_i = 0 \}$ and $z \cup \{ i \}$ represents setting the i-th element in z to be 1 instead of 0 . 2.1 SHAPLEY TRANSFORM AND SHAPLEY REPRESENTATION . For the sake of generality , we will define the Shapley transform in terms of tensors ( or multidimensional arrays ) . Below in Def . 2 we make a distinction between tensor dimensions that need to be explained and other tensor dimensions . For example , in image classification , usually an explanation ( or attribution ) value is required for each pixel ( i.e. , the spatial dimensions ) but all channels are grouped together ( i.e. , an attribution is provided for each pixel but not for each channel of each pixel ) . We generalize this idea in the following definitions of explainable and channel dimensions . Definition 2 ( Explainable Dimensions and Channel Dimensions ) .
Given a tensor representation x ∈ RD×C ≡ R ( d1×d2×··· ) × ( c1×c2×··· ) , the tensor dimensions D ≡ d1 × d2 × · · · that require attribution will be called explainable dimensions and the other tensor dimensions C ≡ c1 × c2 × · · · will be called the channel dimensions . As a simplest example , if the input is a vector x ∈ Rd×1 , then D = d and C = 1 , i.e. , we have one ( since C = 1 ) importance value assigned to each feature . For images in the space Rh×w×c where h , w denote the height and width , respectively , the explainable dimensions correspond to the spatial dimensions ( i.e. , D = h× w ) and the channel dimensions correspond to the single channel dimension ( i.e. , C = c ) . While the previous examples discuss tensor dimensions of the input , our Shapley transforms can also operate on latent representations ( e.g. , in a neural net ) . For example , our latent representations for image models could be in the space Rw×h×c1×c2 , where the explainable dimensions correspond to spatial dimensions ( i.e. , D = h×w ) and there are two channel dimensions ( i.e. , C = c1 × c2 ) . Given this distinction between explainable and channel dimensions , we can now define the Shapley transform and Shapley representation . Definition 3 ( Shapley Transform ) . Given an arbitrary function f ∈ F : RD×C 7→ RC′ the Shapley transform Ω : RD×C ×F 7→ RD×C′ is defined as : [ Ω ( x , f ) ] i , j = φi ( x , fj ) , ∀i ∈ ID , j ∈ IC′ , ( 2 ) where ID denotes the set of all possible indices for values captured by the explainable dimensions D ( and similarly for IC′ ) and fj denotes the scalar function corresponding to the j-th output of f . Definition 4 ( Shapley Representation ) . Given a function f ∈ F as in Def . 3 and a tensor x ∈ RD×C , we simply define the Shapley representation to be : Z , Ω ( x , f ) ∈ RD×C′ . Notice that in the Shapley transform , we always keep the explainable dimensions D unchanged . However , the channel dimensions of the output representation space , i.e. , C ′ , are determined by the co-domain of the function f . For example , if f is a scalar function ( i.e. , C ′ = 1 ) , then the attribution for the explainable dimensions is a scalar . However , if f is a vector-valued function , then the attribution is a vector ( corresponding to the vector output of f ) . A multi-class classifier is a simple example of an f that is a vector-valued function ( e.g. , C ′ = 10 for 10-class classification tasks ) . In summary , the Shapley transform maintains the explainable tensor dimensions , but each explainable element of D may be associated to a tensor of attribution values corresponding to each output of f . | The paper proposes to incorporate Shapley values as latent representations in deep models. Specifically, the paper constructs Shallow SHAPNETs that computes the exact Shapley values. The paper also constructs Deep SHAPNETs that maintain the missingness and accuracy properties of Shapley values. The effectiveness of the proposed SHAPNETs is demonstrated through experiments on synthetic and real-world data. | SP:585ea7586283caf39965101656d1dc17abe1b331 |
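A small sketch of the Shapley transform of Definition 3 above , again under a brute-force Baseline Shapley assumption ( the names shapley_transform and baseline_shapley , the toy two-output function and the all-zero reference are illustrative , not the SHAPNET module itself ) : for a vector-valued f the transform keeps one attribution per explainable index i and stacks one channel per output j , so the Shapley representation Z = Ω ( x , f ) has shape D × C′ .

import itertools
from math import comb
import numpy as np

def baseline_shapley(f, x, r):
    # Exact Baseline Shapley values of a scalar-valued f (exponential in d; illustration only).
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for subset in itertools.combinations(others, k):
                z = np.zeros(d)
                z[list(subset)] = 1.0
                z_i = z.copy()
                z_i[i] = 1.0
                w = 1.0 / (d * comb(d - 1, k))
                phi[i] += w * (f(z_i * x + (1 - z_i) * r) - f(z * x + (1 - z) * r))
    return phi

def shapley_transform(f, x, r):
    # Omega(x, f): one Shapley value per explainable index i and per output j of f (Definition 3).
    out_dim = len(f(x))
    return np.stack([baseline_shapley(lambda v, j=j: f(v)[j], x, r) for j in range(out_dim)], axis=-1)

f = lambda v: np.array([v[0] + v[1] * v[2], np.tanh(v[0] - v[2])])   # toy function with C' = 2 outputs
x, r = np.array([0.5, -1.0, 2.0]), np.zeros(3)
Z = shapley_transform(f, x, r)                  # shape (3, 2): D = 3 explainable features, C' = 2 channels
assert Z.shape == (3, 2)
assert np.allclose(Z.sum(axis=0), f(x) - f(r))  # local accuracy holds channel-wise

Stacking the outputs along a new channel axis is what allows the attribution of a multi-class classifier to be a vector per feature , as discussed after Definition 4 .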
Learning Incompressible Fluid Dynamics from Scratch - Towards Fast, Differentiable Fluid Models that Generalize | 1 INTRODUCTION . Simulating the behavior of fluids by solving the incompressible Navier-Stokes equations is of great importance for a wide range of applications and accurate as well as fast fluid simulations are a long-standing research goal . On top of simulating the behavior of fluids , several applications such as sensitivity analysis of fluids or gradient-based control algorithms rely on differentiable fluid simulators that allow to propagate gradients throughout the simulation ( Holl et al . ( 2020 ) ) . Recent advances in deep learning aim for fast and accurate fluid simulations but rely on vast datasets and / or do not generalize to new fluid domains . Kim et al . ( 2019 ) present a framework to learn parameterized fluid simulations and allow to interpolate efficiently in between such simulations . However , their work does not generalize to new domain geometries that lay outside the training data . Kim & Lee ( 2020 ) train a RNN-GAN that produces turbulent flow fields within a pipe domain , but do not show generalization results beyond pipe domains . Xie et al . ( 2018 ) introduce a tempoGAN to perform temporally consistent superresolution of smoke simulations . This allows to produce plausible high-resolution smoke-density fields for arbitrary low-resolution inputs , but our fluid model should output a complete fluid state description consisting of a velocity and a pressure field . Tompson et al . ( 2017 ) present how a Helmholtz projection step can be learned to accelerate Eulerian fluid simulations . This method generalizes to new domain geometries , but a particle tracer is needed to deal with the advection term of the Navier-Stokes equations . Furthermore , as Eulerian fluids do not model viscosity , effects like e.g . the Magnus effect or Kármán vortex streets can not be simulated . Geneva & Zabaras ( 2020 ) propose a physics-informed framework to learn the entire update step for the Burgers equations in 1D and 2D , but no generalization results for new domain geometries are demonstrated . All of the aforementioned methods rely on the availability of vast amounts of data from fluid-solvers such as FEniCS , OpenFOAM or Mantaflow . Most of these methods do not generalize well or outsource a major part of the fluid simulation to traditional methods such as low-resolution fluid solvers or a particle tracer . In this work , we propose a novel unsupervised training framework to learn incompressible fluid dynamics from scratch . It does not require any simulated fluid-data ( neither as ground truth data , nor to train an adversarial network , nor to initialize frames for a physics-constrained loss ) and generalizes to fluid domains unseen during training . It allows CNNs to learn the entire update-step of mapping a fluid domain from time-point t to t + dt without having to rely on low resolution fluid-solvers or a particle-tracer . In fact , we will demonstrate that a physicsconstrained loss function combined with a simple strategy to recycle fluid-data generated by the neural network at training time suffices to teach CNNs fluid dynamics on increasingly realistic statistics of fluid states . This drastically simplifies the training pipeline . Fluid simulations get efficiently unrolled in time by recurrently applying the trained model on a fluid state . 
Furthermore , the fluid models include viscous friction and handle effects such as the Magnus effect and Kármán vortex streets . On top of that , we show by a gradient-based optimal control example how backpropagation through time can be used to differentiate the fluid simulation . Code and pretrained models are publicly available at https : //github.com/aschethor/ Unsupervised_Deep_Learning_of_Incompressible_Fluid_Dynamics/ . 2 RELATED WORK . In literature , several different approaches can be found that aim to approximate the dynamics of PDEs in general and fluids in particular with efficient , learning-based surrogate models . Lagrangian methods such as smoothed particle hydrodynamcs ( SPH ) Gingold & Monaghan ( 1977 ) handle fluids from the perspective of many individual particles that move with the velocity field . Following this approach , learning-based methods using regression forests by Ladický et al . ( 2015 ) , graph neural networks by Mrowca et al . ( 2018 ) ; Li et al . ( 2019 ) and continuous convolutions by Ummenhofer et al . ( 2020 ) have been developed . In addition , Smooth Particle Networks ( SP-Nets ) by Schenck & Fox ( 2018 ) allow for differentiable fluid simulations within the Lagrangian frame of reference . These Lagrangian methods are particularly suitable when a fluid domain exhibits large , dynamic surfaces ( e.g . waves or droplets ) . However , to simulate the dynamics within a fluid domain accurately , Eulerian methods , that treat the Navier-Stokes equations in a fixed frame of reference , are usually better suited . Continuous Eulerian methods allow for mesh-free solutions by mapping domain coordinates ( e.g . x , y , t ) directly onto field values ( e.g . velocity ~v / pressure p ) ( Sirignano & Spiliopoulos ( 2018 ) ; Grohs et al . ( 2018 ) ; Khoo et al . ( 2019 ) ) . Recent applications focused on flow through porous media ( Zhu & Zabaras ( 2018 ) ; Zhu et al . ( 2019 ) ; Tripathy & Bilionis ( 2018 ) ) , fluid modeling ( Yang et al . ( 2016 ) ; Raissi et al . ( 2018 ) ) , turbulence modeling ( Geneva & Zabaras ( 2019 ) ; Ling et al . ( 2016 ) ) and modeling of molecular dynamics ( Schöberl et al . ( 2019 ) ) . Training is usually based on physics-constrained loss functions that penalize residuals of the underlying PDEs . Similar to our approach , Raissi et al . ( 2019 ) uses vector potentials to obtain continuous divergence-free velocity fields to approximate the incompressible Navier-Stokes equations . Continuous methods return smooth , accurate results and can overcome the curse of dimensionality of discrete techniques in high-dimensional PDEs ( Grohs et al . ( 2018 ) ) . However , these networks are trained on a specific domain and can not generalize to new environments or be used in interactive scenarios . Discrete Eulerian methods , on the other hand , aim to solve the underlying PDEs on a grid and early work dates back to Harlow & Welch ( 1965 ) and Stam ( 1999 ) . Accelerating such traditional works with deep learning techniques is a major field of research and all of the methods mentioned in the introduction fall into this category . Further methods include the approach by Thuerey et al . ( 2019 ) to learn solutions of the Reynolds-averaged Navier-Stokes equations for airfoil flows , but requires large amounts of training data and does not generalize beyond airfoil flows . In the work by Um et al . 
( 2020 ) , a correction step is learned that brings solutions of a low-resolution differentiable fluid solver closer to solutions of a high-resolution fluid simulation . However , generalization results for new domain geometries were not presented . The works of Mohan et al . ( 2020 ) and Kim et al . ( 2019 ) show that vector potentials are suitable to enforce the incompressibility constraint in fluids but do not generalize to new fluid domains beyond their training data . 3 METHOD . In this section , we briefly review the incompressible Navier-Stokes equations , which are to be solved by the neural network . Then , we explain how the Helmholtz decomposition can be exploited to ensure incompressibility within the fluid domain . Furthermore , we provide details of our discrete spatio-temporal fluid representation and introduce the fluid model . Afterwards , we formulate a physics-constrained loss function based on residuals of the Navier-Stokes equations and introduce a pressure regularization term for very high Reynolds numbers . Finally , we explain the unsupervised training strategy . 3.1 INCOMPRESSIBLE NAVIER-STOKES EQUATIONS . Most fluids can be modeled with the incompressible Navier-Stokes equations - a set of non-linear equations that describe the interplay of a velocity field ~v and a pressure field p within a fluid domain Ω : ∇ · ~v = 0 incompressibility on Ω ( 1 ) ρ ~̇v = ρ ( ∂~v / ∂t + ( ~v · ∇ ) ~v ) = −∇p + µ∆~v + ~f conservation of momentum on Ω ( 2 ) Here , ρ describes the fluid density and µ the viscosity . Equation 1 states that the fluid is incompressible and thus ~v is divergence-free . Equation 2 states that the change in momentum of fluid particles must correspond to the sum of forces that arise from the pressure gradient , viscous friction and external forces . Here , external forces on the fluid ( such as e.g . gravity ) can be neglected , so we set ~f = 0 . These incompressible Navier-Stokes equations shall be solved by a CNN given initial conditions ~v0 and p0 at the beginning of the simulation and Dirichlet boundary conditions which constrain the velocity field at the domain boundary ∂Ω : ~v = ~vd Dirichlet boundary condition on ∂Ω ( 3 ) 3.2 HELMHOLTZ DECOMPOSITION . A common method to ensure incompressibility of a fluid ( see Equation 1 ) is to project the flow field onto the divergence-free part of its Helmholtz decomposition . The Helmholtz theorem states that every vector field ~v can be decomposed into a curl-free part ( ∇q ) and a divergence-free part ( ∇× ~a ) : ~v = ∇q + ∇× ~a ( 4 ) Note that ∇ × ( ∇q ) = ~0 and ∇ · ( ∇ × ~a ) = 0 . The Helmholtz projection consists of solving the Poisson problem ∇ · ~v = ∆q for q , followed by subtracting ∇q from the original flow field . However , solving the Poisson equation on arbitrary domains comes at high computational costs for classical methods and one has to rely e.g . on conjugate gradient methods to approximate its solution . Here , we propose a different approach and directly try to learn a vector potential ~a with ~v = ∇× ~a . This ensures that the network outputs a divergence-free velocity field within the domain Ω and automatically solves Equation 1 . In this work , we consider 2D fluid simulations , so only the z-component of ~a , az , is of interest since vz and all derivatives with respect to the z-axis are zero : ∇× ~a = ( ∂y az − ∂z ay , ∂z ax − ∂x az , ∂x ay − ∂y ax ) = ( ∂y az , −∂x az , 0 ) = ( vx , vy , 0 ) = ~v ( 5 ) 3.3 DISCRETE SPATIO-TEMPORAL FLUID REPRESENTATION .
Marker-And-Cell ( MAC ) grid To solve the Navier-Stokes equations , we represent the relation between az , vx , vy , p on a 2D staggered marker-and-cell ( MAC ) grid ( see Figure 1a ) . Therefore , we discretise time and space as follows : ~a ( x , y , t ) = ( 0 , 0 , ( az )^t_{i , j} ) ; ~v ( x , y , t ) = ( ( vx )^t_{i , j} , ( vy )^t_{i , j} ) ; p ( x , y , t ) = p^t_{i , j} ( 6 ) Obtaining gradient , divergence , Laplace and curl operations on this grid with finite differences is straightforward and can be efficiently implemented with convolutions ( see appendix A ) . Explicit , Implicit , Implicit-Explicit ( IMEX ) time integration methods The discretization of the time domain is needed to deal with the time-derivative of the velocity field ∂~v / ∂t in Equation 2 , which becomes : ρ ( ( ~v^{t+dt} − ~v^t ) / dt + ( ~v^{t′} · ∇ ) ~v^{t′} ) = −∇p^{t+dt} + µ∆~v^{t′} + ~f ( 7 ) The goal is to take timesteps dt that are as large as possible while maintaining stable and accurate solutions . Stability and accuracy largely depend on the definition of v^{t′} . In the literature , choosing v^{t′} = v^t is often referred to as explicit integration methods and frequently leads to unstable behavior . Choosing v^{t′} = v^{t+dt} is usually associated with implicit integration methods and gives stable solutions at the cost of numerical dissipation . Implicit-Explicit ( IMEX ) methods , which set v^{t′} = ( v^t + v^{t+dt} ) / 2 , are a compromise between both methods and considered to be more accurate but less stable than implicit methods . | This paper proposes to learn the dynamics of an incompressible fluid via a physics informed loss formulation using an unsupervised training framework. It employs a custom solver that is executed at training time to learn a Navier-Stokes residual with an incompressible (curl of a stream function) formulation. This setup is demonstrated for two dimensional Kármán vortex streets, and a control example for the Magnus effect. | SP:cd3d672f555b7a88704ad3142aca702ec7154258
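A simplified sketch of the momentum residual of Equation 7 that such a physics-constrained loss can penalize ( the collocated periodic grid with central differences via np.roll , the constants and the name momentum_residual are assumptions for illustration ; the paper instead works on the staggered MAC grid with the convolution-based operators of appendix A ) : given ~v^t , a candidate ~v^{t+dt} and p^{t+dt} , the residual is evaluated for the explicit , implicit or IMEX choice of v^{t′} .

import numpy as np

dx, dt = 0.1, 0.1
rho, mu = 1.0, 0.01

def ddx(f):
    # central difference along x (axis 1) on a periodic grid; stands in for the MAC-grid convolutions
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)

def ddy(f):
    # central difference along y (axis 0)
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)

def laplace(f):
    return (np.roll(f, -1, 0) + np.roll(f, 1, 0) + np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / dx ** 2

def momentum_residual(v_old, v_new, p_new, scheme="imex"):
    # Residual of Equation 7 (external force set to zero); v_old, v_new are (vx, vy) pairs of 2D arrays.
    vx_p = {"explicit": v_old[0], "implicit": v_new[0], "imex": 0.5 * (v_old[0] + v_new[0])}[scheme]
    vy_p = {"explicit": v_old[1], "implicit": v_new[1], "imex": 0.5 * (v_old[1] + v_new[1])}[scheme]
    res_x = (rho * ((v_new[0] - v_old[0]) / dt + vx_p * ddx(vx_p) + vy_p * ddy(vx_p))
             + ddx(p_new) - mu * laplace(vx_p))
    res_y = (rho * ((v_new[1] - v_old[1]) / dt + vx_p * ddx(vy_p) + vy_p * ddy(vy_p))
             + ddy(p_new) - mu * laplace(vy_p))
    return res_x, res_y

v0 = (np.zeros((16, 16)), np.zeros((16, 16)))           # a fluid at rest with constant pressure ...
rx, ry = momentum_residual(v0, v0, np.zeros((16, 16)))
assert np.allclose(rx, 0.0) and np.allclose(ry, 0.0)    # ... trivially satisfies the momentum equation

Squaring and averaging such residuals over the domain gives the scalar physics-constrained loss that can be minimized with respect to the network predicting ~v^{t+dt} and p^{t+dt} .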
Learning Incompressible Fluid Dynamics from Scratch - Towards Fast, Differentiable Fluid Models that Generalize | 1 INTRODUCTION . Simulating the behavior of fluids by solving the incompressible Navier-Stokes equations is of great importance for a wide range of applications and accurate as well as fast fluid simulations are a long-standing research goal . On top of simulating the behavior of fluids , several applications such as sensitivity analysis of fluids or gradient-based control algorithms rely on differentiable fluid simulators that allow to propagate gradients throughout the simulation ( Holl et al . ( 2020 ) ) . Recent advances in deep learning aim for fast and accurate fluid simulations but rely on vast datasets and / or do not generalize to new fluid domains . Kim et al . ( 2019 ) present a framework to learn parameterized fluid simulations and allow to interpolate efficiently in between such simulations . However , their work does not generalize to new domain geometries that lay outside the training data . Kim & Lee ( 2020 ) train a RNN-GAN that produces turbulent flow fields within a pipe domain , but do not show generalization results beyond pipe domains . Xie et al . ( 2018 ) introduce a tempoGAN to perform temporally consistent superresolution of smoke simulations . This allows to produce plausible high-resolution smoke-density fields for arbitrary low-resolution inputs , but our fluid model should output a complete fluid state description consisting of a velocity and a pressure field . Tompson et al . ( 2017 ) present how a Helmholtz projection step can be learned to accelerate Eulerian fluid simulations . This method generalizes to new domain geometries , but a particle tracer is needed to deal with the advection term of the Navier-Stokes equations . Furthermore , as Eulerian fluids do not model viscosity , effects like e.g . the Magnus effect or Kármán vortex streets can not be simulated . Geneva & Zabaras ( 2020 ) propose a physics-informed framework to learn the entire update step for the Burgers equations in 1D and 2D , but no generalization results for new domain geometries are demonstrated . All of the aforementioned methods rely on the availability of vast amounts of data from fluid-solvers such as FEniCS , OpenFOAM or Mantaflow . Most of these methods do not generalize well or outsource a major part of the fluid simulation to traditional methods such as low-resolution fluid solvers or a particle tracer . In this work , we propose a novel unsupervised training framework to learn incompressible fluid dynamics from scratch . It does not require any simulated fluid-data ( neither as ground truth data , nor to train an adversarial network , nor to initialize frames for a physics-constrained loss ) and generalizes to fluid domains unseen during training . It allows CNNs to learn the entire update-step of mapping a fluid domain from time-point t to t + dt without having to rely on low resolution fluid-solvers or a particle-tracer . In fact , we will demonstrate that a physicsconstrained loss function combined with a simple strategy to recycle fluid-data generated by the neural network at training time suffices to teach CNNs fluid dynamics on increasingly realistic statistics of fluid states . This drastically simplifies the training pipeline . Fluid simulations get efficiently unrolled in time by recurrently applying the trained model on a fluid state . 
Furthermore , the fluid models include viscous friction and handle effects such as the Magnus effect and Kármán vortex streets . On top of that , we show by a gradient-based optimal control example how backpropagation through time can be used to differentiate the fluid simulation . Code and pretrained models are publicly available at https : //github.com/aschethor/ Unsupervised_Deep_Learning_of_Incompressible_Fluid_Dynamics/ . 2 RELATED WORK . In literature , several different approaches can be found that aim to approximate the dynamics of PDEs in general and fluids in particular with efficient , learning-based surrogate models . Lagrangian methods such as smoothed particle hydrodynamcs ( SPH ) Gingold & Monaghan ( 1977 ) handle fluids from the perspective of many individual particles that move with the velocity field . Following this approach , learning-based methods using regression forests by Ladický et al . ( 2015 ) , graph neural networks by Mrowca et al . ( 2018 ) ; Li et al . ( 2019 ) and continuous convolutions by Ummenhofer et al . ( 2020 ) have been developed . In addition , Smooth Particle Networks ( SP-Nets ) by Schenck & Fox ( 2018 ) allow for differentiable fluid simulations within the Lagrangian frame of reference . These Lagrangian methods are particularly suitable when a fluid domain exhibits large , dynamic surfaces ( e.g . waves or droplets ) . However , to simulate the dynamics within a fluid domain accurately , Eulerian methods , that treat the Navier-Stokes equations in a fixed frame of reference , are usually better suited . Continuous Eulerian methods allow for mesh-free solutions by mapping domain coordinates ( e.g . x , y , t ) directly onto field values ( e.g . velocity ~v / pressure p ) ( Sirignano & Spiliopoulos ( 2018 ) ; Grohs et al . ( 2018 ) ; Khoo et al . ( 2019 ) ) . Recent applications focused on flow through porous media ( Zhu & Zabaras ( 2018 ) ; Zhu et al . ( 2019 ) ; Tripathy & Bilionis ( 2018 ) ) , fluid modeling ( Yang et al . ( 2016 ) ; Raissi et al . ( 2018 ) ) , turbulence modeling ( Geneva & Zabaras ( 2019 ) ; Ling et al . ( 2016 ) ) and modeling of molecular dynamics ( Schöberl et al . ( 2019 ) ) . Training is usually based on physics-constrained loss functions that penalize residuals of the underlying PDEs . Similar to our approach , Raissi et al . ( 2019 ) uses vector potentials to obtain continuous divergence-free velocity fields to approximate the incompressible Navier-Stokes equations . Continuous methods return smooth , accurate results and can overcome the curse of dimensionality of discrete techniques in high-dimensional PDEs ( Grohs et al . ( 2018 ) ) . However , these networks are trained on a specific domain and can not generalize to new environments or be used in interactive scenarios . Discrete Eulerian methods , on the other hand , aim to solve the underlying PDEs on a grid and early work dates back to Harlow & Welch ( 1965 ) and Stam ( 1999 ) . Accelerating such traditional works with deep learning techniques is a major field of research and all of the methods mentioned in the introduction fall into this category . Further methods include the approach by Thuerey et al . ( 2019 ) to learn solutions of the Reynolds-averaged Navier-Stokes equations for airfoil flows , but requires large amounts of training data and does not generalize beyond airfoil flows . In the work by Um et al . 
( 2020 ) , a correction step is learned that brings solutions of a low-resolution differentiable fluid solver closer to solutions of a high-resolution fluid simulation . However , generalization results for new domain geometries were not presented . The works of Mohan et al . ( 2020 ) and Kim et al . ( 2019 ) show that vector potentials are suitable to enforce the incompressibility constraint in fluids but do not generalize to new fluid domains beyond their training data . 3 METHOD . In this section , we briefly review the incompressible Navier-Stokes equations , which are to be solved by the neural network . Then , we explain how the Helmholtz decomposition can be exploited to ensure incompressibility within the fluid domain . Furthermore , we provide details of our discrete spatio-temporal fluid representation and introduce the fluid model . Afterwards , we formulate a physics-constrained loss function based on residuals of the Navier-Stokes equations and introduce a pressure regularization term for very high Reynolds numbers . Finally , we explain the unsupervised training strategy . 3.1 INCOMPRESSIBLE NAVIER-STOKES EQUATIONS . Most fluids can be modeled with the incompressible Navier-Stokes equations - a set of non-linear equations that describe the interplay of a velocity field ~v and a pressure field p within a fluid domain Ω : ∇ · ~v = 0 incompressibility on Ω ( 1 ) ρ~̇v = ρ ( ∂~v ∂t + ( ~v · ∇ ) ~v ) = −∇p+ µ∆~v + ~f conservation of momentum on Ω ( 2 ) Here , ρ describes the fluid density and µ the viscosity . Equation 1 states that the fluid is incompressible and thus ~v is divergence-free . Equation 2 states that the change in momentum of fluid particles must correspond to the sum of forces that arise from the pressure gradient , viscous friction and external forces . Here , external forces on the fluid ( such as e.g . gravity ) can be neglected , so we set ~f = 0 . These incompressible Navier-Stokes equations shall be solved by a CNN given initial conditions ~v0 and p0 at the beginning of the simulation and Dirichlet boundary conditions which constrain the velocity field at the domain boundary ∂Ω : ~v = ~vd Dirichlet boundary condition on ∂Ω ( 3 ) 3.2 HELMHOLTZ DECOMPOSITION . A common method to ensure incompressibility of a fluid ( see Equation 1 ) is to project the flow field onto the divergence-free part of its Helmholtz decomposition . The Helmholtz theorem states that every vector field ~v can be decomposed into a curl-free part ( ∇q ) and a divergence-free part ( ∇×~a ) : ~v = ∇q +∇× ~a ( 4 ) Note , that ∇ × ( ∇q ) = ~0 and ∇ · ( ∇ × ~a ) = 0 . The Helmholtz projection consists of solving the Poisson problem ∇ · ~v = ∆q for q , followed by substracting ∇q from the original flow field . However , solving the Poisson equation on arbitrary domains comes at high computational costs for classical methods and one has to rely e.g . on conjugate gradient methods to approximate its solution . Here , we propose a different approach and directly try to learn a vector potential ~a with ~v = ∇×~a . This ensures that the network outputs a divergence-free velocity field within the domain Ω and automatically solves Equation 1 . In this work , we consider 2D fluid simulations , so only the zcomponent of ~a , az , is of interest since vz and all derivatives with respect to the z-axis are zero : ∇× ~a = ( ∂yaz − ∂zay ∂zax − ∂xaz ∂xay − ∂yax ) = ( ∂yaz −∂xaz 0 ) = ( vx vy 0 ) = ~v ( 5 ) 3.3 DISCRETE SPATIO-TEMPORAL FLUID REPRESENTATION . 
Marker-And-Cell (MAC) grid. To solve the Navier-Stokes equations, we represent the relation between $a_z$, $v_x$, $v_y$, and $p$ on a 2D staggered marker-and-cell (MAC) grid (see Figure 1a). Therefore, we discretise time and space as follows:
$$\vec{a}(x,y,t) = \begin{pmatrix} 0 \\ 0 \\ (a_z)^t_{i,j} \end{pmatrix}; \qquad \vec{v}(x,y,t) = \begin{pmatrix} (v_x)^t_{i,j} \\ (v_y)^t_{i,j} \end{pmatrix}; \qquad p(x,y,t) = p^t_{i,j} \qquad (6)$$
Obtaining gradient, divergence, Laplace, and curl operations on this grid with finite differences is straightforward and can be efficiently implemented with convolutions (see Appendix A). Explicit, Implicit, and Implicit-Explicit (IMEX) time integration methods. The discretization of the time domain is needed to deal with the time derivative of the velocity field $\frac{\partial \vec{v}}{\partial t}$ in Equation 2, which becomes:
$$\rho \left( \frac{\vec{v}^{\,t+dt} - \vec{v}^{\,t}}{dt} + (\vec{v}^{\,t'} \cdot \nabla)\, \vec{v}^{\,t'} \right) = -\nabla p^{\,t+dt} + \mu \Delta \vec{v}^{\,t'} + \vec{f} \qquad (7)$$
The goal is to take timesteps $dt$ that are as large as possible while maintaining stable and accurate solutions. Stability and accuracy largely depend on the definition of $\vec{v}^{\,t'}$. In the literature, choosing $\vec{v}^{\,t'} = \vec{v}^{\,t}$ is referred to as an explicit integration method and frequently leads to unstable behavior. Choosing $\vec{v}^{\,t'} = \vec{v}^{\,t+dt}$ is associated with implicit integration methods and gives stable solutions at the cost of numerical dissipation. Implicit-Explicit (IMEX) methods, which set $\vec{v}^{\,t'} = (\vec{v}^{\,t} + \vec{v}^{\,t+dt})/2$, are a compromise between the two and are considered to be more accurate but less stable than implicit methods. | This paper presents a "physics-informed" deep learning model of fluid dynamics. The underlying deep learning architecture is a fairly standard U-Net, but one of the proposed method's distinguishing features is that it enforces adherence to physical behavior through its loss terms, by penalizing predictions that are not incompressible or do not conserve momentum. Notably, this approach allows the model to be trained in an unsupervised manner, without requiring the generation of ground-truth simulations. | SP:cd3d672f555b7a88704ad3142aca702ec7154258
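To make the IMEX discretization in Equation 7 concrete, the sketch below evaluates the momentum residual that a physics-constrained loss could penalize, using $\vec{v}^{\,t'} = (\vec{v}^{\,t} + \vec{v}^{\,t+dt})/2$. It is only schematic: central differences, unit spacing, periodic wrap, and zero external force are assumptions, and the paper's actual loss additionally handles domain boundaries and term weighting.

```python
import numpy as np

def ddx(f, h=1.0):   # central difference along x (axis 1); periodic wrap assumed
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * h)

def ddy(f, h=1.0):   # central difference along y (axis 0)
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * h)

def laplace(f, h=1.0):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / h**2

def imex_momentum_residual(v_t, v_next, p_next, rho, mu, dt, h=1.0):
    """Residual of the discretised momentum equation (7) with the IMEX choice
    v^{t'} = (v^t + v^{t+dt}) / 2; returns one array per velocity component.
    External force f is set to zero, as in the paper."""
    vx_b = 0.5 * (v_t[0] + v_next[0])
    vy_b = 0.5 * (v_t[1] + v_next[1])
    adv_x = vx_b * ddx(vx_b, h) + vy_b * ddy(vx_b, h)   # (v' . nabla) v'_x
    adv_y = vx_b * ddx(vy_b, h) + vy_b * ddy(vy_b, h)   # (v' . nabla) v'_y
    res_x = rho * ((v_next[0] - v_t[0]) / dt + adv_x) + ddx(p_next, h) - mu * laplace(vx_b, h)
    res_y = rho * ((v_next[1] - v_t[1]) / dt + adv_y) + ddy(p_next, h) - mu * laplace(vy_b, h)
    return res_x, res_y   # a loss could penalise (res_x**2 + res_y**2).mean()
```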
Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning | 1 INTRODUCTION . Federated learning has become an important paradigm in large-scale machine learning where the training data remains distributed over a large number of clients , which may be mobile phones or network sensors ( Konečnỳ et al. , 2016b ; a ; McMahan et al. , 2017 ; Mohri et al. , 2019 ; Kairouz et al. , 2019 ) . A centralized model , here referred to as a server model , is then trained , without ever transmitting client data over the network , thereby providing some basic levels of data privacy and security . Two important settings are distinguished in Federated learning ( Kairouz et al. , 2019 , Table 1 ) : the cross-device and the cross-silo settings . The cross-silo setting corresponds to a relatively small number of reliable clients , typically organizations , such as medical or financial institutions . In contrast , in the cross-device federated learning setting , the number of clients may be extremely large and include , for example , all 3.5 billion active android phones ( Holst , 2019 ) . Thus , in that setting , we may never make even a single pass over the entire clients ’ data during training . The cross-device setting is further characterized by resource-poor clients communicating over a highly unreliable network . Together , the essential features of this setting give rise to unique challenges not present in the cross-silo setting . Here , we are interested in the cross-device setting , for which we will formalize and study stochastic optimization algorithms . The de facto standard algorithm for this setting is FEDAVG ( McMahan et al. , 2017 ) , which performs multiple SGD updates on the available clients , before communicating to the server . While this approach can reduce the total amount of communication required , performing multiple steps on the same client can lead to ‘ over-fitting ’ to its atypical local data , a phenomenon known as client drift ( Karimireddy et al. , 2020 ) . Furthermore , algorithmic innovations such as momentum ( Sutskever et al. , 2013 ; Cutkosky and Orabona , 2019 ) , adaptivity ( Kingma and Ba , 2014 ; Zaheer et al. , 2018 ; Zhang et al. , 2019 ) , and clipping ( You et al. , 2017 ; 2019 ; Zhang et al. , 2020 ) are critical to the success of deep learning applications and need to be incorporated into the client updates , replacing the SGD update of FEDAVG . Perhaps due to such deficiencies , there exists a large gap in performance between the centralized setting , where data is centrally collected on the server , and the federated setting ( Zhao et al. , 2018 ; Hsieh et al. , 2019 ; Hsu et al. , 2019 ; Karimireddy et al. , 2020 ) . To overcome such deficiencies , we propose a new framework , MIME , that mitigates client drift and adapts arbitrary centralized optimization algorithms , e.g . SGD with momentum or Adam , to the federated setting . In each local client update , MIME uses global statistics , e.g . momentum , and an SVRG-style correction to mimic the updates of the centralized algorithm run on i.i.d . data . These global statistics are computed only at the server level and kept fixed throughout the local steps , thereby avoiding a bias due to the atypical local data of any single client . Contributions . We summarize our main results below . • We formalize the cross-device federated learning problem , and propose a new framework MIME that can adapt arbitrary centralized algorithms to this setting . 
• We prove that incorporating server momentum into each local client update reduces client drift and leads to optimal statistical rates . • Further , we quantify the usefulness of performing multiple local updates on a single client by carefully tracking the bias ( client-drift ) introduced . This is the first analysis showing improved rates by taking additional multiple steps for general smooth functions . • Finally , we also propose a simpler variant , MIMELITE , with an empirical performance similar to MIME . We report the results of thorough experimental analysis demonstrating that both MIME and MIMELITE are faster than FEDAVG . Related work . Analysis of FedAvg : Much of the recent work in federated learning has focused on analyzing FEDAVG . For identical clients , FEDAVG coincides with parallel SGD , for which Zinkevich et al . ( 2010 ) derived an analysis with asymptotic convergence . Sharper and more refined analyses of the same method , sometimes called local SGD , were provided by Stich ( 2019 ) , and more recently by Stich and Karimireddy ( 2019 ) , Patel and Dieuleveut ( 2019 ) , Khaled et al . ( 2020 ) , and Woodworth et al . ( 2020b ) , for identical functions . Their analysis was extended to heterogeneous clients in ( Wang et al. , 2019 ; Yu et al. , 2019b ; Karimireddy et al. , 2020 ; Khaled et al. , 2020 ; Koloskova et al. , 2020 ) . Charles and Konečnỳ ( 2020 ) derived a tight characterization of FedAvg with quadratic functions and demonstrated the sensitivity of the algorithm to both client and server step sizes . Matching upper and lower bounds were recently given by Karimireddy et al . ( 2020 ) and Woodworth et al . ( 2020a ) for general functions , proving that FEDAVG can be slower than even SGD for heterogeneous data , due to the client-drift . Comparison to SCAFFOLD : For the cross-silo setting where the number of clients is relatively low , Karimireddy et al . ( 2020 ) proposed the SCAFFOLD algorithm , which uses control-variates ( similar to SVRG ) to correct for client drift . However , their algorithm crucially relies on stateful clients which repeatedly participate in the training process . In contrast , we focus on the cross-device setting where clients may be visited only once during training and where they are stateless . This is akin to the difference between the finite-sum and stochastic settings in traditional centralized optimization . Improvements to FedAvg : Hsu et al . ( 2019 ) and Wang et al . ( 2020c ) observed that using server momentum significantly improves over vanilla FEDAVG . This idea was generalized by Reddi et al . ( 2020 ) , who replaced the server update with an arbitrary optimizer , e.g . Adam . However , these methods only modify the server update while using SGD for the client updates . MIME , on the other hand , ensures that every local client update resembles the optimizer e.g . MIME would apply momentum in every client update and not just at the server level . Beyond this , Li et al . ( 2018 ) proposed to add a regularizer to ensure client updates remain close . However , its usefulness is unclear ( cf . Fig . 5 , Karimireddy et al. , 2020 ; Wang et al. , 2020b ) . Other orthogonal directions which can be combined with MIME include tackling computation heterogeneity , where some clients perform many more updates than others ( Wang et al. , 2020b ) , improving fairness by modifying the objective ( Mohri et al. , 2019 ; Li et al. , 2019 ) , incorporating differential privacy ( Geyer et al. , 2017 ; Agarwal et al. 
, 2018; Thakkar et al., 2020), Byzantine adversaries (Pillutla et al., 2019; Wang et al., 2020a; He et al., 2020a), secure aggregation (Bonawitz et al., 2017; He et al., 2020b), etc. We refer the reader to the extensive survey by Kairouz et al. (2019) for additional discussion. 2 PROBLEM SETUP. This section formalizes the problem of cross-device federated learning. We first examine some key challenges of this setting (cf. Kairouz et al., 2019) to ensure our formalism captures the difficulty: 1. Communication cost between the server and the clients is a major concern and the source of the bottleneck in federated learning; thus, a key metric for optimization in this setting is the number of communication rounds. 2. Each client is likely to participate at most once, due to the extremely large number of clients; furthermore, each individual client may have very little data of its own. 3. There may be wide heterogeneity or non-i.i.d.-ness due to the difference in data distributions across clients. Thus, our objective will be to minimize the following quantity within the fewest number of client-server communication rounds:
$$f(x) = \mathbb{E}_{i \sim \mathcal{D}}\Big[ f_i(x) := \frac{1}{n_i} \sum_{\nu=1}^{n_i} f_i(x; \zeta_{i,\nu}) \Big]. \qquad (1)$$
Here, $f_i$ denotes the loss function of client $i$ and $\{\zeta_{i,1}, \ldots, \zeta_{i,n_i}\}$ its local data. Since the number of clients is extremely large, while the size of each local dataset is rather modest, we represent the former as an expectation and the latter as a finite sum. In each round, the algorithm samples a subset of clients (of size $S$) and performs some updates to the server model. There is some inherent tension between the second and the third challenge outlined above: if there exists a client with arbitrarily different data whom we may never encounter during training, then there is no hope of actually minimizing $f$. Thus, for (1) to be tractable, it is necessary to assume bounded dissimilarity between the different $f_i$. (A1) $G^2$-BGD or bounded gradient dissimilarity: there exists $G \geq 0$ such that $\mathbb{E}_{i \sim \mathcal{D}}[\|\nabla f_i(x) - \nabla f(x)\|^2] \leq G^2$, $\forall x$. Next, we also characterize the variance in the Hessians. Note that if $f_i(\cdot;\zeta)$ is $L$-smooth, (A2) is always satisfied with $\delta \leq 2L$ and hence is more of a definition than an assumption. Note, however, that in realistic examples we expect the clients to be similar and hence that $\delta \ll L$. (A2) $\delta$-BHD or bounded Hessian dissimilarity: Almost surely, $f$ is $\delta$-weakly convex, i.e., $\nabla^2 f_i(x) \succeq -\delta I$, and the loss function of any client $i$ satisfies $\|\nabla^2 f_i(x;\zeta) - \nabla^2 f(x)\| \leq \delta$, $\forall x$. In addition, we assume that $f(x)$ is bounded from below by $f^\star$ and is $L$-smooth, as is standard. 3 USING MOMENTUM TO REDUCE CLIENT DRIFT. In this section, we examine the tension between reducing communication by running multiple client updates each round and the degradation in performance due to client drift (Karimireddy et al., 2020). To simplify the discussion, we assume a single client is sampled each round and that clients use full-batch gradients. Server-only approach. A simple way to avoid the issue of client drift is to take no local steps. We sample a client $i \sim \mathcal{D}$ and run SGDm with momentum parameter $\beta$ and step size $\eta$:
$$x_t = x_{t-1} - \eta\big((1 - \beta)\nabla f_i(x_{t-1}) + \beta m_{t-1}\big), \qquad m_t = (1 - \beta)\nabla f_i(x_{t-1}) + \beta m_{t-1}. \qquad (2)$$
Here, the gradient $\nabla f_i(x_t)$ is unbiased, i.e., $\mathbb{E}[\nabla f_i(x_t)] = \nabla f(x_t)$, and hence we are guaranteed convergence.
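For concreteness, one round of the server-only baseline in Equation 2 looks as follows; `grad_fi` stands for the sampled client's full-batch gradient oracle, and the names and defaults are illustrative rather than taken from the paper.

```python
import numpy as np

def server_only_round(x, m, grad_fi, eta=0.1, beta=0.9):
    """One round of the server-only baseline (2): sample a client, then take a
    single SGD-with-momentum step at the server. The same convex combination
    of the fresh client gradient and the old momentum serves both as the
    update direction and as the new momentum."""
    g = grad_fi(x)                        # unbiased: E[g] = grad f(x)
    m_new = (1.0 - beta) * g + beta * m
    x_new = x - eta * m_new
    return x_new, m_new
```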
However, this strategy can be communication-intensive, and we are likely to spend all our time waiting for communication with very little time spent on computing the gradients. FedAvg approach. To reduce the overall number of communication rounds required, we need to make more progress in each round of communication. Starting from $y_0 = x_{t-1}$, FEDAVG (McMahan et al., 2017) runs multiple SGD steps on the sampled client $i \sim \mathcal{D}$:
$$y_k = y_{k-1} - \eta \nabla f_i(y_{k-1}) \quad \text{for } k \in [K], \qquad (3)$$
and then a pseudo-gradient $\tilde{g}_t = -(y_K - x_t)$ replaces $\nabla f_i(x_{t-1})$ in the SGDm algorithm (2). This is referred to as server momentum since it is computed and applied only at the server level (Hsu et al., 2019). However, such updates give rise to client drift, resulting in performance worse than the naive server-only strategy (2). This is because, by using multiple local updates, (3) starts overfitting to the local client data, optimizing $f_i(x)$ instead of the actual global objective $f(x)$. The net effect is that FEDAVG moves towards an incorrect point (see Fig 1, left). (Figure 1 caption: ... which can be quite different from the true global optimum $x^\star$. Server momentum only speeds up the convergence to the wrong point in this case. In contrast, MIME uses unbiased momentum and applies it locally at every update. This keeps the updates of MIME closer to the true optimum $x^\star$.) If $K$ is sufficiently large, then approximately $y_K \approx x^\star_i$, where $x^\star_i := \arg\min_x f_i(x)$, so that $\mathbb{E}_{i \sim \mathcal{D}}[\tilde{g}_t] \approx (x_t - \mathbb{E}_{i \sim \mathcal{D}}[x^\star_i])$. Further, the server momentum is based on $\tilde{g}_t$ and hence is also biased. Thus, it cannot correct for the client drift. We next see how a different way of using momentum can mitigate client drift. Mime approach. FEDAVG experiences client drift because both the momentum and the client updates are biased. To fix the former, we compute momentum using only global statistics, as in (2):
$$m_t = (1 - \beta)\nabla f_i(x_{t-1}) + \beta m_{t-1}. \qquad (4)$$
To reduce the bias in the local updates, we apply this unbiased momentum at every step:
$$y_k = y_{k-1} - \eta\big((1 - \beta)\nabla f_i(y_{k-1}) + \beta m_{t-1}\big) \quad \text{for } k \in [K]. \qquad (5)$$
Note that the momentum term is kept fixed during the local updates, i.e., there is no local momentum; only the global momentum is applied locally. Since $m_{t-1}$ is a moving average of unbiased gradients computed over multiple clients, it is intuitively a good approximation of the general direction of the updates. By taking a convex combination of the local gradient with $m_{t-1}$, the update (5) is potentially also less biased. In this way MIME combines the communication benefits of taking multiple local steps and prevents client drift (see Fig 1, right). Sec. C makes this intuition precise. | This paper proposes a new framework for solving federated learning. The authors consider a specific setting in which there are many clients, and each client is allowed to compute the full gradient. The authors claim that the current setting's main issue is client drift, and that the proposed framework can reduce this issue and thus achieve a faster convergence rate. Here are my main concerns about the current paper: | SP:7ec1aeb5e1e9e0ef6759fa1d57de00d2170526c8
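The contrast between the FedAvg local steps (3) and the Mime local steps (5) is easiest to see in code. The sketch below mirrors the equations directly; function and variable names are illustrative, not from the paper's released implementation.

```python
import numpy as np

def mime_local_updates(x_server, m_server, grad_fi, eta, beta, K):
    """K local client steps following (5): the globally computed momentum
    m_server is applied at every local step but never updated locally."""
    y = x_server.copy()
    for _ in range(K):
        y = y - eta * ((1.0 - beta) * grad_fi(y) + beta * m_server)
    return y

def fedavg_local_updates(x_server, grad_fi, eta, K):
    """Plain FedAvg local steps (3), for contrast: no correction, so the
    iterate drifts towards the client's own minimiser when K is large."""
    y = x_server.copy()
    for _ in range(K):
        y = y - eta * grad_fi(y)
    return y

# After the local steps, the server refreshes the global momentum as in (4):
#   m_new = (1 - beta) * grad_fi(x_server) + beta * m_server
```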
Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning | The paper proposes a way to apply various variance-reduction/momentum-based methods to the federated learning scenario, especially when there is distribution drift among the clients. The main claim of this paper is that the global statistics (momentum, control variates, etc.) should be updated at the server side only, which is helpful to reduce the bias of these terms. The paper also provides convergence analysis for their methods and attains the best convergence result for their MimeMVR method. | SP:7ec1aeb5e1e9e0ef6759fa1d57de00d2170526c8
DQSGD: DYNAMIC QUANTIZED STOCHASTIC GRADIENT DESCENT FOR COMMUNICATION-EFFICIENT DISTRIBUTED LEARNING | 1 INTRODUCTION . Recently , with the booming of Artificial Intelligence ( AI ) , 5G wireless communications , and CyberPhysical Systems ( CPS ) , distributed learning plays an increasingly important role in improving the efficiency and accuracy of learning , scaling to a large input data size , and bridging different wireless computing resources ( Dean et al. , 2012 ; Bekkerman et al. , 2011 ; Chilimbi et al. , 2014 ; Chaturapruek et al. , 2015 ; Zhu et al. , 2020 ; Mills et al. , 2019 ) . Distributed Stochastic Gradient Descent ( SGD ) is the core in a vast majority of distributed learning algorithms ( e.g. , various distributed deep neural networks ) , where distributed nodes calculate local gradients and an aggregated gradient is achieved via communication among distributed nodes and/or a parameter server . However , due to limited bandwidth in practical networks , communication overhead for transferring gradients often becomes the performance bottleneck . Several approaches towards communicationefficient distributed learning have been proposed , including compressing gradients ( Stich et al. , 2018 ; Alistarh et al. , 2017 ) or updating local models less frequently ( McMahan et al. , 2017 ) . Gradient quantization reduces the communication overhead by using few bits to approximate the original real value , which is considered to be one of the most effective approaches to reduce communication overhead ( Seide et al. , 2014 ; Alistarh et al. , 2017 ; Bernstein et al. , 2018 ; Wu et al. , 2018 ; Suresh et al. , 2017 ) . The lossy quantization inevitably brings in gradient noise , which will affect the convergence of the model . Hence , a key question is how to effectively select the number of quantization bits to balance the trade-off between the communication cost and its convergence performance . Existing algorithms often quantize parameters into a fixed number of bits , which is shown to be inefficient in balancing the communication-convergence trade-off ( Seide et al. , 2014 ; Alistarh et al. , 2017 ; Bernstein et al. , 2018 ) . An efficient scheme should be able to dynamically adjust the number of quantized bits according to the state of current learning model in each gradient descent step to balance the communication overhead and model accuracy . Several studies try to construct adaptive quantization schemes through design heuristics and/or empirical evidence . However , they do not come up with a solid theoretical analysis ( Guo et al. , 2020 ; Cui et al. , 2018 ; Oland & Raj , 2015 ) , which even results in contradicted conclusions . More specifically , MQGrad ( Cui et al. , 2018 ) and AdaQS ( Guo et al. , 2020 ) suggest using few quantization bits in early epochs and gradually increase the number of bits in later epochs ; while the scheme proposed by Anders ( Oland & Raj , 2015 ) states that more quantization bits should be used for the gradient with larger root-mean-squared ( RMS ) value , choosing to use more bits in the early training stage and fewer bits in the later stage . One of this paper ’ s key contributions is to develop a theoretical framework to crystallize the design tradeoff in dynamic gradient quantization and settle this contradiction . In this paper , we propose a novel dynamic quantized SGD ( DQSGD ) framework for minimizing communication overhead in distributed learning while maintaining the desired learning accuracy . 
We study this dynamic quantization problem in both the strongly convex and the non-convex optimization frameworks . In the strongly convex optimization framework , we first derive an upper bound on the difference ( that we term the strongly convex convergence error ) between the loss after N iterations and the optimal loss to characterize the strongly convex convergence error caused by sampling , limited iteration steps , and quantization . In addition , we find some particular cases and prove the tightness for this upper bound on part of the convergence error caused by quantization . In the non-convex optimization framework , we derive an upper bound on the mean square of gradient norms at every iteration step , which is termed the non-convex convergence error . Based on the above theoretical analysis , we design a dynamic quantization algorithm by minimizing the strongly convex/non-convex convergence error bound under communication cost constraints . Our dynamic quantization algorithm is able to adjust the number of quantization bits adaptively by taking into account the norm of gradients , the communication budget , and the remaining number of iterations . We validate our theoretical analysis through extensive experiments on large-scale Computer Vision ( CV ) and Natural Language Processing ( NLP ) tasks , including image classification tasks on CIFAR-10 and CIFAR-100 and text classification tasks on AG-News . Numerical results show that our proposed DQSGD significantly outperforms the baseline quantization methods . To summarize , our key contributions are as follows : • We propose a novel framework to characterize the trade-off between communication cost and modeling error by dynamically quantizing gradients in the distributed learning . •We derive an upper bound on the convergence error for strongly convex objectives and non-convex objectives . The upper bound is shown to be optimal in particular cases . • We develop a dynamic quantization SGD strategy , which is shown to achieve a smaller convergence error upper bound compared with fixed-bit quantization methods . •We validate the proposed DQSGD on a variety of real world datasets and machine learning models , demonstrating that our proposed DQSGD significantly outperforms state-of-the-art gradient quantization methods in terms of mitigating communication costs . 2 RELATED WORK . To solve large scale machine learning problems , distributed SGD methods have attracted a wide attention ( Dean et al. , 2012 ; Bekkerman et al. , 2011 ; Chilimbi et al. , 2014 ; Chaturapruek et al. , 2015 ) . To mitigate the communication bottleneck in distributed SGD , gradient quantization has been investigated . 1BitSGD uses 1 bit to quantize each dimension of the gradients and achieves the desired goal in speech recognition applications ( Seide et al. , 2014 ) . TernGrad quantizes gradients to ternary levels { −1 , 0 , 1 } to reduce the communication overhead ( Wen et al. , 2017 ) . Furthermore , QSGD is considered in a family of compression schemes that use a fixed number of bits to quantize gradients , allowing the user to smoothly trade-off communication and convergence time ( Alistarh et al. , 2017 ) . However , these fixed-bit quantization methods may not be efficient in communication . To further reduce the communication overhead , some empirical studies began to dynamically adjust the quantization bits according to current model parameters in the training process , such as the gradient ’ s mean to standard deviation ratio ( Guo et al. 
, 2020 ) , the training loss ( Cui et al. , 2018 ) , gradient ’ s root-mean-squared value ( Oland & Raj , 2015 ) . Though these empirical heuristics of adaptive quan- tization methods show good performance in some certain tasks , their imprecise conjectures and the lack of theoretical guidelines in the conjecture framework have limited their generalization to a broad range of machine learning models/tasks . 3 PROBLEM FORMULATION . We consider to minimize the objective function F : Rd → R with parameter x minx∈Rd F ( x ) = Eξ∼D [ l ( x ; ξ ) ] , ( 1 ) where the data point ξ is generated from an unknown distribution D , and a loss function l ( x ; ξ ) measures the loss of the model x at data point ξ . Vanilla gradient descent ( GD ) will solve this problem by updating model parameters via iterations x ( n+1 ) = x ( n ) − η∇F ( x ( n ) ) , where x ( n ) is the model parameter at iteration n ; η is the learning rate ; ∇F ( x ( n ) ) is the gradient of F ( x ( n ) ) . A modification to the GD scheme , minibatch SGD , uses mini-batches of random samples with size K , AK = { ξ0 , ... , ξK−1 } , to calculate the stochastic gradient g ( x ) = 1/K ∑K−1 i=0 ∇l ( x ; ξi ) . In distributed learning , to reduce the communication overhead , we consider to quantize the minibach stochastic gradients : x ( n+1 ) = x ( n ) − ηQsn [ g ( x ( n ) ) ] , ( 2 ) where Qsn [ · ] is the quantization operation that works on each dimension of g ( x ( n ) ) . The i-th component of the stochastic gradient vector g is quantized as Qs ( gi ) = ‖g‖p · sgn ( gi ) · ζ ( gi , s ) , ( 3 ) where ‖g‖p is the lp norm of g ; sgn ( gi ) = { +1 , −1 } is the sign of gi ; s is the quantization level ; and ζ ( gi , s ) is an unbiased stochastic function that maps scalar |gi|/‖g‖p to one of the values in set { 0 , 1/s , 2/s , . . . , s/s } : if |gi|/‖g‖p ∈ [ l/s , ( l + 1 ) /s ] , we have ζ ( gi , s ) = l/s , with probability 1− p , ( l + 1 ) /s , with probability p = s |gi| ‖g‖p − l. ( 4 ) Note that , the quantization level is roughly exponential to the number of quantized bits . If we use B bits to quantize gi , we will use one bit to represent its sign and the other B − 1 bits to represent ζ ( gi , s ) , thus resulting in a quantization level s = 2B−1 − 1 . In total , we use Bpre + dB bits for the gradient quantization at each iteration : a certain number of Bpre bits of precision to construct ‖g‖p and dB bits to express the d components of g. Given a total number of training iterations N and the overall communication budget C to upload all stochastic gradients , we would like to design a gradient quantization scheme to maximize the learning performance . To measure the learning performance under gradient quantization , we follow the commonly adopted convex/non-convex-convergence error δ ( F , N , C ) ( Alistarh et al. , 2017 ) : δ ( F , N , C ) = F ( x ( N ) , C ) − F ( x∗ , C ) , for strongly convex F , 1 N ∑N−1 n=0 ‖∇F ( x ( n ) ) ‖22 , for non-convex F , ( 5 ) where x∗ is the optimal point to minimize F . In general , this error δ ( F , N , C ) is hard to determine ; instead , we aim to lower and upper bound this error and design corresponding quantization schemes . 4 DYNAMIC QUANTIZED SGD . In this part , we derive upper bounds on the strongly convex/non-convex convergence error δ ( F , N , C ) and lower bounds on the strongly convex-convergence error . By minimizing the upper bound on this convergence error , we propose the dynamic quantized SGD strategies for strongly convex and non-convex objective functions . 
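The stochastic quantizer of Equations 3–4 can be implemented in a few lines. The NumPy sketch below is a plain reading of those equations (one sign bit plus $B-1$ bits per coordinate, so $s = 2^{B-1} - 1$ levels); it is not the authors' released code, and the defaults are illustrative.

```python
import numpy as np

def quantize(g, bits, p_norm=2, rng=np.random.default_rng()):
    """Stochastic uniform quantizer of (3)-(4): unbiased stochastic rounding of
    |g_i| / ||g||_p onto the grid {0, 1/s, ..., 1} with s = 2^(bits-1) - 1."""
    s = 2 ** (bits - 1) - 1
    norm = np.linalg.norm(g, ord=p_norm)
    if norm == 0.0:
        return np.zeros_like(g)
    r = np.abs(g) / norm * s              # lies in [0, s]
    low = np.floor(r)
    prob = r - low                        # P(round up), as in (4)
    zeta = (low + (rng.random(g.shape) < prob)) / s
    return norm * np.sign(g) * zeta       # unbiased: E[quantize(g)] = g

g = np.random.randn(1000)
q = quantize(g, bits=8)
print(np.linalg.norm(q - g) / np.linalg.norm(g))   # roughly sqrt(d / (6 s^2)) here
```

With $B$ bits per coordinate, a $d$-dimensional gradient costs roughly $B_{pre} + dB$ bits per round, which is the budget that the dynamic scheme of Section 4 allocates across iterations.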
4.1 PRELIMINARIES. We first state some assumptions as follows. Assumption 1 (Smoothness). The objective function $F(x)$ is $L$-smooth if, $\forall x, y \in \mathbb{R}^d$, $\|\nabla F(x) - \nabla F(y)\|_2 \leq L\|x - y\|_2$. This implies that, $\forall x, y \in \mathbb{R}^d$, we have
$$F(y) \leq F(x) + \nabla F(x)^T (y - x) + \frac{L}{2}\|y - x\|_2^2 \qquad (6)$$
$$\|\nabla F(x)\|_2^2 \leq 2L\,[F(x) - F(x^*)] \qquad (7)$$
Assumption 2 (Strong convexity). The objective function $F(x)$ is $\mu$-strongly convex if there exists $\mu > 0$ such that $F(x) - \frac{\mu}{2} x^T x$ is a convex function. From Assumption 2, we have, $\forall x, y \in \mathbb{R}^d$,
$$F(y) \geq F(x) + \nabla F(x)^T (y - x) + \frac{\mu}{2}\|y - x\|_2^2 \qquad (8)$$
Assumption 3 (Variance bound). The stochastic gradient oracle gives us an independent unbiased estimate $\nabla l(x;\xi)$ with bounded variance:
$$\mathbb{E}_{\xi \sim D}[\nabla l(x;\xi)] = \nabla F(x), \qquad (9)$$
$$\mathbb{E}_{\xi \sim D}[\|\nabla l(x;\xi) - \nabla F(x)\|_2^2] \leq \sigma^2. \qquad (10)$$
From Assumption 3, for the minibatch stochastic gradient $g(x) = \frac{1}{K}\sum_{i=0}^{K-1} \nabla l(x;\xi_i)$, we have
$$\mathbb{E}_{\xi \sim D}[g(x)] = \nabla F(x) \qquad (11)$$
$$\mathbb{E}_{\xi \sim D}[\|g(x;\xi)\|_2^2] \leq \|\nabla F(x)\|_2^2 + \sigma^2/K. \qquad (12)$$
We have the following relationship between the gradients before and after quantization: $Q_s[g(x)] = g(x) + \hat{\epsilon}$, where $\hat{\epsilon}$ represents the quantization noise, which follows the probability distribution given in Proposition 1. The proof of Proposition 1 is given in Appendix A. Proposition 1 (Quantization noise magnitude). For the stochastic gradient vector $g$, if the quantization level is $s$, then the $i$-th component of the quantization noise is distributed as:
$$p(\hat{\epsilon}_i) = \begin{cases} \dfrac{s}{\|g\|_p} - \dfrac{s^2}{\|g\|_p^2}\,\hat{\epsilon}_i, & 0 < \hat{\epsilon}_i \leq \dfrac{\|g\|_p}{s}, \\[2mm] \dfrac{s}{\|g\|_p} + \dfrac{s^2}{\|g\|_p^2}\,\hat{\epsilon}_i, & -\dfrac{\|g\|_p}{s} \leq \hat{\epsilon}_i \leq 0. \end{cases} \qquad (13)$$
Following Proposition 1, we get $\mathbb{E}_{\hat{\epsilon}}[Q_s[g]] = g$ and $\mathbb{E}_{\hat{\epsilon}}[\|Q_s[g] - g\|_2^2] = \frac{d}{6s^2}\|g\|_p^2$. This indicates that the quantization operation is unbiased and that the variance bound of $Q_s[g]$ is directly proportional to $\|g\|_p^2$ and inversely proportional to $s^2$, which means that gradients with a larger norm should be quantized using more bits to keep $\mathbb{E}[\|Q_s[g] - g\|_2^2]$ below a given noise level. Therefore, we have the following lemma to characterize the quantization noise of $Q_s[g]$. Lemma 1. For the quantized gradient vector $Q_s[g]$, we have
$$\mathbb{E}[Q_s[g]] = \nabla F(x) \qquad (14)$$
$$\mathbb{E}[\|Q_s[g]\|_2^2] \leq \|\nabla F(x)\|_2^2 + \frac{\sigma^2}{K} + \frac{d}{6s^2}\|g\|_p^2 \qquad (15)$$
We can see that the noise variance of $Q_s[g]$ contains two parts: the first is the sampling noise $\frac{\sigma^2}{K}$, and the second is the quantization noise $\frac{d}{6s^2}\|g\|_p^2$. | This paper proposed an adaptive quantization method that is derived by minimizing a constrained quantization error bound. The theoretical analysis suggests adjusting the quantization level according to the gradient norm, the convergence rate of the model, and the current iteration number. Theoretical results show that dynamic bit allocation leads to a better error bound than fixed bits. The result is intuitive. Overall, the paper is clearly written. But the improvement is not significant enough to warrant a publication at ICLR. | SP:9378bcacf7befab93b6850366fea16d477c01dc6
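Lemma 1's quantization-noise term suggests a simple rule of thumb for picking the number of bits: choose the smallest level $s$ such that $d\|g\|_p^2/(6s^2)$ stays below a target noise level. The sketch below implements only this rule of thumb; DQSGD's actual allocation additionally accounts for the communication budget and the number of remaining iterations, so this is an illustration rather than the paper's algorithm, and all names are assumptions.

```python
import numpy as np

def bits_for_noise_level(g, eps, p_norm=2):
    """Smallest bits-per-coordinate B such that the quantization-noise term of
    Lemma 1, d * ||g||_p^2 / (6 s^2), stays below eps, using s = 2^(B-1) - 1.
    Solving d ||g||^2 / (6 s^2) <= eps gives s >= ||g|| * sqrt(d / (6 eps))."""
    d = g.size
    norm = np.linalg.norm(g, ord=p_norm)
    s_min = norm * np.sqrt(d / (6.0 * eps))            # required quantization level
    return max(2, int(np.ceil(np.log2(s_min + 1.0))) + 1)

g_early = 5.0 * np.random.randn(10_000)   # large gradient norm early in training
g_late = 0.1 * np.random.randn(10_000)    # small gradient norm near convergence
print(bits_for_noise_level(g_early, eps=1.0),
      bits_for_noise_level(g_late, eps=1.0))           # more bits early, fewer late
```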
DQSGD: DYNAMIC QUANTIZED STOCHASTIC GRADIENT DESCENT FOR COMMUNICATION-EFFICIENT DISTRIBUTED LEARNING | 1 INTRODUCTION . Recently , with the booming of Artificial Intelligence ( AI ) , 5G wireless communications , and CyberPhysical Systems ( CPS ) , distributed learning plays an increasingly important role in improving the efficiency and accuracy of learning , scaling to a large input data size , and bridging different wireless computing resources ( Dean et al. , 2012 ; Bekkerman et al. , 2011 ; Chilimbi et al. , 2014 ; Chaturapruek et al. , 2015 ; Zhu et al. , 2020 ; Mills et al. , 2019 ) . Distributed Stochastic Gradient Descent ( SGD ) is the core in a vast majority of distributed learning algorithms ( e.g. , various distributed deep neural networks ) , where distributed nodes calculate local gradients and an aggregated gradient is achieved via communication among distributed nodes and/or a parameter server . However , due to limited bandwidth in practical networks , communication overhead for transferring gradients often becomes the performance bottleneck . Several approaches towards communicationefficient distributed learning have been proposed , including compressing gradients ( Stich et al. , 2018 ; Alistarh et al. , 2017 ) or updating local models less frequently ( McMahan et al. , 2017 ) . Gradient quantization reduces the communication overhead by using few bits to approximate the original real value , which is considered to be one of the most effective approaches to reduce communication overhead ( Seide et al. , 2014 ; Alistarh et al. , 2017 ; Bernstein et al. , 2018 ; Wu et al. , 2018 ; Suresh et al. , 2017 ) . The lossy quantization inevitably brings in gradient noise , which will affect the convergence of the model . Hence , a key question is how to effectively select the number of quantization bits to balance the trade-off between the communication cost and its convergence performance . Existing algorithms often quantize parameters into a fixed number of bits , which is shown to be inefficient in balancing the communication-convergence trade-off ( Seide et al. , 2014 ; Alistarh et al. , 2017 ; Bernstein et al. , 2018 ) . An efficient scheme should be able to dynamically adjust the number of quantized bits according to the state of current learning model in each gradient descent step to balance the communication overhead and model accuracy . Several studies try to construct adaptive quantization schemes through design heuristics and/or empirical evidence . However , they do not come up with a solid theoretical analysis ( Guo et al. , 2020 ; Cui et al. , 2018 ; Oland & Raj , 2015 ) , which even results in contradicted conclusions . More specifically , MQGrad ( Cui et al. , 2018 ) and AdaQS ( Guo et al. , 2020 ) suggest using few quantization bits in early epochs and gradually increase the number of bits in later epochs ; while the scheme proposed by Anders ( Oland & Raj , 2015 ) states that more quantization bits should be used for the gradient with larger root-mean-squared ( RMS ) value , choosing to use more bits in the early training stage and fewer bits in the later stage . One of this paper ’ s key contributions is to develop a theoretical framework to crystallize the design tradeoff in dynamic gradient quantization and settle this contradiction . In this paper , we propose a novel dynamic quantized SGD ( DQSGD ) framework for minimizing communication overhead in distributed learning while maintaining the desired learning accuracy . 
We study this dynamic quantization problem in both the strongly convex and the non-convex optimization frameworks . In the strongly convex optimization framework , we first derive an upper bound on the difference ( that we term the strongly convex convergence error ) between the loss after N iterations and the optimal loss to characterize the strongly convex convergence error caused by sampling , limited iteration steps , and quantization . In addition , we find some particular cases and prove the tightness for this upper bound on part of the convergence error caused by quantization . In the non-convex optimization framework , we derive an upper bound on the mean square of gradient norms at every iteration step , which is termed the non-convex convergence error . Based on the above theoretical analysis , we design a dynamic quantization algorithm by minimizing the strongly convex/non-convex convergence error bound under communication cost constraints . Our dynamic quantization algorithm is able to adjust the number of quantization bits adaptively by taking into account the norm of gradients , the communication budget , and the remaining number of iterations . We validate our theoretical analysis through extensive experiments on large-scale Computer Vision ( CV ) and Natural Language Processing ( NLP ) tasks , including image classification tasks on CIFAR-10 and CIFAR-100 and text classification tasks on AG-News . Numerical results show that our proposed DQSGD significantly outperforms the baseline quantization methods . To summarize , our key contributions are as follows : • We propose a novel framework to characterize the trade-off between communication cost and modeling error by dynamically quantizing gradients in the distributed learning . •We derive an upper bound on the convergence error for strongly convex objectives and non-convex objectives . The upper bound is shown to be optimal in particular cases . • We develop a dynamic quantization SGD strategy , which is shown to achieve a smaller convergence error upper bound compared with fixed-bit quantization methods . •We validate the proposed DQSGD on a variety of real world datasets and machine learning models , demonstrating that our proposed DQSGD significantly outperforms state-of-the-art gradient quantization methods in terms of mitigating communication costs . 2 RELATED WORK . To solve large scale machine learning problems , distributed SGD methods have attracted a wide attention ( Dean et al. , 2012 ; Bekkerman et al. , 2011 ; Chilimbi et al. , 2014 ; Chaturapruek et al. , 2015 ) . To mitigate the communication bottleneck in distributed SGD , gradient quantization has been investigated . 1BitSGD uses 1 bit to quantize each dimension of the gradients and achieves the desired goal in speech recognition applications ( Seide et al. , 2014 ) . TernGrad quantizes gradients to ternary levels { −1 , 0 , 1 } to reduce the communication overhead ( Wen et al. , 2017 ) . Furthermore , QSGD is considered in a family of compression schemes that use a fixed number of bits to quantize gradients , allowing the user to smoothly trade-off communication and convergence time ( Alistarh et al. , 2017 ) . However , these fixed-bit quantization methods may not be efficient in communication . To further reduce the communication overhead , some empirical studies began to dynamically adjust the quantization bits according to current model parameters in the training process , such as the gradient ’ s mean to standard deviation ratio ( Guo et al. 
, 2020 ) , the training loss ( Cui et al. , 2018 ) , or the gradient ’ s root-mean-squared value ( Oland & Raj , 2015 ) . Though these empirical heuristics for adaptive quantization show good performance on certain tasks , their imprecise conjectures and the lack of theoretical guidelines have limited their generalization to a broad range of machine learning models and tasks .

3 PROBLEM FORMULATION . We consider minimizing the objective function $F : \mathbb{R}^d \to \mathbb{R}$ with parameter x ,

$$\min_{x \in \mathbb{R}^d} F(x) = \mathbb{E}_{\xi \sim \mathcal{D}}\left[ l(x;\xi) \right], \qquad (1)$$

where the data point ξ is generated from an unknown distribution D , and the loss function l ( x ; ξ ) measures the loss of the model x at data point ξ . Vanilla gradient descent ( GD ) solves this problem by updating the model parameters via the iteration $x^{(n+1)} = x^{(n)} - \eta \nabla F(x^{(n)})$ , where $x^{(n)}$ is the model parameter at iteration n , η is the learning rate , and $\nabla F(x^{(n)})$ is the gradient of $F(x^{(n)})$ . A modification of the GD scheme , minibatch SGD , uses mini-batches of random samples of size K , $A_K = \{ \xi_0, \ldots, \xi_{K-1} \}$ , to calculate the stochastic gradient $g(x) = \frac{1}{K} \sum_{i=0}^{K-1} \nabla l(x;\xi_i)$ . In distributed learning , to reduce the communication overhead , we quantize the minibatch stochastic gradients :

$$x^{(n+1)} = x^{(n)} - \eta\, Q_{s_n}\big[ g(x^{(n)}) \big], \qquad (2)$$

where $Q_{s_n}[\cdot]$ is the quantization operation applied to each dimension of $g(x^{(n)})$ . The i-th component of the stochastic gradient vector g is quantized as

$$Q_s(g_i) = \|g\|_p \cdot \mathrm{sgn}(g_i) \cdot \zeta(g_i, s), \qquad (3)$$

where $\|g\|_p$ is the $l_p$ norm of g ; $\mathrm{sgn}(g_i) \in \{+1, -1\}$ is the sign of $g_i$ ; s is the quantization level ; and $\zeta(g_i, s)$ is an unbiased stochastic function that maps the scalar $|g_i| / \|g\|_p$ to one of the values in the set $\{ 0, 1/s, 2/s, \ldots, s/s \}$ : if $|g_i| / \|g\|_p \in [\, l/s, (l+1)/s \,]$ , then

$$\zeta(g_i, s) = \begin{cases} l/s , & \text{with probability } 1 - p ,\\ (l+1)/s , & \text{with probability } p = s \dfrac{|g_i|}{\|g\|_p} - l . \end{cases} \qquad (4)$$

Note that the quantization level is roughly exponential in the number of quantized bits : if we use B bits to quantize $g_i$ , one bit represents the sign and the remaining $B - 1$ bits represent $\zeta(g_i, s)$ , resulting in a quantization level $s = 2^{B-1} - 1$ . In total , we use $B_{pre} + dB$ bits for the gradient quantization at each iteration : $B_{pre}$ bits of precision to encode $\|g\|_p$ and $dB$ bits to express the d components of g. Given a total number of training iterations N and an overall communication budget C to upload all stochastic gradients , we would like to design a gradient quantization scheme that maximizes the learning performance . To measure the learning performance under gradient quantization , we follow the commonly adopted convex/non-convex convergence error δ ( F , N , C ) ( Alistarh et al. , 2017 ) :

$$\delta(F, N, C) = \begin{cases} F(x^{(N)}, C) - F(x^{*}, C), & \text{for strongly convex } F ,\\ \dfrac{1}{N} \sum_{n=0}^{N-1} \|\nabla F(x^{(n)})\|_2^2, & \text{for non-convex } F , \end{cases} \qquad (5)$$

where $x^{*}$ is the optimal point that minimizes F . In general , this error δ ( F , N , C ) is hard to determine ; instead , we aim to lower and upper bound this error and design corresponding quantization schemes .

4 DYNAMIC QUANTIZED SGD . In this part , we derive upper bounds on the strongly convex/non-convex convergence error δ ( F , N , C ) and lower bounds on the strongly convex convergence error . By minimizing the upper bound on this convergence error , we propose dynamic quantized SGD strategies for strongly convex and non-convex objective functions .
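To make the quantizer of equations (3)-(4) concrete, the following is a minimal NumPy sketch of the stochastic quantization operation. It is an illustration written for this note rather than the authors' released code; the function name and defaults are arbitrary, and the l2 norm (p = 2) is used by default.

```python
# Illustrative sketch of the stochastic quantizer Q_s from equations (3)-(4).
import numpy as np

def stochastic_quantize(g, num_bits, p=2):
    """Quantize gradient vector g with B bits per component: 1 sign bit
    and B-1 bits for the level, i.e. s = 2**(B-1) - 1."""
    s = 2 ** (num_bits - 1) - 1
    norm = np.linalg.norm(g, ord=p)
    if norm == 0.0:
        return np.zeros_like(g)
    ratio = np.abs(g) / norm                 # |g_i| / ||g||_p in [0, 1]
    lower = np.floor(ratio * s)              # level index l
    prob = ratio * s - lower                 # p = s * |g_i| / ||g||_p - l
    level = lower + (np.random.rand(*g.shape) < prob)
    return norm * np.sign(g) * level / s     # ||g||_p * sgn(g_i) * zeta(g_i, s)

# Quick check of unbiasedness: the average of many independent quantizations
# should approach the original gradient.
rng = np.random.default_rng(0)
g = rng.normal(size=1000)
avg = np.mean([stochastic_quantize(g, num_bits=4) for _ in range(2000)], axis=0)
print(np.max(np.abs(avg - g)))  # small relative to typical |g_i|, shrinking with more trials
```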
4.1 PRELIMINARIES . We first state some assumptions . Assumption 1 ( Smoothness ) . The objective function F ( x ) is L-smooth if $\forall x, y \in \mathbb{R}^d$ , $\|\nabla F(x) - \nabla F(y)\|_2 \le L \|x - y\|_2$ . This implies that $\forall x, y \in \mathbb{R}^d$ , we have

$$F(y) \le F(x) + \nabla F(x)^T (y - x) + \frac{L}{2} \|y - x\|_2^2, \qquad (6)$$
$$\|\nabla F(x)\|_2^2 \le 2L \left[ F(x) - F(x^*) \right]. \qquad (7)$$

Assumption 2 ( Strong convexity ) . The objective function F ( x ) is µ-strongly convex if there exists µ > 0 such that $F(x) - \frac{\mu}{2} x^T x$ is a convex function . From Assumption 2 , we have , $\forall x, y \in \mathbb{R}^d$ ,

$$F(y) \ge F(x) + \nabla F(x)^T (y - x) + \frac{\mu}{2} \|y - x\|_2^2. \qquad (8)$$

Assumption 3 ( Variance bound ) . The stochastic gradient oracle gives an independent unbiased estimate $\nabla l(x;\xi)$ with bounded variance :

$$\mathbb{E}_{\xi \sim \mathcal{D}}[\nabla l(x;\xi)] = \nabla F(x), \qquad (9)$$
$$\mathbb{E}_{\xi \sim \mathcal{D}}\big[ \|\nabla l(x;\xi) - \nabla F(x)\|_2^2 \big] \le \sigma^2. \qquad (10)$$

From Assumption 3 , for the minibatch stochastic gradient $g(x) = \frac{1}{K} \sum_{i=0}^{K-1} \nabla l(x;\xi_i)$ , we have

$$\mathbb{E}_{\xi \sim \mathcal{D}}[g(x)] = \nabla F(x), \qquad (11)$$
$$\mathbb{E}_{\xi \sim \mathcal{D}}\big[ \|g(x;\xi)\|_2^2 \big] \le \|\nabla F(x)\|_2^2 + \sigma^2 / K. \qquad (12)$$

The gradients before and after quantization are related by $Q_s[g(x)] = g(x) + \hat{\epsilon}$ , where $\hat{\epsilon}$ represents the quantization noise , whose probability distribution is given in Proposition 1 . The proof of Proposition 1 is given in Appendix A . Proposition 1 ( Quantization noise magnitude ) . For the stochastic gradient vector g , if the quantization level is s , then the i-th component of the quantization noise is distributed as

$$p(\hat{\epsilon}_i) = \begin{cases} \dfrac{s}{\|g\|_p} - \dfrac{s^2}{\|g\|_p^2} \, \hat{\epsilon}_i , & 0 < \hat{\epsilon}_i \le \dfrac{\|g\|_p}{s} ,\\[6pt] \dfrac{s}{\|g\|_p} + \dfrac{s^2}{\|g\|_p^2} \, \hat{\epsilon}_i , & -\dfrac{\|g\|_p}{s} \le \hat{\epsilon}_i \le 0 . \end{cases} \qquad (13)$$

Following Proposition 1 , we get $\mathbb{E}_{\hat{\epsilon}}\big[ Q_s[g] \big] = g$ and $\mathbb{E}_{\hat{\epsilon}}\big[ \|Q_s[g] - g\|_2^2 \big] = \frac{d}{6 s^2} \|g\|_p^2$ . This indicates that the quantization operation is unbiased , and that the variance bound of $Q_s[g]$ is directly proportional to $\|g\|_p^2$ and inversely proportional to $s^2$ , which means that gradients with a larger norm should be quantized using more bits to keep $\mathbb{E}\big[ \|Q_s[g] - g\|_2^2 \big]$ below a given noise level . Therefore , we have the following lemma to characterize the quantized gradient $Q_s[g]$ . Lemma 1 . For the quantized gradient vector $Q_s[g]$ , we have

$$\mathbb{E}\big[ Q_s[g] \big] = \nabla F(x), \qquad (14)$$
$$\mathbb{E}\big[ \|Q_s[g]\|_2^2 \big] \le \|\nabla F(x)\|_2^2 + \frac{\sigma^2}{K} + \frac{d}{6 s^2} \|g\|_p^2. \qquad (15)$$

We can see that the noise variance of $Q_s[g]$ contains two parts : the first part is the sampling noise $\frac{\sigma^2}{K}$ , and the second part is the quantization noise $\frac{d}{6 s^2} \|g\|_p^2$ . | 1. The authors considered uniform upper bound of the stochastic gradients g_i. The authors may argue that "The classical theoretical analysis of SGD assumes that the stochastic gradients are uniformly bounded". But one can even strongly argue that this bound is actually $\infty$. Moreover, an even stronger argument can be made that the above assumption is in contrast with strong convexity. Please see ["SGD and Hogwild! Convergence Without the Bounded Gradients Assumption" by Nguyen et al.] as one of the instances. Please understand there are relaxed assumptions such as Strong growth condition on a stochastic gradient as in Assumption 4 of [1]. | SP:9378bcacf7befab93b6850366fea16d477c01dc6
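As a small illustration of how the Lemma 1 bound can guide bit selection, the snippet below picks the smallest quantization level whose variance term d/(6 s^2) ||g||_p^2 stays under a target. This heuristic is derived here only from that single term; the paper's actual DQSGD allocation also accounts for the communication budget and the remaining iterations, which are not reproduced in this sketch.

```python
# Pick the smallest level s (and bit-width B = 1 + ceil(log2(s + 1))) such that
# the quantization variance term d/(6 s^2) * ||g||_p^2 stays below epsilon.
import math
import numpy as np

def bits_for_noise_target(g, epsilon, p=2):
    d = g.size
    norm = np.linalg.norm(g, ord=p)
    s = math.sqrt(d / (6.0 * epsilon)) * norm   # smallest s with d/(6 s^2) ||g||^2 <= eps
    return 1 + max(1, math.ceil(math.log2(s + 1)))

g = np.random.default_rng(0).normal(size=10_000)
for eps in (10.0, 1.0, 0.1):
    print(eps, bits_for_noise_target(g, eps))   # tighter targets (and larger norms) need more bits
```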
Addressing Some Limitations of Transformers with Feedback Memory | 1 INTRODUCTION . In recent years , the Transformer architecture ( Vaswani et al. , 2017 ) has brought large improvements to a wide range of Natural Language Processing tasks such as machine translation , sentence representation ( Devlin et al. , 2019 ) , and summarization ( Edunov et al. , 2019 ) . Transformers are also successfully used as an autoregressive model on sequential tasks such as language modeling ( Dai et al. , 2019 ; Rae et al. , 2020 ) and reinforcement learning ( Parisotto et al. , 2019 ) . Unlike more traditional recurrent architectures such as RNNs and LSTMs , the Transformer architecture processes a sequence in parallel in an order-invariant way . Techniques such as position embeddings ( Sukhbaatar et al. , 2015 ; Shaw et al. , 2018 ) and attention masking are required to capture input order information . In this work , we focus on several limitations of the Transformer architecture as an autoregressive model and present a straightforward solution — Feedback memory . These limitations and our proposed solution target sequential token prediction tasks , such as language modeling or other auto-regressive generative tasks . The feedforward nature of Transformers makes them efficient on modern hardware , but restricts the Transformer from taking full advantage of the input ’ s sequential property . In particular , the current hidden representation of a Transformer only accesses the past representations of lower layers , even though higher level representations of the past have already been computed as an autoregressive model . At generation , the Transformer generates only one token at a time , so it could access these representations for better performance , but does not exploit these at training time due to parallelization . However , if these past higher level representations could be used at training time , they would enrich future lower level representations , enabling shallower models to have the same representation power . Another inherent limitation of Transformers on sequential tasks is the lack of recursive computation ( Dehghani et al. , 2018 ) , and the number of transformations possible on the input is bounded by the model depth . Such disadvantages have impact on tasks that require careful tracking of a world state or modeling hierarchical structures ( Tran et al. , 2018 ; Hahn , 2020 ) . On the other hand , while RNNs can maintain an internal state for an unbounded time while accumulating more computations upon it , the size of this internal state is limited by the dimension of the hidden state . In this work , we propose a novel autoregressive model , the Feedback Transformer , that makes all previous hidden representations accessible to the computation of a representation at any depth — the model feeds back previous computations to itself . The feedback allows the model to perform recursive computation , building stronger representations iteratively upon previous states . To achieve this , we modify self-attention to attend to higher level representations rather than lower ones . As shown in Figure 1 , the Feedback Transformer merges the hidden states from all layers into a single vector for every time step and stores them in a memory . Instead of self-attention , all subsequent layers attend to this memory , which means every previously computed representation is accessible by all future layers , mediated by the memory . 
This allows Feedback Transformers to recursively compute and transform an input as many times as the input length , which is something Transformers can not achieve . While RNNs can perform recursive computation , the amount of information that Feedback Transformers can maintain is not limited by the number of layers . There are computational benefits to this straightforward modification . First , it uses less memory because all the layers share a single Feedback memory , thus reducing the memory size by L times , where L is the number of layers . There is also less computation because we share the key and value projections during attention computation , which increases the speed of the attention over the Feedback Memory . Further , the GPU memory usage is reduced due to the memory sharing — the overall model is 2x smaller — allowing the batch size to be increased for computational efficiency . During inference , the increased batch size contributes to substantially faster decoding speeds . In summary , our main contributions are : ( 1 ) The Feedback Transformer architecture , which completely changes the way a Transformer works to access available higher level representations immediately . ( 2 ) We show the Feedback Transformer can achieve state of the art results with smaller , shallower models that have faster decoding speed and smaller memory footprint . ( 3 ) The Feedback Transformer uses substantially less memory during training and inference time . 2 RELATED WORK . Several previous works have analyzed the limitations of Transformer architectures , such as the inability to process input sequentially ( Dehghani et al. , 2018 ) or represent hierarchical structure ( Tran et al. , 2018 ) . Hahn ( 2020 ) demonstrate that Transformers can not model structures involving bounded recursion , such as closing parentheses . Pérez et al . ( 2019 ) study Transformers in the context of Turing machines , where they must produce unbounded numbers of decoding steps . Various work in probing Transformers identified several limitations where Transformers may not have the computational capacity of recurrent architecture like an LSTM ( Hahn , 2020 ) . From the architectural perspective , our work shares similarities with recurrent networks augmented with external shared memories ( Graves et al. , 2014 ; Joulin & Mikolov , 2015 ; Sukhbaatar et al. , 2015 ) . For example , the stack augmented RNN of Joulin & Mikolov ( 2015 ) adds an external memory to a recurrent network to keep long term dependencies . Closer to our work , the Neural Turing Machine of Graves et al . ( 2014 ) models an unconstrained memory that resembles the self-attention layer of a Transformer . Further improvements to recurrent networks , such as the Gated Feedback RNN ( Chung et al. , 2015 ) , are based on better controlling signal from different layers and extended to feedback through multiple pathways ( Jin et al. , 2017 ) . These works are built on recurrent networks with additional components to store long term dependencies . Other works have studied modifications to the Transformer architecture by enriching its structure with components inspired by recurrent networks . For example , Wang et al . ( 2019 ) propose adding a local recurrent sublayer to the Transformer layer to remove the need for position embeddings in the multi-head self-attention layers . Universal Transformer ( Dehghani et al. , 2018 ) share the parameters between the layers of a Transformer , leading a recurrent network in depth . Hao et al . 
( 2019 ) and Chen et al . ( 2018 ) augment Transformers with a second , recurrent encoder . As opposed to our work , these prior investigations do not change the computational path in a Transformer to reduce the discrepancy between the training and inference time . Closer to our work , Merity ( 2019 ) proposes adding a self-attention layer on top of the past outputs from an LSTM cell . However , this approach keeps the recurrent and the self-attention mechanisms decoupled , as opposed to ours which makes the attention mechanism recurrent . In particular , the LSTM layer of Merity ( 2019 ) still intrinsically has a bottleneck corresponding to the dimension of the hidden layer . 3 METHOD . In this section , we propose the Feedback Transformer , which provides capacity to build richer representations of each timestep t of a sequential modeling task . 3.1 TRANSFORMER ARCHITECTURES . We briefly describe the Transformer ( Vaswani et al. , 2017 ) . Each layer is composed of a multihead self-attention sublayer ( Attn ) followed by a feedforward sublayer ( FF ) , and each sublayer is followed by an add-norm operation that combines a skip-connection ( He et al. , 2016 ) and layer normalization ( Lei Ba et al. , 2016 ) . The l-th layer of a Transformer processes an input sequence of vectors Xl = ( xl1 , . . . , x l t ) into a sequence of vectors of the same length . First , the self-attention sublayer computes a representation for each time step t by taking its related input vector xt along with its past context , { xlt−τ , ... , xlt−1 } : zlt = Attn ( x l t , { xlt−τ , . . . , xlt−1 } ) . Within the self-attention sublayer , xlt is used to form query vectors while its context is used to compute key and value vectors , forming a memory of the past information . Then the feedforward sublayer processes each vector zlt independently , i.e. , x l+1 t = FF ( z l t ) . The Transformer layer transforms its input sequence into an output sequence Xl+1 = FF ( Attn ( Xl ) ) . In practice , a block of steps { xlt−M+1 , . . . , xlt } is computed in parallel during training , where M can be seen as the backpropagation through time ( BPTT ) length . This makes training Transformers efficient on hardware such as GPUs . However , to operate on sequences of unbounded length , Transformers require modifications such as caching and relative position embeddings ( Dai et al. , 2019 ; Sukhbaatar et al. , 2019 ) . 3.2 LIMITATIONS OF TRANSFORMERS . Previous work has analyzed the impact of several limitations of the Transformer architecture , such as the inability to track long sequences and process hierarchical inputs ( Hahn , 2020 ) . In this work , we focus on two major limitations of Transformer architectures . Limited Access to Higher Level Representations . Layer by layer , Transformers build more abstract , high level representations of the input sequence . At each layer , the representations for the input sequence are treated in parallel . As a consequence , a Transformer does not leverage the highest level representations from the past to compute the current representation , even though these highest level representations have already been computed for autoregressive models . Maintaining a Belief State . Many sequential tasks require models to maintain an internal state for two main purposes . First , internal states act as memory for recalling past inputs , where Transformers excel because their internal state xlt is directly accessible to future steps through self-attention . 
The second role of an internal state is to act as a belief state that tracks the world state that is not directly observable in the inputs . For example , when the inputs are actions taken in a Markov Decision Process , an internal state can apply those changes to the current belief state and correctly predict the outcome . As a feedforward model , the Transformer has inherent limitations in this area — only a fixed number of transformations can be applied to its internal states . Since both the Attn and FF sublayers contain a fixed number of transformations and there are L layers of them , the total number of transformations between the input and output is limited by the depth . This means Transformers can not maintain an internal state for a long time if it has to be frequently updated .

3.3 FEEDBACK TRANSFORMER . We propose to change the Transformer architecture by using the most abstract representations from the past directly as inputs for the current timestep . This means that the model does not form its representation in parallel , but sequentially , token by token . More precisely , we replace the context inputs to the attention modules with memory vectors that are computed over the past , i.e. , $z_t^l = \mathrm{Attn}\big( x_t^l , \{ m_{t-\tau}, \ldots, m_{t-1} \} \big)$ , where the memory vectors $m_t$ are computed by summing the representations of all layers at time step t :

$$m_t = \sum_{l=0}^{L} \mathrm{Softmax}(w)_l \, x_t^l , \qquad (1)$$

where the $w_l$ are learnable scalar parameters . Note that these scalars are the only new parameters introduced by our change , with all else the same as in the standard Transformer . Here l = 0 corresponds to the token embeddings . The weighting of the different layers by a softmax output gives the model more flexibility , as it can average them or select one of them . This modification of the self-attention input adapts the computation of the Transformer from parallel to sequential , as summarized in Figure 2 . Indeed , it provides the ability to formulate the representation $x_{t+1}^l$ based on past representations from any layer $l'$ , while in a standard Transformer this is only true for $l' < l$ . This change can be viewed as exposing all previous computations to all future computations , providing better representations of the input . Such capacity would allow much shallower models to capture the same level of abstraction as a deeper architecture . This has several practical advantages , as shallower models have a reduced memory footprint and increased decoding speed . An alternative view of this architecture modification is that it provides the capacity for recursive computation — outputs from a sublayer can feed back to the same sublayer through the memory . The model can then maintain an internal state for an unbounded time . This is a clear advantage over Transformers , in which a submodule never looks at its own output . While an RNN can also repeat its computation on its internal state , its internal state has a limited capacity determined by the number of layers and their hidden dimension . In contrast , the internal state of a Feedback Transformer is its whole memory , which can grow with the input length . This allows the model to keep track of a large number of things within its internal state . While our modification requires sequential computation , we significantly improve training speed by sharing the key and value projections $W_k^l$ and $W_v^l$ across all layers . This sharing reduces computation because we need to compute the key and value vectors only once instead of computing them per layer : $k_t^l = k_t = W_k m_t$ and $v_t^l = v_t = W_v m_t$ .
For the same reason , the memory footprint is smaller than that of a standard Transformer because only one set of $k_t , v_t$ needs to be stored . To be more precise , the memory requirement for processing a single token is reduced from $O(L \times T)$ to $O(T)$ , where L is the number of layers and T is the context size . Further , the reduced memory usage allows the batch size to be increased to recover some of the lost parallelism , which improves training speed . Thus , the Feedback Transformer is not much slower than the standard Transformer . Note that the same sharing of projections will not make the standard Transformer efficient , because those projections are applied to different representations at each layer ( the key and value vectors will not be the same for all layers ) . Lastly , we note that the sequential nature of the Feedback Transformer does not affect the performance during generation , where one needs to compute one step at a time anyway . The same is true for online reinforcement learning , where the input must be processed sequentially even during training .

Table 1 : Results on toy tasks . Char is character accuracy , Seq is sequence accuracy .
Copy ( Char / Seq ) : Transformer 59.1 / 6.2 ; Feedback Transformer 76.2 / 23.6
Reverse ( Char / Seq ) : Transformer 50.2 / 5.9 ; Feedback Transformer 74.8 / 29.2
Counting ( Len 50 / Len 1K ) : Transformer 99.6 / 82.4 ; Feedback Transformer 99.7 / 95.3
Random Walk : Transformer 68 ; Feedback Transformer 100
Algorithmic Task ( 3 vars / 5 vars ) : Transformer 4L 33.7 / 37.5 ; Transformer 8L 47.4 / 29.1 ; LSTM 82.8 / 32.1 ; Feedback Trans . 4L 99.1 / 92.6

[ Figure 3 plots success rate against memory size ( 20 to 100 ) for the Transformer and the Feedback Transformer . ] Figure 3 : Results on the Corridor task . The Transformer degrades as the memory size decreases , but the Feedback Transformer maintains performance . | > Summary: This paper proposes some changes to the classical Transformer architecture to address its major limitations, such as limited access to higher-level representations. It specifically introduces recurrence to the Transformer architecture by feeding the activations of all previous time steps to a later time step (in the form of self-attention). Empirical results on language modeling and small-scale RL tasks seem to suggest the usefulness of doing so. | SP:0a31bc4cda9cbbfc9b5ab0e951eb529579842300
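The memory construction of equation (1) in Section 3.3 above can be sketched in a few lines of PyTorch. This is a schematic illustration of the layer pooling and the shared key/value projections, not the authors' implementation; the class and argument names are invented for this sketch.

```python
# Schematic sketch of the Feedback memory from equation (1): all layer states at
# step t are pooled into a single vector m_t with a learned softmax over layers,
# and one shared key/value projection is applied to the pooled memory so every
# layer attends to the same keys and values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedbackMemory(nn.Module):
    def __init__(self, n_layers, d_model):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(n_layers + 1))  # w_0..w_L (l = 0 is the token embedding)
        self.k_proj = nn.Linear(d_model, d_model)  # W_k, shared across layers
        self.v_proj = nn.Linear(d_model, d_model)  # W_v, shared across layers

    def pool(self, layer_states):
        # layer_states: (n_layers + 1, batch, d_model), all representations of one time step
        w = F.softmax(self.layer_weights, dim=0)           # softmax over layers
        m_t = torch.einsum('l,lbd->bd', w, layer_states)   # m_t = sum_l Softmax(w)_l x_t^l
        return m_t, self.k_proj(m_t), self.v_proj(m_t)     # memory plus shared key/value
```

At decoding time, the returned (m_t, k_t, v_t) would be appended to a growing memory that every layer of the next time step attends to, which is what makes a single shared key/value set sufficient.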
Addressing Some Limitations of Transformers with Feedback Memory | 1 INTRODUCTION . In recent years , the Transformer architecture ( Vaswani et al. , 2017 ) has brought large improvements to a wide range of Natural Language Processing tasks such as machine translation , sentence representation ( Devlin et al. , 2019 ) , and summarization ( Edunov et al. , 2019 ) . Transformers are also successfully used as an autoregressive model on sequential tasks such as language modeling ( Dai et al. , 2019 ; Rae et al. , 2020 ) and reinforcement learning ( Parisotto et al. , 2019 ) . Unlike more traditional recurrent architectures such as RNNs and LSTMs , the Transformer architecture processes a sequence in parallel in an order-invariant way . Techniques such as position embeddings ( Sukhbaatar et al. , 2015 ; Shaw et al. , 2018 ) and attention masking are required to capture input order information . In this work , we focus on several limitations of the Transformer architecture as an autoregressive model and present a straightforward solution — Feedback memory . These limitations and our proposed solution target sequential token prediction tasks , such as language modeling or other auto-regressive generative tasks . The feedforward nature of Transformers makes them efficient on modern hardware , but restricts the Transformer from taking full advantage of the input ’ s sequential property . In particular , the current hidden representation of a Transformer only accesses the past representations of lower layers , even though higher level representations of the past have already been computed as an autoregressive model . At generation , the Transformer generates only one token at a time , so it could access these representations for better performance , but does not exploit these at training time due to parallelization . However , if these past higher level representations could be used at training time , they would enrich future lower level representations , enabling shallower models to have the same representation power . Another inherent limitation of Transformers on sequential tasks is the lack of recursive computation ( Dehghani et al. , 2018 ) , and the number of transformations possible on the input is bounded by the model depth . Such disadvantages have impact on tasks that require careful tracking of a world state or modeling hierarchical structures ( Tran et al. , 2018 ; Hahn , 2020 ) . On the other hand , while RNNs can maintain an internal state for an unbounded time while accumulating more computations upon it , the size of this internal state is limited by the dimension of the hidden state . In this work , we propose a novel autoregressive model , the Feedback Transformer , that makes all previous hidden representations accessible to the computation of a representation at any depth — the model feeds back previous computations to itself . The feedback allows the model to perform recursive computation , building stronger representations iteratively upon previous states . To achieve this , we modify self-attention to attend to higher level representations rather than lower ones . As shown in Figure 1 , the Feedback Transformer merges the hidden states from all layers into a single vector for every time step and stores them in a memory . Instead of self-attention , all subsequent layers attend to this memory , which means every previously computed representation is accessible by all future layers , mediated by the memory . 
This allows Feedback Transformers to recursively compute and transform an input as many times as the input length , which is something Transformers can not achieve . While RNNs can perform recursive computation , the amount of information that Feedback Transformers can maintain is not limited by the number of layers . There are computational benefits to this straightforward modification . First , it uses less memory because all the layers share a single Feedback memory , thus reducing the memory size by L times , where L is the number of layers . There is also less computation because we share the key and value projections during attention computation , which increases the speed of the attention over the Feedback Memory . Further , the GPU memory usage is reduced due to the memory sharing — the overall model is 2x smaller — allowing the batch size to be increased for computational efficiency . During inference , the increased batch size contributes to substantially faster decoding speeds . In summary , our main contributions are : ( 1 ) The Feedback Transformer architecture , which completely changes the way a Transformer works to access available higher level representations immediately . ( 2 ) We show the Feedback Transformer can achieve state of the art results with smaller , shallower models that have faster decoding speed and smaller memory footprint . ( 3 ) The Feedback Transformer uses substantially less memory during training and inference time . 2 RELATED WORK . Several previous works have analyzed the limitations of Transformer architectures , such as the inability to process input sequentially ( Dehghani et al. , 2018 ) or represent hierarchical structure ( Tran et al. , 2018 ) . Hahn ( 2020 ) demonstrate that Transformers can not model structures involving bounded recursion , such as closing parentheses . Pérez et al . ( 2019 ) study Transformers in the context of Turing machines , where they must produce unbounded numbers of decoding steps . Various work in probing Transformers identified several limitations where Transformers may not have the computational capacity of recurrent architecture like an LSTM ( Hahn , 2020 ) . From the architectural perspective , our work shares similarities with recurrent networks augmented with external shared memories ( Graves et al. , 2014 ; Joulin & Mikolov , 2015 ; Sukhbaatar et al. , 2015 ) . For example , the stack augmented RNN of Joulin & Mikolov ( 2015 ) adds an external memory to a recurrent network to keep long term dependencies . Closer to our work , the Neural Turing Machine of Graves et al . ( 2014 ) models an unconstrained memory that resembles the self-attention layer of a Transformer . Further improvements to recurrent networks , such as the Gated Feedback RNN ( Chung et al. , 2015 ) , are based on better controlling signal from different layers and extended to feedback through multiple pathways ( Jin et al. , 2017 ) . These works are built on recurrent networks with additional components to store long term dependencies . Other works have studied modifications to the Transformer architecture by enriching its structure with components inspired by recurrent networks . For example , Wang et al . ( 2019 ) propose adding a local recurrent sublayer to the Transformer layer to remove the need for position embeddings in the multi-head self-attention layers . Universal Transformer ( Dehghani et al. , 2018 ) share the parameters between the layers of a Transformer , leading a recurrent network in depth . Hao et al . 
( 2019 ) and Chen et al . ( 2018 ) augment Transformers with a second , recurrent encoder . As opposed to our work , these prior investigations do not change the computational path in a Transformer to reduce the discrepancy between the training and inference time . Closer to our work , Merity ( 2019 ) proposes adding a self-attention layer on top of the past outputs from an LSTM cell . However , this approach keeps the recurrent and the self-attention mechanisms decoupled , as opposed to ours which makes the attention mechanism recurrent . In particular , the LSTM layer of Merity ( 2019 ) still intrinsically has a bottleneck corresponding to the dimension of the hidden layer . 3 METHOD . In this section , we propose the Feedback Transformer , which provides capacity to build richer representations of each timestep t of a sequential modeling task . 3.1 TRANSFORMER ARCHITECTURES . We briefly describe the Transformer ( Vaswani et al. , 2017 ) . Each layer is composed of a multihead self-attention sublayer ( Attn ) followed by a feedforward sublayer ( FF ) , and each sublayer is followed by an add-norm operation that combines a skip-connection ( He et al. , 2016 ) and layer normalization ( Lei Ba et al. , 2016 ) . The l-th layer of a Transformer processes an input sequence of vectors Xl = ( xl1 , . . . , x l t ) into a sequence of vectors of the same length . First , the self-attention sublayer computes a representation for each time step t by taking its related input vector xt along with its past context , { xlt−τ , ... , xlt−1 } : zlt = Attn ( x l t , { xlt−τ , . . . , xlt−1 } ) . Within the self-attention sublayer , xlt is used to form query vectors while its context is used to compute key and value vectors , forming a memory of the past information . Then the feedforward sublayer processes each vector zlt independently , i.e. , x l+1 t = FF ( z l t ) . The Transformer layer transforms its input sequence into an output sequence Xl+1 = FF ( Attn ( Xl ) ) . In practice , a block of steps { xlt−M+1 , . . . , xlt } is computed in parallel during training , where M can be seen as the backpropagation through time ( BPTT ) length . This makes training Transformers efficient on hardware such as GPUs . However , to operate on sequences of unbounded length , Transformers require modifications such as caching and relative position embeddings ( Dai et al. , 2019 ; Sukhbaatar et al. , 2019 ) . 3.2 LIMITATIONS OF TRANSFORMERS . Previous work has analyzed the impact of several limitations of the Transformer architecture , such as the inability to track long sequences and process hierarchical inputs ( Hahn , 2020 ) . In this work , we focus on two major limitations of Transformer architectures . Limited Access to Higher Level Representations . Layer by layer , Transformers build more abstract , high level representations of the input sequence . At each layer , the representations for the input sequence are treated in parallel . As a consequence , a Transformer does not leverage the highest level representations from the past to compute the current representation , even though these highest level representations have already been computed for autoregressive models . Maintaining a Belief State . Many sequential tasks require models to maintain an internal state for two main purposes . First , internal states act as memory for recalling past inputs , where Transformers excel because their internal state xlt is directly accessible to future steps through self-attention . 
The second role of an internal state is to act as a belief state that tracks the world state that is not directly observable in inputs . For example , when inputs are actions taken on a Markov Decision Process , an internal state can apply those changes to the current belief state and correctly predict the outcome . As a feedforward model , Transformer have inherent limitations in this area — only a fixed number of transformations can be applied to its internal states . Since both Attn and FF sublayers contain a fixed number of transformations and there are L layers of them , the total number of transformations between the input and output is limited by the depth . This means Transformers can not maintain an internal state for long time if it has to be frequently updated . 3.3 FEEDBACK TRANSFORMER . We propose to change the Transformer architecture by using the most abstract representations from the past directly as inputs for the current timestep . This means that the model does not form its representation in parallel , but sequentially token by token . More precisely , we replace the context inputs to attention modules with memory vectors that are computed over the past , i.e. , zlt = Attn ( x l t , { mt−τ , . . . , mt−1 } ) , where memory vectors mt are computed by summing the representations of all layers at time step t : mt = L∑ l=0 Softmax ( wl ) xlt , ( 1 ) where wl are learnable scalar parameters . Note these scalars are the only new parameters introduced by our change , with all else the same as the standard Transformer . Here l = 0 corresponds to token embeddings . The weighting of different layers by a softmax output gives the model more flexibility as it can average them or select one of them . This modification of the self-attention input adapts the computation of the Transformer from parallel to sequential , summarized in Figure 2 . Indeed , it provides the ability to formulate the representation xlt+1 based on past representations from any layer l ′ , while in a standard Transformer this is only true for l′ < l. This change can be viewed as exposing all previous computations to all future computations , providing better representations of the input . Such capacity would allow much shallower models to capture the same level of abstraction as a deeper architecture . This has several practical advantages , as more shallow models have reduced memory footprint and increased decoding speed . An alternative view of such an architecture modification is providing the capacity for recursive computation — outputs from a sublayer can feed back to the same sublayer through the memory . The model can then maintain an internal state for unbounded time . This is a clear advantage over Transformers , in which a submodule never looks at its own output . While an RNN can also repeat its computation on its internal state , its internal state has a limited capacity determined by the number of layers and their hidden dimension . In contrast , the internal state of a Feedback Transformer is its whole memory , which can grow with the input length . This allows the model to keep track of a large number of things within its internal state . While our modification requires sequential computation , we significantly improve training speed by sharing the key and value projections W lk and W l v across all layers . This sharing reduces computation because we need to compute key and value vectors only once instead of computing them per layer klt = kt = Wkmt v l t = vt = Wvmt . 
For the same reason , the memory footprint is smaller than that of a standard Transformer because only one set of $k_t , v_t$ needs to be stored . To be more precise , the memory requirement for processing a single token is reduced from $O(L \times T)$ to $O(T)$ , where L is the number of layers and T is the context size . Further , the reduced memory usage allows the batch size to be increased to recover some of the lost parallelism , which improves training speed . Thus , the Feedback Transformer is not much slower than the standard Transformer . Note that the same sharing of projections will not make the standard Transformer efficient , because those projections are applied to different representations at each layer ( the key and value vectors will not be the same for all layers ) . Lastly , we note that the sequential nature of the Feedback Transformer does not affect the performance during generation , where one needs to compute one step at a time anyway . The same is true for online reinforcement learning , where the input must be processed sequentially even during training .

Table 1 : Results on toy tasks . Char is character accuracy , Seq is sequence accuracy .
Copy ( Char / Seq ) : Transformer 59.1 / 6.2 ; Feedback Transformer 76.2 / 23.6
Reverse ( Char / Seq ) : Transformer 50.2 / 5.9 ; Feedback Transformer 74.8 / 29.2
Counting ( Len 50 / Len 1K ) : Transformer 99.6 / 82.4 ; Feedback Transformer 99.7 / 95.3
Random Walk : Transformer 68 ; Feedback Transformer 100
Algorithmic Task ( 3 vars / 5 vars ) : Transformer 4L 33.7 / 37.5 ; Transformer 8L 47.4 / 29.1 ; LSTM 82.8 / 32.1 ; Feedback Trans . 4L 99.1 / 92.6

[ Figure 3 plots success rate against memory size ( 20 to 100 ) for the Transformer and the Feedback Transformer . ] Figure 3 : Results on the Corridor task . The Transformer degrades as the memory size decreases , but the Feedback Transformer maintains performance . | This paper modifies transformers with feedback memory. Specifically, for each timestep, it merges hidden representations of all layers into a high-level single vector and stores it in memory. For the current timestep, it attends past memory vectors. The authors claim that in this way, low layers of the current timestep can utilize high-level representations of past timesteps. The authors show that the proposed models with shallow layers can achieve stronger performance than comparable transformers. However, it seems that the models need a much longer time to train. | SP:0a31bc4cda9cbbfc9b5ab0e951eb529579842300
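A quick back-of-the-envelope calculation of the O(L x T) versus O(T) cache argument above, with made-up settings (an 8-layer model, 512-token context, 512-dimensional states, fp16 storage); these numbers are illustrative only and not the paper's configuration.

```python
# Key/value cache size: per-layer caching vs. one shared Feedback memory.
L, T, d_model, bytes_per_value = 8, 512, 512, 2          # hypothetical settings, fp16
standard_kv_cache = L * T * 2 * d_model * bytes_per_value  # keys + values for every layer
feedback_kv_cache = T * 2 * d_model * bytes_per_value      # one shared set of keys + values
print(standard_kv_cache / 2**20, "MiB vs", feedback_kv_cache / 2**20, "MiB")  # L times smaller
```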
Information Theoretic Meta Learning with Gaussian Processes | 1 INTRODUCTION . Meta learning ( Ravi & Larochelle , 2017 ; Vinyals et al. , 2016 ; Edwards & Storkey , 2017 ; Finn et al. , 2017 ; Lacoste et al. , 2019 ; Nichol et al. , 2018 ) and few-shot learning ( Li et al. , 2006 ; Lake et al. , 2011 ) aim to derive data efficient learning algorithms that can rapidly adapt to new tasks . Such systems require training deep neural networks from a set of tasks drawn from a common distribution , where each task is described by a small amount of experience , typically divided into a training or support set and a validation set . By sharing information across tasks the neural network can learn to rapidly adapt to new tasks and generalize from few examples at test time . Several few-shot learning algorithms use memory-based ( Vinyals et al. , 2016 ; Ravi & Larochelle , 2017 ) or gradient-based procedures ( Finn et al. , 2017 ; Nichol et al. , 2018 ) , with the gradient-based model agnostic meta learning algorithm ( MAML ) by Finn et al . ( 2017 ) being very influential in the literature . Despite the success of specific schemes , one fundamental issue in meta learning is concerned with deriving unified principles that can allow to relate different approaches and invent new schemes . While there exist probabilistic interpretations of existing methods , such as the approximate Bayesian inference approach ( Grant et al. , 2018 ; Finn et al. , 2018 ; Yoon et al. , 2018 ) and the related conditional probability modelling approach ( Garnelo et al. , 2018 ; Gordon et al. , 2019 ) , meta learning still lacks of a general and tractable learning principle that can help to get a better understanding of existing algorithms and derive new methods . To this end , the main contribution of this paper is to introduce an information theoretic view of meta learning , by utilizing tools such as the mutual information and the information bottleneck ( Cover & Thomas , 2006 ; Tishby et al. , 1999 ) . Given that each task consists of a support or training set and a target or validation set , we consider the information bottleneck principle , introduced by Tishby et al . ( 1999 ) , which can learn a stochastic encoding of the support set that is highly informative about predicting the validation set . Such stochastic encoding is optimized through the difference between two mutual informations , so that the encoding compresses the training set into a representation that can predict well the validation set . By exploiting recent variational approximations to the information bottleneck ( Alemi et al. , 2017 ; Chalk et al. , 2016 ; Achille & Soatto , 2016 ) that make use of variational lower bounds on the mutual information ( Barber & Agakov , 2003 ) , we derive a general and tractable framework for meta learning . Such framework can allow us to re-interpret gradient-based algorithms , such as MAML , and also derive new methods . Based on the variational information bottleneck ( VIB ) framework ( Alemi et al. , 2017 ; Chalk et al. , 2016 ; Achille & Soatto , 2016 ) , we introduce a new memory-based algorithm for supervised fewshot learning ( right panel in Figure 1 ) based on Gaussian processes ( Rasmussen & Williams , 2006 ) and deep neural kernels ( Wilson et al. , 2016 ) that offers a kernel-based Bayesian view of a memory system . With Gaussian processes , the underlying encoding takes the form of a non-parametric function that follows a stochastic process amortized by the training set . 
Further , we show that VIB gives rise to gradient-based meta learning methods , such as MAML , when combined with parametric encodings corresponding to model parameters or weights , and based on this we derive a stochastic MAML algorithm . In an additional scheme , we show that our framework can naturally allow for combinations of memory and gradient-based meta learning by constructing suitable encodings , and we derive such an algorithm that combines Gaussian processes with MAML . We demonstrate our methods on few-shot regression and classification by using standard benchmarks such as Omniglot , mini-Imagenet and Augmented Omniglot . 2 META LEARNING WITH INFORMATION BOTTLENECK . Suppose we wish to learn from a distribution of tasks . During training for each task we observe a pair consisted of a task description represented by the support or training set Dt and task validation represented by the target or validation set Dv . At test time only Dt will be given and the learning algorithm should rapidly adapt to form predictions on Dv or on further test data . We wish to formulate meta learning using information theoretic concepts such as mutual information and the information bottleneck ( Tishby et al. , 1999 ) . The idea is to learn a stochastic representation or encoding of the task descriptionDt that is highly informative about predictingDv . We introduce a random variable , Z , associated with this encoding drawn from a distribution qw ( Z|Dt ) parametrized by w. Given this encoding the full joint distribution is written as qw ( Dv , Dt , Z ) = qw ( Z|Dt ) p ( Dv , Dt ) , ( 1 ) where p ( Dv , Dt ) denotes the unknown data distribution overDt andDv . In equation 1 and throughout the paper we use the convention that the full joint as well as any marginal or conditional that depends on Z is denoted by qw ( emphasizing the dependence on the parametrized encoder ) , while corresponding quantities over data Dt , Dv are denoted by p. Eg . from the above we can express a Z-dependent marginal such as , qw ( Z , Dv ) = ∫ qw ( Z|Dt ) p ( Dv , Dt ) dDt . To tune w we would like to maximize the mutual information between Z and the target set Dv , denoted by I ( Z , Dv ) . A trivial way to obtain a maximally informative representation is to set Z = Dt , which does not provide a useful representation . Thus , the information bottleneck ( IB ) principle ( Tishby et al. , 1999 ) adds a model complexity penalty to the maximization of I ( Z , Dv ) which promotes an encoding Z that is highly compressive of Dt , i.e . for which I ( Z , Dt ) is minimized . This leads to the IB objective : LIB ( w ) = I ( Z , Dv ) − βI ( Z , Dt ) , ( 2 ) where β ≥ 0 is a hyperparameter . Nevertheless , in order to use IB for meta learning we need to approximate the mutual information terms I ( Z , Dv ) and I ( Z , Dt ) , which are both intractable since they depend on the unknown data distribution p ( Dv , Dt ) . To overcome this , we will consider variational approximations by following similar arguments to the variational IB approach ( Alemi et al. , 2017 ) that was introduced for supervised learning of a single task , which allows us to express a tractable lower bound on LIB ( w ) by lower bounding I ( Z , Dv ) and upper bounding I ( Z , Dt ) . 2.1 VARIATIONAL INFORMATION BOTTLENECK ( VIB ) FOR META LEARNING . 
To construct a bound $F \le L_{IB}(w)$ we first need to lower bound $I(Z, D^v)$ , which is written as

$$I(Z, D^v) = KL\big[ q_w(Z, D^v) \,\|\, q_w(Z)\, p(D^v) \big] = \mathbb{E}_{q_w(Z, D^v)}\left[ \log \frac{q_w(D^v | Z)}{p(D^v)} \right], \qquad (3)$$

where KL denotes the Kullback-Leibler divergence and $q_w(D^v | Z) = \dfrac{\int q_w(Z | D^t)\, p(D^v, D^t)\, dD^t}{\int q_w(Z | D^t)\, p(D^v, D^t)\, dD^t\, dD^v}$ is intractable since we do not know the analytic form of the data distribution $p(D^v, D^t)$ . To lower bound $I(Z, D^v)$ , we follow Barber & Agakov ( 2003 ) ( see Appendix A.1 ) by introducing a decoder model $p_\theta(D^v | Z)$ to approximate the intractable $q_w(D^v | Z)$ , where θ are additional parameters ,

$$I(Z, D^v) \ge \mathbb{E}_{q_w(Z, D^v)}\left[ \log \frac{p_\theta(D^v | Z)}{p(D^v)} \right] = \mathbb{E}_{q_w(Z, D^v)}\big[ \log p_\theta(D^v | Z) \big] + H(D^v), \qquad (4)$$

where the entropy $H(D^v)$ is just a constant that does not depend on the tunable parameters ( θ , w ) . Furthermore , to deal with the second intractable mutual information $I(Z, D^t)$ while maintaining a lower bound on $L_{IB}(w)$ , we need to upper bound this term . Note that $I(Z, D^t) = \mathbb{E}_{q_w(Z, D^t)}\left[ \log \frac{q_w(Z | D^t)}{q_w(Z)} \right]$ , where $q_w(Z) = \int q_w(Z | D^t)\, p(D^t)\, dD^t$ is intractable since , e.g. , it involves the unknown data distribution $p(D^t)$ . By working similarly as before , we can approximate $q_w(Z)$ by a tractable prior model distribution $p_\theta(Z)$ , which leads to the following upper bound on the mutual information ,

$$I(Z, D^t) \le \mathbb{E}_{q_w(Z, D^t)}\left[ \log \frac{q_w(Z | D^t)}{p_\theta(Z)} \right]. \qquad (5)$$

Then , by combining the two bounds we obtain the overall bound $F(\theta, w) + H(D^v) \le L_{IB}(w)$ :

$$F(\theta, w) = \mathbb{E}_{q_w(Z, D^v)}\big[ \log p_\theta(D^v | Z) \big] - \beta\, \mathbb{E}_{q_w(Z, D^t)}\left[ \log \frac{q_w(Z | D^t)}{p_\theta(Z)} \right],$$

where the constant $H(D^v)$ is dropped from the objective function . Given a set of task pairs $\{ D^t_i, D^v_i \}_{i=1}^{b}$ , where each $(D^t_i, D^v_i) \sim p(D^v, D^t)$ , during meta-training the objective function for learning ( θ , w ) reduces to the maximization of the empirical average $\frac{1}{b} \sum_i \tilde{F}_i(\theta, w)$ , where each $\tilde{F}_i(\theta, w)$ is an unbiased estimate of $F(\theta, w)$ ( see Appendix A.2 ) and is given by

$$\tilde{F}_i(w, \theta) = \mathbb{E}_{q_w(Z_i | D^t_i)}\big[ \log p_\theta(D^v_i | Z_i) \big] - \beta\, KL\big[ q_w(Z_i | D^t_i) \,\|\, p_\theta(Z_i) \big]. \qquad (6)$$

The meta-training procedure is carried out in different episodes where at each step we receive a minibatch of task pairs and perform a stochastic gradient maximization step . The objective in equation 6 is similar to variational inference objectives for meta learning ( Ravi & Beatson , 2019 ) . In particular , it can be viewed as an evidence lower bound ( ELBO ) on the validation set log marginal likelihood , $\log \int p_\theta(D^v_i | Z_i)\, p_\theta(Z_i)\, dZ_i$ , with the differences : ( i ) there is the hyperparameter β in front of the KL term , and ( ii ) the variational distribution $q_w(Z_i | D^t_i)$ in equation 6 is more restricted than in standard variational inference , since $q_w(Z_i | D^t_i)$ now acts as a stochastic bottleneck that encodes the support set $D^t_i$ ( i.e. , it is amortized by $D^t_i$ ) and , via the term $\mathbb{E}_{q_w(Z_i | D^t_i)}[ \log p_\theta(D^v_i | Z_i) ]$ , it is optimized to reconstruct the validation set . | The paper derives a meta-learning framework based on the information bottleneck principle. By adapting the variational approximation proposed in [1] to the meta-learning setting, the authors come up with a tractable objective that generalises both gradient based and memory meta-learning methods.
Based on this framework, the authors proposed a new memory based meta-learning algorithm by using a GP with a deep kernel and an extension that combines this memory based method with MAML. The authors show that the method outperforms MAML in several standard meta-learning tasks, especially in regression and many shots classification problems. | SP:05b195c7ce6d65c3a48ac79b6fe9d511ae5a3b5d |
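A minimal sketch of the per-task objective in equation (6), assuming a diagonal-Gaussian encoding of the support set and a standard-normal prior, optimized with one Monte Carlo sample and the reparameterization trick. The paper's actual encoder is the GP/deep-kernel construction described above, so `encoder` and `decoder_loglik` here are generic placeholders rather than the authors' model.

```python
# One-sample estimate of F_i = E_q[log p_theta(D^v | Z)] - beta * KL(q_w(Z | D^t) || p_theta(Z)),
# with q_w Gaussian and p_theta(Z) = N(0, I). Maximize this averaged over a minibatch of tasks.
import torch

def per_task_bound(encoder, decoder_loglik, support, query, beta):
    mu, log_var = encoder(support)                             # encoding amortized by the support set D^t
    z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()      # reparameterized sample of Z
    log_lik = decoder_loglik(z, query)                         # log p_theta(D^v | Z), 1-sample estimate
    kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum()  # KL(N(mu, sigma^2) || N(0, I))
    return log_lik - beta * kl
```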
Information Theoretic Meta Learning with Gaussian Processes | 1 INTRODUCTION . Meta learning ( Ravi & Larochelle , 2017 ; Vinyals et al. , 2016 ; Edwards & Storkey , 2017 ; Finn et al. , 2017 ; Lacoste et al. , 2019 ; Nichol et al. , 2018 ) and few-shot learning ( Li et al. , 2006 ; Lake et al. , 2011 ) aim to derive data efficient learning algorithms that can rapidly adapt to new tasks . Such systems require training deep neural networks from a set of tasks drawn from a common distribution , where each task is described by a small amount of experience , typically divided into a training or support set and a validation set . By sharing information across tasks the neural network can learn to rapidly adapt to new tasks and generalize from few examples at test time . Several few-shot learning algorithms use memory-based ( Vinyals et al. , 2016 ; Ravi & Larochelle , 2017 ) or gradient-based procedures ( Finn et al. , 2017 ; Nichol et al. , 2018 ) , with the gradient-based model agnostic meta learning algorithm ( MAML ) by Finn et al . ( 2017 ) being very influential in the literature . Despite the success of specific schemes , one fundamental issue in meta learning is concerned with deriving unified principles that can allow to relate different approaches and invent new schemes . While there exist probabilistic interpretations of existing methods , such as the approximate Bayesian inference approach ( Grant et al. , 2018 ; Finn et al. , 2018 ; Yoon et al. , 2018 ) and the related conditional probability modelling approach ( Garnelo et al. , 2018 ; Gordon et al. , 2019 ) , meta learning still lacks of a general and tractable learning principle that can help to get a better understanding of existing algorithms and derive new methods . To this end , the main contribution of this paper is to introduce an information theoretic view of meta learning , by utilizing tools such as the mutual information and the information bottleneck ( Cover & Thomas , 2006 ; Tishby et al. , 1999 ) . Given that each task consists of a support or training set and a target or validation set , we consider the information bottleneck principle , introduced by Tishby et al . ( 1999 ) , which can learn a stochastic encoding of the support set that is highly informative about predicting the validation set . Such stochastic encoding is optimized through the difference between two mutual informations , so that the encoding compresses the training set into a representation that can predict well the validation set . By exploiting recent variational approximations to the information bottleneck ( Alemi et al. , 2017 ; Chalk et al. , 2016 ; Achille & Soatto , 2016 ) that make use of variational lower bounds on the mutual information ( Barber & Agakov , 2003 ) , we derive a general and tractable framework for meta learning . Such framework can allow us to re-interpret gradient-based algorithms , such as MAML , and also derive new methods . Based on the variational information bottleneck ( VIB ) framework ( Alemi et al. , 2017 ; Chalk et al. , 2016 ; Achille & Soatto , 2016 ) , we introduce a new memory-based algorithm for supervised fewshot learning ( right panel in Figure 1 ) based on Gaussian processes ( Rasmussen & Williams , 2006 ) and deep neural kernels ( Wilson et al. , 2016 ) that offers a kernel-based Bayesian view of a memory system . With Gaussian processes , the underlying encoding takes the form of a non-parametric function that follows a stochastic process amortized by the training set . 
Further , we show that VIB gives rise to gradient-based meta learning methods , such as MAML , when combined with parametric encodings corresponding to model parameters or weights , and based on this we derive a stochastic MAML algorithm . In an additional scheme , we show that our framework can naturally allow for combinations of memory and gradient-based meta learning by constructing suitable encodings , and we derive such an algorithm that combines Gaussian processes with MAML . We demonstrate our methods on few-shot regression and classification by using standard benchmarks such as Omniglot , mini-Imagenet and Augmented Omniglot . 2 META LEARNING WITH INFORMATION BOTTLENECK . Suppose we wish to learn from a distribution of tasks . During training for each task we observe a pair consisted of a task description represented by the support or training set Dt and task validation represented by the target or validation set Dv . At test time only Dt will be given and the learning algorithm should rapidly adapt to form predictions on Dv or on further test data . We wish to formulate meta learning using information theoretic concepts such as mutual information and the information bottleneck ( Tishby et al. , 1999 ) . The idea is to learn a stochastic representation or encoding of the task descriptionDt that is highly informative about predictingDv . We introduce a random variable , Z , associated with this encoding drawn from a distribution qw ( Z|Dt ) parametrized by w. Given this encoding the full joint distribution is written as qw ( Dv , Dt , Z ) = qw ( Z|Dt ) p ( Dv , Dt ) , ( 1 ) where p ( Dv , Dt ) denotes the unknown data distribution overDt andDv . In equation 1 and throughout the paper we use the convention that the full joint as well as any marginal or conditional that depends on Z is denoted by qw ( emphasizing the dependence on the parametrized encoder ) , while corresponding quantities over data Dt , Dv are denoted by p. Eg . from the above we can express a Z-dependent marginal such as , qw ( Z , Dv ) = ∫ qw ( Z|Dt ) p ( Dv , Dt ) dDt . To tune w we would like to maximize the mutual information between Z and the target set Dv , denoted by I ( Z , Dv ) . A trivial way to obtain a maximally informative representation is to set Z = Dt , which does not provide a useful representation . Thus , the information bottleneck ( IB ) principle ( Tishby et al. , 1999 ) adds a model complexity penalty to the maximization of I ( Z , Dv ) which promotes an encoding Z that is highly compressive of Dt , i.e . for which I ( Z , Dt ) is minimized . This leads to the IB objective : LIB ( w ) = I ( Z , Dv ) − βI ( Z , Dt ) , ( 2 ) where β ≥ 0 is a hyperparameter . Nevertheless , in order to use IB for meta learning we need to approximate the mutual information terms I ( Z , Dv ) and I ( Z , Dt ) , which are both intractable since they depend on the unknown data distribution p ( Dv , Dt ) . To overcome this , we will consider variational approximations by following similar arguments to the variational IB approach ( Alemi et al. , 2017 ) that was introduced for supervised learning of a single task , which allows us to express a tractable lower bound on LIB ( w ) by lower bounding I ( Z , Dv ) and upper bounding I ( Z , Dt ) . 2.1 VARIATIONAL INFORMATION BOTTLENECK ( VIB ) FOR META LEARNING . 
To construct a bound $F \leq \mathcal{L}_{\mathrm{IB}} ( w )$ we first need to lower bound $I ( Z , D^v )$ , which is written as $I ( Z , D^v ) = \mathrm{KL} [ \, q_w ( Z , D^v ) \, \| \, q_w ( Z ) \, p ( D^v ) \, ] = \mathbb{E}_{ q_w ( Z , D^v ) } \left[ \log \frac{ q_w ( D^v \mid Z ) }{ p ( D^v ) } \right]$ , ( 3 ) where KL denotes the Kullback-Leibler divergence and $q_w ( D^v \mid Z ) = \frac{ \int q_w ( Z \mid D^t ) \, p ( D^v , D^t ) \, d D^t }{ \int q_w ( Z \mid D^t ) \, p ( D^v , D^t ) \, d D^t \, d D^v }$ is intractable since we do not know the analytic form of the data distribution $p ( D^v , D^t )$ . To lower bound $I ( Z , D^v )$ , we follow Barber & Agakov ( 2003 ) ( see Appendix A.1 ) by introducing a decoder model $p_\theta ( D^v \mid Z )$ to approximate the intractable $q_w ( D^v \mid Z )$ , where $\theta$ are additional parameters : $I ( Z , D^v ) \geq \mathbb{E}_{ q_w ( Z , D^v ) } \left[ \log \frac{ p_\theta ( D^v \mid Z ) }{ p ( D^v ) } \right] = \mathbb{E}_{ q_w ( Z , D^v ) } [ \log p_\theta ( D^v \mid Z ) ] + H ( D^v )$ , ( 4 ) where the entropy $H ( D^v )$ is just a constant that does not depend on the tunable parameters $( \theta , w )$ . Furthermore , to deal with the second intractable mutual information $I ( Z , D^t )$ while maintaining a lower bound on $\mathcal{L}_{\mathrm{IB}} ( w )$ , we need to upper bound this term . Note that $I ( Z , D^t ) = \mathbb{E}_{ q_w ( Z , D^t ) } \left[ \log \frac{ q_w ( Z \mid D^t ) }{ q_w ( Z ) } \right]$ , where $q_w ( Z ) = \int q_w ( Z \mid D^t ) \, p ( D^t ) \, d D^t$ is intractable since , e.g. , it involves the unknown data distribution $p ( D^t )$ . By working similarly as before , we can approximate $q_w ( Z )$ by a tractable prior model distribution $p_\theta ( Z )$ , which leads to the following upper bound on the mutual information : $I ( Z , D^t ) \leq \mathbb{E}_{ q_w ( Z , D^t ) } \left[ \log \frac{ q_w ( Z \mid D^t ) }{ p_\theta ( Z ) } \right]$ . ( 5 ) Then , by combining the two bounds we obtain the overall bound $F ( \theta , w ) + H ( D^v ) \leq \mathcal{L}_{\mathrm{IB}} ( w )$ : $F ( \theta , w ) = \mathbb{E}_{ q_w ( Z , D^v ) } [ \log p_\theta ( D^v \mid Z ) ] - \beta \, \mathbb{E}_{ q_w ( Z , D^t ) } \left[ \log \frac{ q_w ( Z \mid D^t ) }{ p_\theta ( Z ) } \right]$ , where the constant $H ( D^v )$ is dropped from the objective function . Given a set of task pairs $\{ D^t_i , D^v_i \}_{ i=1 }^{ b }$ , where each $( D^t_i , D^v_i ) \sim p ( D^v , D^t )$ , during meta-training the objective function for learning $( \theta , w )$ reduces to the maximization of the empirical average $\frac{1}{b} \sum_i \tilde{F}_i ( \theta , w )$ , where each $\tilde{F}_i ( \theta , w )$ is an unbiased estimate of $F ( \theta , w )$ ( see Appendix A.2 ) and is given by $\tilde{F}_i ( w , \theta ) = \mathbb{E}_{ q_w ( Z_i \mid D^t_i ) } [ \log p_\theta ( D^v_i \mid Z_i ) ] - \beta \, \mathrm{KL} [ \, q_w ( Z_i \mid D^t_i ) \, \| \, p_\theta ( Z_i ) \, ]$ . ( 6 ) The meta-training procedure is carried out in different episodes , where at each step we receive a minibatch of task pairs and perform a stochastic gradient maximization step . The objective in equation 6 is similar to variational inference objectives for meta learning ( Ravi & Beatson , 2019 ) . In particular , it can be viewed as an evidence lower bound ( ELBO ) on the validation set log marginal likelihood , $\log \int p_\theta ( D^v_i \mid Z_i ) \, p_\theta ( Z_i ) \, d Z_i$ , with the differences : ( i ) there is the hyperparameter $\beta$ in front of the KL term , and ( ii ) the variational distribution $q_w ( Z_i \mid D^t_i )$ in equation 6 is more restricted than in standard variational inference , since $q_w ( Z_i \mid D^t_i )$ now acts as a stochastic bottleneck that encodes the support set $D^t_i$ ( i.e. , it is amortized by $D^t_i$ ) and via the term $\mathbb{E}_{ q_w ( Z_i \mid D^t_i ) } [ \log p_\theta ( D^v_i \mid Z_i ) ]$ it is optimized to reconstruct the validation set .
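For concreteness, the per-task objective in equation 6 can be instantiated once an encoder distribution and a decoder likelihood are fixed. The sketch below is a minimal single-sample Monte Carlo estimate, assuming a diagonal Gaussian encoder, a standard normal prior for $p_\theta ( Z )$, a unit-variance Gaussian decoder, and placeholder encoder/decoder outputs; none of these concrete choices are prescribed by the text above.

import torch

def per_task_vib_objective(z_mean, z_logvar, decode, d_v_inputs, d_v_targets, beta=0.1):
    # One Monte Carlo sample from the encoder q_w(Z | D^t) via reparameterization.
    eps = torch.randn_like(z_mean)
    z = z_mean + torch.exp(0.5 * z_logvar) * eps
    # Expected log-likelihood of the validation set under the decoder p_theta(D^v | Z),
    # here a Gaussian likelihood with unit variance (an assumption for illustration).
    pred = decode(z, d_v_inputs)
    log_lik = -0.5 * ((d_v_targets - pred) ** 2).sum()
    # KL[q_w(Z | D^t) || p_theta(Z)] in closed form, with p_theta(Z) = N(0, I).
    kl = 0.5 * (torch.exp(z_logvar) + z_mean ** 2 - 1.0 - z_logvar).sum()
    return log_lik - beta * kl   # maximize per task, average over the task minibatch

# Toy usage: an encoder output and a linear decoder, both illustrative stand-ins.
z_mean = torch.zeros(8, requires_grad=True)
z_logvar = torch.zeros(8, requires_grad=True)
decode = lambda z, x: x @ z[:4] + z[4:8].sum()    # placeholder decoder
x_v, y_v = torch.randn(5, 4), torch.randn(5)
loss = -per_task_vib_objective(z_mean, z_logvar, decode, x_v, y_v)
loss.backward()

In the actual framework the encoder statistics would themselves be amortized by the support set (e.g. by a GP or a MAML-style adaptation step) rather than being free parameters as in this toy snippet.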
How to Train Your Super-Net: An Analysis of Training Heuristics in Weight-Sharing NAS | 1 INTRODUCTION . Neural architecture search ( NAS ) has received growing attention in the past few years , yielding stateof-the-art performance on several machine learning tasks ( Liu et al. , 2019a ; Wu et al. , 2019 ; Chen et al. , 2019b ; Ryoo et al. , 2020 ) . One of the milestones that led to the popularity of NAS is weight sharing ( Pham et al. , 2018 ; Liu et al. , 2019b ) , which , by allowing all possible network architectures to share the same parameters , has reduced the computational requirements from thousands of GPU hours to just a few . Figure 1 shows the two phases that are common to weight-sharing NAS ( WS-NAS ) algorithms : the search phase , including the design of the search space and the search algorithm ; and the evaluation phase , which encompasses the final training protocol on the proxy task 1 . While most works focus on developing a good sampling algorithm ( Cai et al. , 2019 ; Xie et al. , 2019 ) or improving existing ones ( Zela et al. , 2020a ; Nayman et al. , 2019 ; Li et al. , 2020 ) , they tend to overlook or gloss over important factors related to the design and training of the shared-weight backbone network , i.e . the super-net . For example , the literature encompasses significant variations of learning hyper-parameter settings , batch normalization and dropout usage , capacities for the initial layers of the network , and depth of the super-net . Furthermore , some of these heuristics are directly transferred from standalone network training to super-net training without carefully studying their impact in this drastically different scenario . For example , the fundamental assumption of batch normalization that the input data follows a slowly changing distribution whose statistics can be tracked during training is violated in WS-NAS , but nonetheless typically assumed to hold . In this paper , we revisit and systematically evaluate commonly-used super-net design and training heuristics and uncover the strong influence of certain factors on the success of super-net training . To this end , we leverage three benchmark search spaces , NASBench-101 ( Ying et al. , 2019 ) , NASBench201 ( Dong & Yang , 2020 ) , and DARTS-NDS ( Radosavovic et al. , 2019 ) , for which the ground-truth stand-alone performance of a large number of architectures is available . We report the results of our experiments according to two sets of metrics : i ) metrics that directly measure the quality of the super-net , such as the widely-adopted super-net accuracy 2 and a modified Kendall-Tau correlation between the searched architectures and their ground-truth performance , which we refer to as sparse 1Proxy task refers to the tasks that neural architecture search aims to optimize on . 2The mean accuracy over a small set of randomly sampled architectures during super-net training . 
Kendall-Tau ; ii ) proxy metrics such as the ability to surpass random search and the stand-alone accuracy of the model found by the WS-NAS algorithm . Via our extensive experiments ( over 700 GPU days ) , we uncover that ( i ) the training behavior of a super-net drastically differs from that of a standalone network , e.g. , in terms of feature statistics and loss landscape , thus allowing us to define training factor settings , e.g. , for batch-normalization ( BN ) and learning rate , that are better suited for super-nets ; ( ii ) while some neglected factors , such as the number of training epochs , have a strong impact on the final performance , others , believed to be important , such as path sampling , only have a marginal effect , and some commonly-used heuristics , such as the use of low-fidelity estimates , negatively impact it ; ( iii ) the commonly-adopted super-net accuracy is unreliable to evaluate the super-net quality .
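The sparse Kendall-Tau metric mentioned above builds on the standard Kendall-Tau rank correlation between architectures ranked by their super-net (shared-weight) accuracy and by their ground-truth stand-alone accuracy; the exact "sparse" modification is not spelled out in this excerpt. The sketch below computes the plain correlation on made-up accuracy values, purely for illustration.

def kendall_tau(a, b):
    # Plain Kendall-Tau rank correlation between two equally long score lists.
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Toy example: accuracies of sampled sub-nets evaluated with super-net weights
# vs. their ground-truth stand-alone accuracies from a NAS benchmark (values invented).
supernet_acc = [0.61, 0.72, 0.55, 0.68, 0.70]
ground_truth_acc = [0.88, 0.93, 0.86, 0.90, 0.94]
print(kendall_tau(supernet_acc, ground_truth_acc))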
Altogether , our work is the first to systematically analyze the impact of the diverse factors of super-net design and training , and we uncover the factors that are crucial to design a super-net , as well as the non-important ones . Aggregating these findings allows us to boost the performance of simple weight-sharing random search to the point where it reaches that of complex state-of-the-art NAS algorithms across all tested search spaces . We will release our code and trained models so as to establish a solid baseline to facilitate further research . 2 PRELIMINARIES AND RELATED WORK . We first introduce the necessary concepts that will be used throughout the paper . As shown in Figure 1 ( a ) , weight-sharing NAS algorithms consist of three key components : a search algorithm that samples an architecture from the search space in the form of an encoding , a mapping function fproxy that maps the encoding into its corresponding neural network , and a training protocol for a proxy task Pproxy for which the network is optimized . To train the search algorithm , one needs to additionally define the mapping function fws that generates the shared-weight network . Note that the mapping fproxy frequently differs from fws , since in practice the final model contains many more layers and parameters so as to yield competitive results on the proxy task . After fixing fws , a training protocol Pws is required to learn the super-net . In practice , Pws often hides factors that are critical for the final performance of an approach , such as hyper-parameter settings or the use of data augmentation strategies to achieve state-of-the-art performance ( Liu et al. , 2019b ; Chu et al. , 2019 ; Zela et al. , 2020a ) . Again , Pws may differ from Pproxy , which is used to train the architecture that has been found by the search . For example , our experiments reveal that the learning rate and the total number of epochs frequently differ due to the different training behavior of the super-net and stand-alone architectures . Many strategies have been proposed to implement the search algorithm , such as reinforcement learning ( Zoph & Le , 2017 ; Zoph et al. , 2018 ) , evolutionary algorithms ( Real et al. , 2017 ; Miikkulainen et al. , 2019 ; So et al. , 2019 ; Liu et al. , 2018 ; Lu et al. , 2018 ) , gradient-based optimization ( Liu et al. , 2019b ; Xu et al. , 2020 ; Li et al. , 2020 ) , Bayesian optimization ( Kandasamy et al. , 2018 ; Jin et al. , 2019 ; Zhou et al. , 2019 ; Wang et al. , 2020 ) , and separate performance predictors ( Liu et al. , 2018 ; Luo et al. , 2018 ) . Until very recently , the common trend to evaluate NAS consisted of reporting the searched architecture ’ s performance on the proxy task ( Xie et al. , 2019 ; Real et al. , 2019 ; Ryoo et al. , 2020 ) . This , however , hardly provides real insights about the NAS algorithms themselves , because of the many components involved in them . Many factors that differ from one algorithm to another can influence the performance . In practice , the literature even commonly compares NAS methods that employ different protocols to train the final model . Li & Talwalkar ( 2019 ) and Yu et al . ( 2020b ) were the first to systematically compare different algorithms with the same settings for the proxy task and using several random initializations . Their surprising results revealed that many NAS algorithms produce architectures that do not significantly outperform a randomly-sampled architecture . Yang et al . 
( 2020 ) highlighted the importance of the training protocol Pproxy . They showed that optimizing the training protocol can improve the final architecture performance on the proxy task by three percent on CIFAR-10 . This non-trivial improvement can be achieved regardless of the chosen sampler , which provides clear evidence for the importance of unifying the protocol to build a solid foundation for comparing NAS algorithms . In parallel to this line of research , the recent series of “ NASBench ” works ( Ying et al. , 2019 ; Zela et al. , 2020b ; Dong & Yang , 2020 ) proposed to benchmark NAS approaches by providing a complete , tabular characterization of a search space . This was achieved by training every realizable stand-alone architecture using a fixed protocol Pproxy . Similarly , other works proposed to provide a partial characterization by sampling and training a sufficient number of architectures in a given search space using a fixed protocol ( Radosavovic et al. , 2019 ; Zela et al. , 2020a ; Wang et al. , 2020 ) . While recent advances for systematic evaluation are promising , no work has yet thoroughly studied the influence of the super-net training protocol Pws and the mapping function fws . Previous works ( Zela et al. , 2020a ; Li & Talwalkar , 2019 ) performed hyper-parameter tuning to evaluate their own algorithms , and focused only on a few parameters . We fill this gap by benchmarking different choices of Pws and fws and by proposing novel variations to improve the super-net quality . Recent works have shown that sub-nets of super-net training can surpass some human designed models without retraining ( Yu et al. , 2020a ; Cai et al. , 2020 ) and that reinforcement learning can surpass the performance of random search ( Bender et al. , 2020 ) . However , these findings are still only shown on MobileNet-like search spaces where we only search for the size of convolution kernels and the channel ratio for each layer . This is an effective approach to discover a compact network , but it does not change the fact that on cell-based search space super-net quality remains low . 3 EVALUATION METHODOLOGY . We first isolate 14 factors that need to be considered during the design and training of a super-net , and then introduce the metrics to evaluate the quality of the trained super-net . Note that these factors are agnostic to the search policy that is used after training the super-net . 3.1 DISENTANGLING THE SUPER-NET FROM THE SEARCH ALGORITHM . Our goal is to evaluate the influence of the super-net mapping fws and weight-sharing training protocol Pws . As shown in Figure 2 , fws translates an architecture encoding , which typically consists of a discrete number of choices or parameters , into a neural network . Based on a well-defined mapping , the super-net is a network in which every sub-path has a one-to-one mapping with an architecture encoding ( Pham et al. , 2018 ) . Recent works ( Xu et al. , 2020 ; Li et al. , 2020 ; Ying et al. , Figure 2 : Constructing a super-net 2019 ) separate the encoding into cell parameters , which define the basic building blocks of a network , and macro parameters , which define how cells are assembled into a complete architecture . Weight-sharing mapping fws . To make the search space manageable , all cell and macro parameters are fixed during the search , except for the topology of the cell and its possible operations . However , the exact choices for each of these fixed factors differ between algorithms and search spaces . 
We report the common factors in the left part of Table 1 . They include various implementation choices , e.g. , the use of convolutions with a dynamic number of channels ( Dynamic Channeling ) , super-convolutional layers that support dynamic kernel sizes ( OFA Kernel ) ( Cai et al. , 2020 ) , weight-sharing batchnormalization ( WSBN ) that tracks independent running statistics and affine parameters for different incoming edges ( Luo et al. , 2018 ) , and path and global dropout ( Pham et al. , 2018 ; Luo et al. , 2018 ; Liu et al. , 2019b ) . They also include the use of low-fidelity estimates ( Elsken et al. , 2019 ) to reduce the complexity of super-net training , e.g. , by reducing the number of layers ( Liu et al. , 2019b ) and channels ( Yang et al. , 2020 ; Chen et al. , 2019a ) , the portion of the training set used for super-net training ( Liu et al. , 2019b ) , or the batch size ( Liu et al. , 2019b ; Pham et al. , 2018 ; Yang et al. , 2020 ) . Weight-sharing protocol Pws . Given a mapping fws , different training protocols Pws can be employed to train the super-net . Protocols can differ in the training hyper-parameters and the sampling strategies they rely on . We will evaluate the different hyper-parameter choices listed in the right part of Table 1 . This includes the initial learning rate , the hyper-parameters of batch normalization , the total number of training epochs , and the amount of weight decay . We randomly sample one path to train the super-net ( Guo et al. , 2019 ) , which is also known as single-path one-shot ( SPOS ) or Random-NAS ( Li & Talwalkar , 2019 ) . The reason for this choice is that Random-NAS is equivalent to the initial state of many search algorithms ( Liu et al. , 2019b ; Pham et al. , 2018 ; Luo et al. , 2018 ) , some of which even freeze the sampler training so as to use random sampling to warm-up the super-net ( Xu et al. , 2020 ; Dong & Yang , 2019b ) . Note that we also evaluated two variants of Random-NAS , but found their improvement to be only marginal . Please see Appendix C.2 for more detail . In our experiments , for the sake of reproducibility , we ensure that Pws and Pproxy , as well as fws and fproxy , are as close to each other as possible . For the hyper-parameters of Pws , we cross-validate each factor following the order in Table 1 , and after each validation , use the value that yields the best performance in Pproxy . For all other factors , we change one factor at a time . Search spaces . We use three commonly-used search spaces , for which a large number of stand-alone architectures have been trained and evaluated on CIFAR-10 ( Krizhevsky et al. , 2009 ) to obtain their ground-truth performance . In particular , we use NASBench-101 ( Ying et al. , 2019 ) , which consists of 423 , 624 architectures and is compatible with weight-sharing NAS ( Yu et al. , 2020b ; Zela et al. , 2020b ) ; NASBench-201 ( Dong & Yang , 2020 ) , which contains more operations than NASBench-101 but fewer nodes ; and DARTS-NDS ( Radosavovic et al. , 2019 ) that contains over 1013 architectures of which a subset of 5000 models was sampled and trained in a stand-alone fashion . See Appendix A.2 for a detailed discussion . | Neural Architecture Search (NAS) aims to find a model with the best possible accuracy (or best possible accuracy/size tradeoff) from within a human-defined search space. 
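A minimal sketch of the single-path one-shot (Random-NAS) super-net training referred to above is given below. The toy operation set, layer sizes, optimizer settings, and random data are assumptions for illustration and do not correspond to the NASBench or DARTS-NDS search spaces.

import random
import torch
import torch.nn as nn

class TinySuperNet(nn.Module):
    # A toy super-net: each of two "edges" holds several candidate operations;
    # exactly one operation per edge is activated for a given sampled path.
    def __init__(self, dim=16, num_classes=10):
        super().__init__()
        make_ops = lambda: nn.ModuleList([
            nn.Linear(dim, dim),
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),
            nn.Identity(),
        ])
        self.edges = nn.ModuleList([make_ops(), make_ops()])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x, arch):
        # arch is an architecture encoding: one operation index per edge.
        for edge, op_idx in zip(self.edges, arch):
            x = edge[op_idx](x)
        return self.head(x)

net = TinySuperNet()
opt = torch.optim.SGD(net.parameters(), lr=0.05, momentum=0.9)
for step in range(100):
    arch = [random.randrange(3) for _ in net.edges]   # single-path (SPOS) sampling
    x = torch.randn(32, 16)                           # placeholder batch
    y = torch.randint(0, 10, (32,))
    loss = nn.functional.cross_entropy(net(x, arch), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

Because only the sampled path is active at each step, all candidate operations gradually share the training signal, which is the property the factors studied in this paper are meant to support.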
One popular strategy for speeding up NAS is to train a one-shot model -- a single set of shared weights -- that can then be used to evaluate any candidate architecture within the search space without retraining or fine-tuning. The submission investigates how various decisions made when training a one-shot model can affect its ability to rank different candidate architectures within a search space. | SP:cf90c83667e2974b91b0d93c12c5987491314005 |
How to Train Your Super-Net: An Analysis of Training Heuristics in Weight-Sharing NAS | 1 INTRODUCTION . Neural architecture search ( NAS ) has received growing attention in the past few years , yielding stateof-the-art performance on several machine learning tasks ( Liu et al. , 2019a ; Wu et al. , 2019 ; Chen et al. , 2019b ; Ryoo et al. , 2020 ) . One of the milestones that led to the popularity of NAS is weight sharing ( Pham et al. , 2018 ; Liu et al. , 2019b ) , which , by allowing all possible network architectures to share the same parameters , has reduced the computational requirements from thousands of GPU hours to just a few . Figure 1 shows the two phases that are common to weight-sharing NAS ( WS-NAS ) algorithms : the search phase , including the design of the search space and the search algorithm ; and the evaluation phase , which encompasses the final training protocol on the proxy task 1 . While most works focus on developing a good sampling algorithm ( Cai et al. , 2019 ; Xie et al. , 2019 ) or improving existing ones ( Zela et al. , 2020a ; Nayman et al. , 2019 ; Li et al. , 2020 ) , they tend to overlook or gloss over important factors related to the design and training of the shared-weight backbone network , i.e . the super-net . For example , the literature encompasses significant variations of learning hyper-parameter settings , batch normalization and dropout usage , capacities for the initial layers of the network , and depth of the super-net . Furthermore , some of these heuristics are directly transferred from standalone network training to super-net training without carefully studying their impact in this drastically different scenario . For example , the fundamental assumption of batch normalization that the input data follows a slowly changing distribution whose statistics can be tracked during training is violated in WS-NAS , but nonetheless typically assumed to hold . In this paper , we revisit and systematically evaluate commonly-used super-net design and training heuristics and uncover the strong influence of certain factors on the success of super-net training . To this end , we leverage three benchmark search spaces , NASBench-101 ( Ying et al. , 2019 ) , NASBench201 ( Dong & Yang , 2020 ) , and DARTS-NDS ( Radosavovic et al. , 2019 ) , for which the ground-truth stand-alone performance of a large number of architectures is available . We report the results of our experiments according to two sets of metrics : i ) metrics that directly measure the quality of the super-net , such as the widely-adopted super-net accuracy 2 and a modified Kendall-Tau correlation between the searched architectures and their ground-truth performance , which we refer to as sparse 1Proxy task refers to the tasks that neural architecture search aims to optimize on . 2The mean accuracy over a small set of randomly sampled architectures during super-net training . 
Kendall-Tau ; ii ) proxy metrics such as the ability to surpass random search and the stand-alone accuracy of the model found by the WS-NAS algorithm . Via our extensive experiments ( over 700 GPU days ) , we uncover that ( i ) the training behavior of a super-net drastically differs from that of a standalone network , e.g. , in terms of feature statistics and loss landscape , thus allowing us to define training factor settings , e.g. , for batch-normalization ( BN ) and learning rate , that are better suited for super-nets ; ( ii ) while some neglected factors , such as the number of training epochs , have a strong impact on the final performance , others , believed to be important , such as path sampling , only have a marginal effect , and some commonly-used heuristics , such as the use of low-fidelity estimates , negatively impact it ; ( iii ) the commonly-adopted super-net accuracy is unreliable to evaluate the super-net quality .
Altogether , our work is the first to systematically analyze the impact of the diverse factors of super-net design and training , and we uncover the factors that are crucial to design a super-net , as well as the non-important ones . Aggregating these findings allows us to boost the performance of simple weight-sharing random search to the point where it reaches that of complex state-of-the-art NAS algorithms across all tested search spaces . We will release our code and trained models so as to establish a solid baseline to facilitate further research . 2 PRELIMINARIES AND RELATED WORK . We first introduce the necessary concepts that will be used throughout the paper . As shown in Figure 1 ( a ) , weight-sharing NAS algorithms consist of three key components : a search algorithm that samples an architecture from the search space in the form of an encoding , a mapping function fproxy that maps the encoding into its corresponding neural network , and a training protocol for a proxy task Pproxy for which the network is optimized . To train the search algorithm , one needs to additionally define the mapping function fws that generates the shared-weight network . Note that the mapping fproxy frequently differs from fws , since in practice the final model contains many more layers and parameters so as to yield competitive results on the proxy task . After fixing fws , a training protocol Pws is required to learn the super-net . In practice , Pws often hides factors that are critical for the final performance of an approach , such as hyper-parameter settings or the use of data augmentation strategies to achieve state-of-the-art performance ( Liu et al. , 2019b ; Chu et al. , 2019 ; Zela et al. , 2020a ) . Again , Pws may differ from Pproxy , which is used to train the architecture that has been found by the search . For example , our experiments reveal that the learning rate and the total number of epochs frequently differ due to the different training behavior of the super-net and stand-alone architectures . Many strategies have been proposed to implement the search algorithm , such as reinforcement learning ( Zoph & Le , 2017 ; Zoph et al. , 2018 ) , evolutionary algorithms ( Real et al. , 2017 ; Miikkulainen et al. , 2019 ; So et al. , 2019 ; Liu et al. , 2018 ; Lu et al. , 2018 ) , gradient-based optimization ( Liu et al. , 2019b ; Xu et al. , 2020 ; Li et al. , 2020 ) , Bayesian optimization ( Kandasamy et al. , 2018 ; Jin et al. , 2019 ; Zhou et al. , 2019 ; Wang et al. , 2020 ) , and separate performance predictors ( Liu et al. , 2018 ; Luo et al. , 2018 ) . Until very recently , the common trend to evaluate NAS consisted of reporting the searched architecture ’ s performance on the proxy task ( Xie et al. , 2019 ; Real et al. , 2019 ; Ryoo et al. , 2020 ) . This , however , hardly provides real insights about the NAS algorithms themselves , because of the many components involved in them . Many factors that differ from one algorithm to another can influence the performance . In practice , the literature even commonly compares NAS methods that employ different protocols to train the final model . Li & Talwalkar ( 2019 ) and Yu et al . ( 2020b ) were the first to systematically compare different algorithms with the same settings for the proxy task and using several random initializations . Their surprising results revealed that many NAS algorithms produce architectures that do not significantly outperform a randomly-sampled architecture . Yang et al . 
( 2020 ) highlighted the importance of the training protocol Pproxy . They showed that optimizing the training protocol can improve the final architecture performance on the proxy task by three percent on CIFAR-10 . This non-trivial improvement can be achieved regardless of the chosen sampler , which provides clear evidence for the importance of unifying the protocol to build a solid foundation for comparing NAS algorithms . In parallel to this line of research , the recent series of “ NASBench ” works ( Ying et al. , 2019 ; Zela et al. , 2020b ; Dong & Yang , 2020 ) proposed to benchmark NAS approaches by providing a complete , tabular characterization of a search space . This was achieved by training every realizable stand-alone architecture using a fixed protocol Pproxy . Similarly , other works proposed to provide a partial characterization by sampling and training a sufficient number of architectures in a given search space using a fixed protocol ( Radosavovic et al. , 2019 ; Zela et al. , 2020a ; Wang et al. , 2020 ) . While recent advances for systematic evaluation are promising , no work has yet thoroughly studied the influence of the super-net training protocol Pws and the mapping function fws . Previous works ( Zela et al. , 2020a ; Li & Talwalkar , 2019 ) performed hyper-parameter tuning to evaluate their own algorithms , and focused only on a few parameters . We fill this gap by benchmarking different choices of Pws and fws and by proposing novel variations to improve the super-net quality . Recent works have shown that sub-nets of super-net training can surpass some human designed models without retraining ( Yu et al. , 2020a ; Cai et al. , 2020 ) and that reinforcement learning can surpass the performance of random search ( Bender et al. , 2020 ) . However , these findings are still only shown on MobileNet-like search spaces where we only search for the size of convolution kernels and the channel ratio for each layer . This is an effective approach to discover a compact network , but it does not change the fact that on cell-based search space super-net quality remains low . 3 EVALUATION METHODOLOGY . We first isolate 14 factors that need to be considered during the design and training of a super-net , and then introduce the metrics to evaluate the quality of the trained super-net . Note that these factors are agnostic to the search policy that is used after training the super-net . 3.1 DISENTANGLING THE SUPER-NET FROM THE SEARCH ALGORITHM . Our goal is to evaluate the influence of the super-net mapping fws and weight-sharing training protocol Pws . As shown in Figure 2 , fws translates an architecture encoding , which typically consists of a discrete number of choices or parameters , into a neural network . Based on a well-defined mapping , the super-net is a network in which every sub-path has a one-to-one mapping with an architecture encoding ( Pham et al. , 2018 ) . Recent works ( Xu et al. , 2020 ; Li et al. , 2020 ; Ying et al. , Figure 2 : Constructing a super-net 2019 ) separate the encoding into cell parameters , which define the basic building blocks of a network , and macro parameters , which define how cells are assembled into a complete architecture . Weight-sharing mapping fws . To make the search space manageable , all cell and macro parameters are fixed during the search , except for the topology of the cell and its possible operations . However , the exact choices for each of these fixed factors differ between algorithms and search spaces . 
We report the common factors in the left part of Table 1 . They include various implementation choices , e.g. , the use of convolutions with a dynamic number of channels ( Dynamic Channeling ) , super-convolutional layers that support dynamic kernel sizes ( OFA Kernel ) ( Cai et al. , 2020 ) , weight-sharing batchnormalization ( WSBN ) that tracks independent running statistics and affine parameters for different incoming edges ( Luo et al. , 2018 ) , and path and global dropout ( Pham et al. , 2018 ; Luo et al. , 2018 ; Liu et al. , 2019b ) . They also include the use of low-fidelity estimates ( Elsken et al. , 2019 ) to reduce the complexity of super-net training , e.g. , by reducing the number of layers ( Liu et al. , 2019b ) and channels ( Yang et al. , 2020 ; Chen et al. , 2019a ) , the portion of the training set used for super-net training ( Liu et al. , 2019b ) , or the batch size ( Liu et al. , 2019b ; Pham et al. , 2018 ; Yang et al. , 2020 ) . Weight-sharing protocol Pws . Given a mapping fws , different training protocols Pws can be employed to train the super-net . Protocols can differ in the training hyper-parameters and the sampling strategies they rely on . We will evaluate the different hyper-parameter choices listed in the right part of Table 1 . This includes the initial learning rate , the hyper-parameters of batch normalization , the total number of training epochs , and the amount of weight decay . We randomly sample one path to train the super-net ( Guo et al. , 2019 ) , which is also known as single-path one-shot ( SPOS ) or Random-NAS ( Li & Talwalkar , 2019 ) . The reason for this choice is that Random-NAS is equivalent to the initial state of many search algorithms ( Liu et al. , 2019b ; Pham et al. , 2018 ; Luo et al. , 2018 ) , some of which even freeze the sampler training so as to use random sampling to warm-up the super-net ( Xu et al. , 2020 ; Dong & Yang , 2019b ) . Note that we also evaluated two variants of Random-NAS , but found their improvement to be only marginal . Please see Appendix C.2 for more detail . In our experiments , for the sake of reproducibility , we ensure that Pws and Pproxy , as well as fws and fproxy , are as close to each other as possible . For the hyper-parameters of Pws , we cross-validate each factor following the order in Table 1 , and after each validation , use the value that yields the best performance in Pproxy . For all other factors , we change one factor at a time . Search spaces . We use three commonly-used search spaces , for which a large number of stand-alone architectures have been trained and evaluated on CIFAR-10 ( Krizhevsky et al. , 2009 ) to obtain their ground-truth performance . In particular , we use NASBench-101 ( Ying et al. , 2019 ) , which consists of 423 , 624 architectures and is compatible with weight-sharing NAS ( Yu et al. , 2020b ; Zela et al. , 2020b ) ; NASBench-201 ( Dong & Yang , 2020 ) , which contains more operations than NASBench-101 but fewer nodes ; and DARTS-NDS ( Radosavovic et al. , 2019 ) that contains over 1013 architectures of which a subset of 5000 models was sampled and trained in a stand-alone fashion . See Appendix A.2 for a detailed discussion . | This work analyzes commonly used heuristics for training the supernet in weight sharing NAS. The authors first proposes a new metric, sparse Kendall-Tau, to measure the quality of the supernet. Then extensive experiments are conducted on three NAS benchmarks to empirically evaluate the heuristics, and pick the best settings. 
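The sequential cross-validation of training factors described above can be sketched as a simple per-factor sweep: validate one factor at a time and freeze it at its best value before moving to the next. In the snippet below the factor names, candidate values, and the evaluation function are placeholders, not the actual Table 1 factors or real super-net training runs.

def sweep_factors_sequentially(factor_grid, evaluate):
    # Cross-validate one factor at a time, in the given order; after each sweep,
    # freeze that factor at the value with the best score before moving on.
    config = {name: values[0] for name, values in factor_grid.items()}
    for name, values in factor_grid.items():
        scores = {}
        for v in values:
            trial = dict(config, **{name: v})
            scores[v] = evaluate(trial)
        config[name] = max(scores, key=scores.get)
    return config

# Placeholder evaluation standing in for "train the super-net and measure its quality".
def fake_evaluate(cfg):
    return -abs(cfg["lr"] - 0.025) - 0.1 * abs(cfg["epochs"] - 400) / 400 - 0.05 * cfg["weight_decay"]

grid = {"lr": [0.01, 0.025, 0.05, 0.1], "epochs": [100, 200, 400], "weight_decay": [0.0, 1e-4, 3e-4]}
print(sweep_factors_sequentially(grid, fake_evaluate))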
To highlight the significance of the training quality of supernet, the author showed that random search, when combined with the best settings, can performs competitive to SOTA results. | SP:cf90c83667e2974b91b0d93c12c5987491314005 |
Non-iterative Parallel Text Generation via Glancing Transformer | 1 INTRODUCTION . Non-autoregressive transformer ( NAT ) has attracted wide attention in neural machine translation ( Gu et al. , 2018 ) , which generates sentences simultaneously rather than sequentially . To enable parallel decoding , NAT imposes a conditional independence assumption among words in the output sentences , which leads to significantly faster inference speed ( almost a dozen times speed-up ) than the autoregressive Transformer ( Vaswani et al. , 2017 ) . However , NAT still falls behind autoregressive Transformer ( AT ) in the quality of output sentences , such as BLEU ( Papineni et al. , 2002 ) for machine translation . We blame it for the imposed conditional independence assumption , which prevents NAT models from explicitly learning the word dependencies in the output sentence . Note that such word dependency is crucial , and it is explicitly learned in the AT model through the autoregressive language models ( left-to-right , see Figure 1a and Figure 1b ) . Recently , Ghazvininejad et al . ( 2019 ) ; Gu et al . ( 2019 ) propose to employ the Masked Language Model ( MLM , Devlin et al. , 2019 ) in NAT , which includes word dependency modeling in an iterative fashion ( see Figure 1c ) , therefore yielding quite competitive results compared to AT . Specifically , such iterative models randomly mask words in the reference and predict these masked words conditioned on unmasked ones during training . In this manner , iterative models are trained to explicitly capture the dependencies between masked words and unmasked words . However , these iterative approaches still give poor results with one decoding iteration and have to perform multiple iterations during inference , namely iteratively refining the generated outputs of the previous iteration . Such iterative process is quite time-consuming , which partly sacrifices the speed merit of NAT . How to abandon the iterative process while enjoy the benefits of explicitly modeling word dependencies in NAT is still an open problem . In this paper , we argue that the major culprit of the problem that mask language models have to be used together with iterative inference , is the sampling strategy of masking words in MLM . In particular , MLM employs a fixed uniform strategy for masking words randomly during training , which prevents the model from effectively learning word dependencies for one-iteration generation . For example , at the beginning of training , the NAT model would be poorly tuned and we should mask fewer words . If we mask too many words , it would be difficult for the NAT model to correctly predict the masked words . On the contrary , if we mask too little words at the end phase of training , the resulting NAT model is rarely trained to predict the whole sentences , and can only predict some sentence fragments . 
In such a case , to accurately generate the whole sentence in inference , the NAT model has to generate the sentence fragments iteratively . Figure 1 : Different language modeling approaches of different text generation models . ( a ) Left-to-right LM in AT ; ( b ) Implicit LM in NAT ; ( c ) Masked LM in Iterative NAT ; ( d ) Glancing LM in GLAT . To this end , the sampling strategy is crucial for the training of NAT . To address the above issues , we propose a simple yet effective approach called Glancing Transformer ( GLAT ) , which is equipped with the proposed Glancing Language Model ( GLM ) for non-iterative parallel text generation , achieving significant improvements upon strong baselines . Intuitively , GLM adopts an adaptive glancing sampling strategy , which glances at some fragments of the reference if the reference is too difficult to fit in the training of NAT .
Correspondingly , when the model is well tuned , it will adaptively reduce the percentage of glancing sampling , making sure that the resulting model could learn to generate the whole sentence in a one-iteration fashion . Specifically , our proposed LM differs from MLM in two aspects . Firstly , GLM proposes an adaptive glancing sampling strategy , which enables GLAT to generate sentences in a one-iteration way , working by gradual training instead of iterative inference ( see Figure 1d ) . Generally , GLM is quite similar to curriculum learning ( Bengio et al. , 2009 ) in spirit , namely first learning to generate some fragments and gradually moving to learn the whole sentences ( from easy to hard ) . To achieve the adaptive glancing sampling , GLM performs decoding twice in training . The first decoding is the same as the vanilla NAT , and the prediction accuracy indicates whether the current reference is “ difficult ” to fit . In the second decoding , GLM gets words of the reference via glancing sampling according to the first decoding , and learns to predict the remaining words that are not sampled . Note that only the second decoding will update the model parameters . Secondly , instead of using the [ MASK ] token , GLM directly uses representations from the encoder at the corresponding positions , which is more natural and could enhance the interactions between sampled words and signals from the encoder . Experimental results show that GLAT obtains significant improvements ( about 5 BLEU ) on standard benchmarks compared to the vanilla NAT , without losing inference speed-up . GLAT achieves competitive results against iterative approaches like Mask-Predict ( Ghazvininejad et al. , 2019 ) , even outperforming the Mask-Predict model on WMT14 DE-EN and WMT16 RO-EN . Compared to the strong AT baseline , GLAT can still close the performance gap to within 1 BLEU point while keeping a 7.9× speed-up . Empirically , we even find that GLAT outperforms AT when the length of the reference is less than 20 on WMT14 DE-EN . We speculate this is because GLM could capture bidirectional context for generation while its left-to-right counterpart is only unidirectional , which indicates the potential of parallel generation approaches like GLAT . 2 TEXT GENERATION VIA CONDITIONAL LANGUAGE MODELING . In this section , we compare the different language models used in different text generation approaches . Formally , consider a sequence-to-sequence model ( Cho et al. , 2014 ; Bahdanau et al. , 2014 ; Vaswani et al. , 2017 ) for predicting $Y = \{ y_1 , y_2 , \ldots , y_T \}$ given the input sentence $X = \{ x_1 , x_2 , \ldots , x_N \}$ . In the AT model , the training objective is maximizing the log-likelihood with the autoregressive decomposition : $\mathcal{L}_{\mathrm{AT}} = \sum_{t=1}^{T} \log p ( y_t \mid y_{<t} , X ; \theta )$ , ( 1 ) where the word $y_t$ is conditioned on the target prefix $y_{<t} = \{ [\mathrm{BOS}] , y_1 , \ldots , y_{t-1} \}$ and the source input $X$ . AT models sentences from left to right , therefore word dependencies are learned in a unidirectional way . [ Figure : comparison of Left-to-Right Language Modeling ( AT decoder ) , Masked Language Modeling ( NAT decoder with random masking ) , and Glancing Language Modeling ( NAT decoder with glancing sampling ) . ]
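A schematic of the glancing sampling step described above is sketched below. It uses the Hamming distance between the first-pass prediction and the reference to decide how many reference words to glance; the sampling ratio, the uniform choice of positions, and the "<h>" placeholder standing in for the copied encoder representations are simplifications and assumptions, not the paper's exact procedure.

import random

def glancing_sample(reference, first_pass_prediction, sampling_ratio=0.5):
    # Hamming distance between the first-pass NAT prediction and the reference.
    distance = sum(p != r for p, r in zip(first_pass_prediction, reference))
    # The harder the sentence (larger distance), the more reference words are glanced.
    num_glance = int(sampling_ratio * distance)
    glanced_positions = set(random.sample(range(len(reference)), num_glance))
    # Decoder inputs for the second pass: glanced positions take the reference word,
    # the remaining positions keep a generic placeholder input (here the string "<h>").
    second_pass_inputs = [reference[i] if i in glanced_positions else "<h>" for i in range(len(reference))]
    # Only the positions that were not glanced contribute to the training loss.
    target_positions = [i for i in range(len(reference)) if i not in glanced_positions]
    return second_pass_inputs, target_positions

reference = ["the", "cat", "sat", "on", "the", "mat"]
prediction = ["the", "dog", "sat", "in", "a", "mat"]
print(glancing_sample(reference, prediction))

As training progresses and the first-pass prediction improves, the Hamming distance shrinks, so fewer reference words are glanced, which is the adaptive behavior the text describes.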
NAT , on the other hand , imposes a conditional independence assumption among the words in a sentence with the aim of enabling parallel generation : $\mathcal{L}_{\mathrm{NAT}} = \sum_{t=1}^{T} \log p ( y_t \mid X ; \theta )$ . ( 2 ) Note that , in NAT , the different $y_t$ in $Y$ are predicted simultaneously , which removes the interactions among target words from the NAT modeling . Thus , to generate fluent and faithful sentences , NAT has to model the target word dependencies implicitly , which makes the learning process quite challenging . To explicitly model the target word dependencies , previous iterative approaches , such as Mask-Predict , introduce masked language models ( MLM ) in NAT . MLM models word dependencies by learning to predict the masked words conditioned on the unmasked ones : $\mathcal{L}_{\mathrm{MLM}} = \sum_{ y_t \in \mathrm{RM} ( Y ) } \log p ( y_t \mid \Phi ( Y , \mathrm{RM} ( Y ) ) , X ; \theta )$ . ( 3 ) Here $\mathrm{RM} ( Y )$ returns some randomly sampled words from $Y$ , and $\Phi$ replaces these sampled words in $Y$ with the [ MASK ] token . For example , if $\mathrm{RM} ( Y ) = \{ y_2 , y_3 \}$ , then $\Phi ( Y , \mathrm{RM} ( Y ) ) = \{ y_1 , [\mathrm{MASK}] , [\mathrm{MASK}] , y_4 , \ldots \}$ . To this end , the training objective is to predict the masked words $\mathrm{RM} ( Y )$ given the source sentence $X$ and the unmasked target words . As mentioned in the Introduction , though MLM can explicitly model target word dependencies , it can hardly generate satisfactory sentences without iteration . A better language modeling approach should be explored for parallel text generation in the one-iteration setting . | The authors propose Glancing Transformer for single-step parallel text generation. The approach is inspired by curriculum learning, i.e., the training task is adaptively controlled based on the model's current performance. Specifically, the paper proposes a glancing strategy which compares the model's generation with the reference sentence and forms a sequence that is partially masked. The number of masked tokens in this sequence depends on the similarity between the model's generation and the reference sentence. The model is then trained to complete the partially masked sequence. | SP:dbc876d7c158f89a0f22a4688ff05abca2ad5ddc
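For contrast with the glancing strategy, the fixed uniform masking behind the MLM objective in equation 3 above can be sketched as follows; the masking probability and token strings are illustrative choices only, not the schedule used by any particular iterative NAT model.

import random

def mlm_training_example(reference, mask_prob=0.15, mask_token="[MASK]"):
    # Uniformly sample the positions RM(Y) to mask, as in equation (3).
    masked_positions = [i for i in range(len(reference)) if random.random() < mask_prob]
    # Phi(Y, RM(Y)): replace the sampled words with the [MASK] token.
    inputs = [mask_token if i in masked_positions else w for i, w in enumerate(reference)]
    # The model is trained to predict only the masked words, given these inputs and the source.
    targets = {i: reference[i] for i in masked_positions}
    return inputs, targets

print(mlm_training_example(["the", "cat", "sat", "on", "the", "mat"], mask_prob=0.5))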
Non-iterative Parallel Text Generation via Glancing Transformer | 1 INTRODUCTION . Non-autoregressive transformer ( NAT ) has attracted wide attention in neural machine translation ( Gu et al. , 2018 ) , which generates sentences simultaneously rather than sequentially . To enable parallel decoding , NAT imposes a conditional independence assumption among words in the output sentences , which leads to significantly faster inference speed ( almost a dozen times speed-up ) than the autoregressive Transformer ( Vaswani et al. , 2017 ) . However , NAT still falls behind autoregressive Transformer ( AT ) in the quality of output sentences , such as BLEU ( Papineni et al. , 2002 ) for machine translation . We blame it for the imposed conditional independence assumption , which prevents NAT models from explicitly learning the word dependencies in the output sentence . Note that such word dependency is crucial , and it is explicitly learned in the AT model through the autoregressive language models ( left-to-right , see Figure 1a and Figure 1b ) . Recently , Ghazvininejad et al . ( 2019 ) ; Gu et al . ( 2019 ) propose to employ the Masked Language Model ( MLM , Devlin et al. , 2019 ) in NAT , which includes word dependency modeling in an iterative fashion ( see Figure 1c ) , therefore yielding quite competitive results compared to AT . Specifically , such iterative models randomly mask words in the reference and predict these masked words conditioned on unmasked ones during training . In this manner , iterative models are trained to explicitly capture the dependencies between masked words and unmasked words . However , these iterative approaches still give poor results with one decoding iteration and have to perform multiple iterations during inference , namely iteratively refining the generated outputs of the previous iteration . Such iterative process is quite time-consuming , which partly sacrifices the speed merit of NAT . How to abandon the iterative process while enjoy the benefits of explicitly modeling word dependencies in NAT is still an open problem . In this paper , we argue that the major culprit of the problem that mask language models have to be used together with iterative inference , is the sampling strategy of masking words in MLM . In particular , MLM employs a fixed uniform strategy for masking words randomly during training , which prevents the model from effectively learning word dependencies for one-iteration generation . For example , at the beginning of training , the NAT model would be poorly tuned and we should mask fewer words . If we mask too many words , it would be difficult for the NAT model to correctly predict the masked words . On the contrary , if we mask too little words at the end phase of training , the resulting NAT model is rarely trained to predict the whole sentences , and can only predict some sentence fragments . 
In such a case , to accurately generate the whole sentence in inference , the NAT model has to generate the sentence fragments iteratively . [ Figure 1 : Different language modeling approaches of different text generation models . ( a ) Left-to-right LM in AT ; ( b ) Implicit LM in NAT ; ( c ) Masked LM in Iterative NAT ; ( d ) Glancing LM in GLAT . ] Hence , the sampling strategy is crucial for the training of NAT . To address the above issues , we propose a simple yet effective approach called Glancing Transformer ( GLAT ) , which is equipped with the proposed Glancing Language Model ( GLM ) for non-iterative parallel text generation , achieving significant improvements upon strong baselines . Intuitively , GLM adopts an adaptive glancing sampling strategy , which glances at some fragments of the reference if the reference is too difficult to fit in the training of NAT .
Correspondingly , when the model is well tuned , it will adaptively reduce the percentage of glancing sampling , making sure that the resulting model could learn to generate the whole sentence in a one-iteration fashion . Specifically , our proposed LM differs from MLM in two aspects . Firstly , GLM proposes an adaptive glancing sampling strategy , which enables GLAT to generate sentences in a one-iteration way , working by gradual training instead of iterative inference ( see Figure 1d ) . Generally , GLM is quite similar to curriculum learning ( Bengio et al. , 2009 ) in spirit , namely first learning to generate some fragments and gradually moving to learn the whole sentences ( from easy to hard ) . To achieve the adaptive glancing sampling , GLM performs decoding twice in training . The first decoding is the same as the vanilla NAT , and the prediction accuracy indicates whether the current reference is “ difficult ” for fitting . In the second decoding , GLM gets words of the reference via glancing sampling according to the first decoding , and learns to predict the remaining words that are not sampled . Note that only the second decoding will update the model parameters . Secondly , instead of using the [ MASK ] token , GLM directly uses representations from the encoder at corresponding positions , which is more natural and could enhance the interactions between sampled words and signals from the encoder . Experimental results show that GLAT obtains significant improvements ( about 5 BLEU ) on standard benchmarks compared to the vanilla NAT , without losing inference speed-up . GLAT achieves competitive results against iterative approaches like Mask-Predict ( Ghazvininejad et al. , 2019 ) , even outperforming the Mask-Predict model on WMT14 DE-EN and WMT16 RO-EN . Compared to the strong AT baseline , GLAT can still close the performance gap within 1 BLEU point while keeping 7.9× speed-up . Empirically , we even find that GLAT outperforms AT when the length of the reference is less than 20 on WMT14 DE-EN . We speculate this is because GLM could capture bidirectional context for generation while its left-to-right counterpart is only unidirectional , which indicates the potential of parallel generation approaches like GLAT . 2 TEXT GENERATION VIA CONDITIONAL LANGUAGE MODELING . In this section , we compare different language models used in different text generation approaches . Formally , consider a sequence-to-sequence model ( Cho et al. , 2014 ; Bahdanau et al. , 2014 ; Vaswani et al. , 2017 ) for predicting $Y = \{ y_1 , y_2 , \ldots , y_T \}$ given the input sentence $X = \{ x_1 , x_2 , \ldots , x_N \}$ . In the AT model , the training objective is maximizing the log-likelihood with autoregressive decomposition : $\mathcal{L}_{\mathrm{AT}} = \sum_{t=1}^{T} \log p ( y_t \mid y_{<t} , X ; \theta )$ , ( 1 ) where the word $y_t$ is conditioned on the target prefix $y_{<t} = \{ [ \mathrm{BOS} ] , y_1 , \ldots , y_{t-1} \}$ and the source input $X$ . AT models sentences from left to right , therefore word dependencies are learned in a unidirectional way .
NAT , on the other hand , incorporates a conditional independence assumption among words in a sentence with the aim of enabling parallel generation : $\mathcal{L}_{\mathrm{NAT}} = \sum_{t=1}^{T} \log p ( y_t \mid X ; \theta )$ . ( 2 ) Note that , in NAT , different $y_t$ in $Y$ are predicted simultaneously , which removes the interactions among target words in the NAT modeling . Thus , to generate fluent and faithful sentences , NAT has to model the target word dependencies implicitly , which makes the learning process quite challenging . To explicitly model the target word dependencies , previous iterative approaches , such as Mask-Predict , introduce masked language models ( MLM ) in NAT . MLM models word dependencies by learning to predict the masked words conditioned on the unmasked ones : $\mathcal{L}_{\mathrm{MLM}} = \sum_{y_t \in \mathrm{RM} ( Y )} \log p ( y_t \mid \Phi ( Y , \mathrm{RM} ( Y ) ) , X ; \theta )$ . ( 3 ) Here $\mathrm{RM} ( Y )$ returns some randomly sampled words from $Y$ , and $\Phi$ replaces these sampled words in $Y$ with the [ MASK ] token . For example , if $\mathrm{RM} ( Y ) = \{ y_2 , y_3 \}$ , then $\Phi ( Y , \mathrm{RM} ( Y ) ) = \{ y_1 , [ \mathrm{MASK} ] , [ \mathrm{MASK} ] , y_4 , \ldots \}$ . The training objective is thus to predict the masked words $\mathrm{RM} ( Y )$ given the source sentence $X$ and the unmasked target words . As mentioned in the Introduction , though MLM can explicitly model target word dependencies , it can hardly generate satisfactory sentences without iteration . A better language modeling approach should be explored for parallel text generation in a one-iteration way . | This submission improves non-autoregressive translation (NAT) by proposing a non-iterative parallel text generation model called Glancing Transformer (GLAT), which includes the explicit word dependency modeling in NAT via a proposed Glancing Language Model (GLM). Compared to previous work, biggest contribution of the proposed method is that it improves the training of NAT model with a similar idea of curriculum learning, while keeping the inference time unchanged, setting a significant improvement for non-iterative NAT without reranking. It would be a good baseline for future research on non-iterative NAT models. | SP:dbc876d7c158f89a0f22a4688ff05abca2ad5ddc |
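The glancing sampling described above ( first a vanilla NAT pass , then feeding back a number of reference words that grows with the Hamming distance between prediction and reference ) can be sketched as follows ; the linear schedule through `ratio` and the plain Python data structures are assumptions made for illustration , not the paper 's exact recipe .

```python
import random

def glancing_sample(reference, first_pass_pred, ratio=0.5):
    """Adaptive glancing sampling: the worse the first (non-updating) decoding
    pass is, the more reference words are fed back as inputs for the second pass."""
    hamming = sum(p != r for p, r in zip(first_pass_pred, reference))
    num_glance = int(ratio * hamming)
    glanced = set(random.sample(range(len(reference)), k=num_glance)) if num_glance else set()
    # glanced positions are fed the reference word; the remaining positions keep the usual
    # NAT decoder inputs (encoder-derived representations rather than a [MASK] token) and
    # are the prediction targets of the second pass
    inputs = [reference[i] if i in glanced else None for i in range(len(reference))]
    targets = [i for i in range(len(reference)) if i not in glanced]
    return inputs, targets

ref = ["y1", "y2", "y3", "y4", "y5"]
pred = ["y1", "x", "x", "y4", "x"]              # Hamming distance 3
inputs, targets = glancing_sample(ref, pred)    # glance at roughly 1 word, predict the rest
```

As the model improves , the Hamming distance shrinks and fewer words are glanced , so training gradually approaches full one-pass sentence generation .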
Sparse matrix products for neural network compression | 1 Introduction . The success of neural networks in the processing of structured data is in part due to their over-parametrization which plays a key role in their ability to learn rich features from the data ( Neyshabur et al. , 2018 ) . Unfortunately , this also makes most state-of-the-art models so huge that they are expensive to store and impossible to operate on devices with limited resources ( memory , computing capacity ) or that can not integrate GPUs ( Cheng et al. , 2017 ) . This problem has led to a popular line of research for “ neural networks compression ” , which aims at building models with few parameters while preserving their accuracy . State of the art techniques for neural network compression . Popular matrix or tensor decomposition methods including Singular Value Decomposition ( SVD ) , CANDECOMP/PARAFAC ( CP ) and Tucker have been used to address the problem of model compression by a low-rank approximation of the neural network ’ s weights after learning . Sainath et al . ( 2013 ) describe a method based on SVD to compress weight matrices in fully connected layers . Denton et al . ( 2014 ) ; Lebedev et al . ( 2015 ) ; Kim et al . ( 2016 ) generalize this idea to convolutional layers and then reduce the memory footprint of convolution kernels by using higher-order low-rank decompositions such as CP or Tucker decompositions . Besides , the Tensor-Train ( TT ) decomposition has been explored to compress both dense and convolutional layers after a pre-training step ( Novikov et al. , 2015 ) . This approach may achieve extreme compression rates but it also have impractical downsides that we demonstrate now . In a TT format , all the elements of a M -order tensor are expressed by a product ofM matrices whose dimensions are determined by the TT-ranks ( R0 , R1 , . . . , RM ) . For each of theM dimension of the initial tensor , the corresponding matrices can be stacked into an order 3 tensor called a “ core ” of the decomposition . Hence , the layer weight is decomposed as a set of M cores of small dimensions . Novikov et al . ( 2015 ) use this tensor representation to factorize fully connected layers . They first reshape the matrix of weights into an M -order tensor , then apply the TT decomposition . By choosing sufficiently small Rm values , this technique allows to obtain a high compression ratio on extremely wide ad hoc neural architectures . Garipov et al . ( 2016 ) have adapted this idea to convolutional layers . However , the current formulation of such TT convolutional layer involves the multiplication of all input values by a matrix of dimension 1 × R1 thus causing an inflation of R1 times the size of the input in memory . This makes the available implementation ( Garipov , 2020 ) unusable for recent wide convolutional networks at inference time . Other compression methods include unstructured pruning techniques that we review more in details in Section 2.3 and structured pruning techniques that reduce the inner hidden dimensions of the network by completely removing neurons ( Anwar et al. , 2017 ) . According to the recent paper of Liu et al . ( 2018 ) however , these techniques are more akin to Neural Architecture Search than actual network compression . Finally , quantization-based compression maps the columns of the weight matrices in the network to a subset of reference columns with lower memory footprint ( Guo , 2018 ) . Sparse matrices product for full rank decompositions . 
We are specifically interested in high-rate compression of neural networks via the efficient factorization of the layer weight matrices . Most known approaches to layer decomposition usually make a low-rank assumption on the layer weight tensors , which does not always hold in practice . As we will show in the experiments , this makes the Tucker and SVD based techniques unable to effectively reach high compression rates for standard architectures including both convolutional and fully connected layers , such as VGG19 or ResNet50 , whose weight matrices usually exhibit full rank . In this paper , we propose instead to express the weight matrices of fully-connected or convolutional layers as a product of sparse factors which contain very few parameters but can still represent high-rank matrices . Moreover , products of matrices with a total sparsity budget are strictly more expressive than single matrices with that sparsity ( Dao et al. , 2019 ) , which motivates our interest in products of multiple matrices . Usually , a linear operator ( a matrix ) from $\mathbb{R}^D$ to $\mathbb{R}^D$ has time and space complexities of $O ( D^2 )$ . But some well known operators like the Hadamard or the Fourier transforms can be expressed in the form of a product of $\log D$ sparse matrices , each having $O ( D )$ non-zero values ( Dao et al. , 2019 ; Magoarou & Gribonval , 2016 ) . These linear operators , called fast-operators , thus have time and space complexities lowered to $O ( D \log D )$ . This interesting feature of fast-operators has inspired the design of new algorithms that learn sparse matrix product representations of existing fast-transforms ( Dao et al. , 2019 ) or that even compute sparse product approximations of any matrix in order to accelerate learning and inference ( Magoarou & Gribonval , 2016 ; Giffon et al. , 2019 ) . Even though these new methods were initially designed to recover the $\log D$ factors corresponding to a fast-transform , they are more general than that and can actually be used to find a factorization with $Q < \log D$ sparse matrices . Contributions . We introduce a general framework for neural network compression using the factorization of layers into sparse matrix products . We explore the use of the recently proposed palm4MSA algorithm ( Magoarou & Gribonval , 2016 ) on every layer of a pre-trained neural network to express it as a product of sparse matrices . The obtained sparse matrices are then refined by gradient descent to best fit the final prediction task . When there is only one sparse matrix in the decomposition , our approach recovers the simple procedure of hard thresholding the weights of a matrix after pre-training . We evaluate the effect of different hyper-parameters on our method and show that layers can be factorized into two or three sparse matrices to obtain high compression rates while preserving good performance , compared to several main state-of-the-art methods for neural network compression . 2 Learning sparse matrix products for network compression . We describe how to compress NN weight matrices by sparse matrix factorization . We call our procedure PSM for Product of Sparse Matrices . It is easy to see that a product of sparse matrices with a given sparsity budget can recover a full rank matrix or a matrix with more non-zero values than the initial sparsity budget . This observation motivates the use of a sparse matrix factorization in place of the usual low-rank decomposition and sparsity-inducing techniques for neural network compression .
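As a small , self-contained illustration of the fast-transform factorizations mentioned above , the Walsh-Hadamard matrix of size $D = 2^n$ can be written as a product of $n = \log_2 D$ factors , each holding only $2D$ non-zero values ; the dense Kronecker-product construction below is for readability only , a real implementation would store the factors in sparse format .

```python
import numpy as np
from scipy.linalg import hadamard

def butterfly_factors(n):
    """Return the n factors whose product is the (unnormalized) Walsh-Hadamard
    matrix of size D = 2**n; each factor has 2 non-zeros per row, i.e. 2*D in total."""
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    return [np.kron(np.kron(np.eye(2 ** k), H2), np.eye(2 ** (n - k - 1)))
            for k in range(n)]

factors = butterfly_factors(3)            # D = 8, three sparse "butterfly" stages
dense = np.linalg.multi_dot(factors)      # reconstruct the full 8 x 8 transform
assert np.allclose(dense, hadamard(8))    # matches the classical Hadamard matrix
```

Storing the three factors takes $3 \times 2D = 48$ values instead of $D^2 = 64$ , and the gap grows quickly with $D$ .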
We first recall linear transform operations in fully-connected and convolutional layers . Then , inspired by recent work on learning linear operators with fast-transform structures , we propose to use a product of sparse matrices to replace linear transforms in neural networks . We also introduce a procedure to learn such a factorization for every layer in a deep architecture . Finally , we review some known neural network compression techniques that appear as particular cases of our framework . 2.1 Weight matrices as product of sparse matrices . Fully-connected and convolutional layers are based on the computation of linear operations . In a fully-connected layer , the output $z \in \mathbb{R}^{D'}$ is simply given by $z = a ( Wx )$ where $a$ is some non-linear activation function , $W \in \mathbb{R}^{D' \times D}$ is the weight matrix of the layer and $x \in \mathbb{R}^{D}$ is the output of the preceding layer . The linear operation in a convolutional layer can be represented by a doubly-block Toeplitz matrix ( Wang et al. , 2020 ) . Another way to perform the operation is to employ reshaping operators to represent the linear operator as a dense matrix applied to all the patches extracted from the input ( Garipov et al. , 2016 ) . In this work , we focus on this latter representation of the convolution operation . More formally , let $r_S : \mathbb{R}^{H \times W \times C} \mapsto \mathbb{R}^{HW \times CS^2}$ be the reshape operation that creates the matrix of all vectorized patches of height and width $S$ on an input image with $C$ channels . The matrix of $K$ filters $W \in \mathbb{R}^{CS^2 \times K}$ can then be applied to these patches ( multiplied with $r_S ( X )$ ) to produce the output of the convolutional layer in a matrix shape . Finally , a second reshape operator $t : \mathbb{R}^{HW \times K} \mapsto \mathbb{R}^{H \times W \times K}$ is applied on the feature map matrix to reconstruct the output tensor of the layer $Z \in \mathbb{R}^{H \times W \times K}$ . Altogether , the convolution operation can be written as $Z = a ( t ( r_S ( X ) W ) )$ where $a$ is some non-linear activation function and $X$ is the output 3-D tensor of the preceding layer . We preserve simplicity in notation here , assuming without loss of generality that the stride used by $r_S$ is equal to 1 and that the input tensor is padded with $\lfloor S/2 \rfloor$ zeros vertically and horizontally . The whole process is depicted in Supplementary Material A.2 . Our general idea is to replace the weight matrix of each neural network layer with a product of $Q$ sparse matrices , hence reducing the storage and computational complexities of the layer . Indeed , for an initial matrix of dimension $( D \times D' )$ , if all sparse matrices store $O ( D )$ non-zero values , then the total complexity of the product becomes $O ( QD )$ instead of $O ( DD' )$ . To define a fast-transform operator , one would use $Q = \log D$ but in practice we show that we can choose a much smaller $Q$ and achieve huge compression rates without lowering the performance much . Supplementary Material A.1 illustrates the effect of our compression scheme on a simple architecture including one convolution layer and a single dense layer . Given an input vector $x \in \mathbb{R}^{D}$ , expressing the weight matrix $W \in \mathbb{R}^{D' \times D}$ of a fully connected layer as a product of sparse matrices gives an output $z$ such that : $z = a ( \prod_{i=1}^{Q} S_i x )$ , ( 1 ) where $\| S_i \|_0 = O ( D )$ so that the complexity in time and space of this layer is reduced to $O ( QD )$ instead of $O ( DD' )$ .
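A minimal PyTorch sketch of Eq . ( 1 ) is given below , before turning to the convolutional case : the dense weight is replaced by $Q$ masked factors . The fixed random sparsity pattern and the hidden-dimension choice are placeholders for illustration ; in the paper the supports and values come from palm4MSA and are then refined by gradient descent .

```python
import torch
import torch.nn as nn

class PSMLinear(nn.Module):
    """Fully-connected layer whose weight is a product of Q sparse factors (Eq. 1).
    Sparsity is imposed with fixed binary masks keeping a few entries per row,
    so each factor stores O(D) non-zero values."""

    def __init__(self, dim_in, dim_out, num_factors=3, nnz_per_row=4):
        super().__init__()
        dims = [dim_in] + [max(dim_in, dim_out)] * (num_factors - 1) + [dim_out]
        self.factors = nn.ParameterList()
        for i in range(num_factors):
            self.factors.append(nn.Parameter(0.01 * torch.randn(dims[i + 1], dims[i])))
            keep = torch.rand(dims[i + 1], dims[i]).topk(nnz_per_row, dim=1).indices
            mask = torch.zeros(dims[i + 1], dims[i]).scatter_(1, keep, 1.0)
            self.register_buffer(f"mask_{i}", mask)        # fixed support of factor S_i

    def forward(self, x):                                   # x: (batch, dim_in)
        for i, w in enumerate(self.factors):
            x = x @ (w * getattr(self, f"mask_{i}")).t()    # apply sparse factor S_i
        return torch.relu(x)                                # the activation a(.)

layer = PSMLinear(512, 512, num_factors=3, nnz_per_row=4)
out = layer(torch.randn(8, 512))                            # (8, 512)
```

With $D = D' = 512$ and 4 non-zeros per row , the three factors store $3 \times 4 \times 512 = 6144$ values instead of the $512^2 = 262144$ values of the dense layer .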
Similarly , in the convolution layers , the output $Z \in \mathbb{R}^{H \times W \times K}$ is obtained from an input tensor $X \in \mathbb{R}^{H \times W \times C}$ : $Z = a ( t ( r_S ( X ) \prod_{i=1}^{Q} S_i ) )$ , ( 2 ) where $\| S_i \|_0 = O ( \max ( S^2 C , K ) )$ so that the time complexity of the layer is reduced from $O ( HWCS^2K )$ to $O ( HWQ \cdot \max ( CS^2 , K ) )$ and the complexity in space is reduced from $O ( CS^2K )$ to $O ( Q \cdot \max ( CS^2 , K ) )$ . Since there is no constraint on the rank of factors , the sparse matrix products of each layer can reach full rank , unlike low-rank decomposition methods . Moreover , the reconstruction of a sparse matrix product with a total of $O ( QD )$ non-zero values can produce a matrix with more than $O ( QD )$ non-zero values . This is consistent with the intuition that a product of sparse matrices can be more expressive than a single sparse matrix . | The paper proposes compressing the layers of the neural networks using a product of sparse matrices. This approach is in line with the initial methods on neural network compression: direct (task-independent) compression of weights, which is followed by NN task-dependent fine-tuning. In this case, the direct compression is obtained using the Palm4MSA method of Magoarou and Gribonval (2016), and then models are fine-tuned in an end-to-end fashion using TensorFlow. | SP:6e9cc976b5835221dd518f26e3a9beaaaf6b890a |
Sparse matrix products for neural network compression | 1 Introduction . The success of neural networks in the processing of structured data is in part due to their over-parametrization which plays a key role in their ability to learn rich features from the data ( Neyshabur et al. , 2018 ) . Unfortunately , this also makes most state-of-the-art models so huge that they are expensive to store and impossible to operate on devices with limited resources ( memory , computing capacity ) or that can not integrate GPUs ( Cheng et al. , 2017 ) . This problem has led to a popular line of research for “ neural networks compression ” , which aims at building models with few parameters while preserving their accuracy . State of the art techniques for neural network compression . Popular matrix or tensor decomposition methods including Singular Value Decomposition ( SVD ) , CANDECOMP/PARAFAC ( CP ) and Tucker have been used to address the problem of model compression by a low-rank approximation of the neural network ’ s weights after learning . Sainath et al . ( 2013 ) describe a method based on SVD to compress weight matrices in fully connected layers . Denton et al . ( 2014 ) ; Lebedev et al . ( 2015 ) ; Kim et al . ( 2016 ) generalize this idea to convolutional layers and then reduce the memory footprint of convolution kernels by using higher-order low-rank decompositions such as CP or Tucker decompositions . Besides , the Tensor-Train ( TT ) decomposition has been explored to compress both dense and convolutional layers after a pre-training step ( Novikov et al. , 2015 ) . This approach may achieve extreme compression rates but it also have impractical downsides that we demonstrate now . In a TT format , all the elements of a M -order tensor are expressed by a product ofM matrices whose dimensions are determined by the TT-ranks ( R0 , R1 , . . . , RM ) . For each of theM dimension of the initial tensor , the corresponding matrices can be stacked into an order 3 tensor called a “ core ” of the decomposition . Hence , the layer weight is decomposed as a set of M cores of small dimensions . Novikov et al . ( 2015 ) use this tensor representation to factorize fully connected layers . They first reshape the matrix of weights into an M -order tensor , then apply the TT decomposition . By choosing sufficiently small Rm values , this technique allows to obtain a high compression ratio on extremely wide ad hoc neural architectures . Garipov et al . ( 2016 ) have adapted this idea to convolutional layers . However , the current formulation of such TT convolutional layer involves the multiplication of all input values by a matrix of dimension 1 × R1 thus causing an inflation of R1 times the size of the input in memory . This makes the available implementation ( Garipov , 2020 ) unusable for recent wide convolutional networks at inference time . Other compression methods include unstructured pruning techniques that we review more in details in Section 2.3 and structured pruning techniques that reduce the inner hidden dimensions of the network by completely removing neurons ( Anwar et al. , 2017 ) . According to the recent paper of Liu et al . ( 2018 ) however , these techniques are more akin to Neural Architecture Search than actual network compression . Finally , quantization-based compression maps the columns of the weight matrices in the network to a subset of reference columns with lower memory footprint ( Guo , 2018 ) . Sparse matrices product for full rank decompositions . 
We are specifically interested in high-rate compression of neural networks via the efficient factorization of the layer weight matrices . Most known approaches to layer decomposition usually make a low-rank assumption on the layer weight tensors , which does not always hold in practice . As we will show in the experiments , this makes the Tucker and SVD based techniques unable to effectively reach high compression rates for standard architectures including both convolutional and fully connected layers , such as VGG19 or ResNet50 , whose weight matrices usually exhibit full rank . In this paper , we propose instead to express the weight matrices of fully-connected or convolutional layers as a product of sparse factors which contain very few parameters but can still represent high-rank matrices . Moreover , products of matrices with a total sparsity budget are strictly more expressive than single matrices with that sparsity ( Dao et al. , 2019 ) , which motivates our interest in products of multiple matrices . Usually , a linear operator ( a matrix ) from $\mathbb{R}^D$ to $\mathbb{R}^D$ has time and space complexities of $O ( D^2 )$ . But some well known operators like the Hadamard or the Fourier transforms can be expressed in the form of a product of $\log D$ sparse matrices , each having $O ( D )$ non-zero values ( Dao et al. , 2019 ; Magoarou & Gribonval , 2016 ) . These linear operators , called fast-operators , thus have time and space complexities lowered to $O ( D \log D )$ . This interesting feature of fast-operators has inspired the design of new algorithms that learn sparse matrix product representations of existing fast-transforms ( Dao et al. , 2019 ) or that even compute sparse product approximations of any matrix in order to accelerate learning and inference ( Magoarou & Gribonval , 2016 ; Giffon et al. , 2019 ) . Even though these new methods were initially designed to recover the $\log D$ factors corresponding to a fast-transform , they are more general than that and can actually be used to find a factorization with $Q < \log D$ sparse matrices . Contributions . We introduce a general framework for neural network compression using the factorization of layers into sparse matrix products . We explore the use of the recently proposed palm4MSA algorithm ( Magoarou & Gribonval , 2016 ) on every layer of a pre-trained neural network to express it as a product of sparse matrices . The obtained sparse matrices are then refined by gradient descent to best fit the final prediction task . When there is only one sparse matrix in the decomposition , our approach recovers the simple procedure of hard thresholding the weights of a matrix after pre-training . We evaluate the effect of different hyper-parameters on our method and show that layers can be factorized into two or three sparse matrices to obtain high compression rates while preserving good performance , compared to several main state-of-the-art methods for neural network compression . 2 Learning sparse matrix products for network compression . We describe how to compress NN weight matrices by sparse matrix factorization . We call our procedure PSM for Product of Sparse Matrices . It is easy to see that a product of sparse matrices with a given sparsity budget can recover a full rank matrix or a matrix with more non-zero values than the initial sparsity budget . This observation motivates the use of a sparse matrix factorization in place of the usual low-rank decomposition and sparsity-inducing techniques for neural network compression .
We first recall linear transform operations in fully-connected and convolutional layers . Then , inspired by recent work on learning linear operators with fast-transform structures , we propose to use a product of sparse matrices to replace linear transforms in neural networks . We also introduce a procedure to learn such a factorization for every layer in a deep architecture . Finally , we review some known neural network compression techniques that appear as particular cases of our framework . 2.1 Weight matrices as product of sparse matrices . Fully-connected and convolutional layers are based on the computation of linear operations . In a fully-connected layer , the output $z \in \mathbb{R}^{D'}$ is simply given by $z = a ( Wx )$ where $a$ is some non-linear activation function , $W \in \mathbb{R}^{D' \times D}$ is the weight matrix of the layer and $x \in \mathbb{R}^{D}$ is the output of the preceding layer . The linear operation in a convolutional layer can be represented by a doubly-block Toeplitz matrix ( Wang et al. , 2020 ) . Another way to perform the operation is to employ reshaping operators to represent the linear operator as a dense matrix applied to all the patches extracted from the input ( Garipov et al. , 2016 ) . In this work , we focus on this latter representation of the convolution operation . More formally , let $r_S : \mathbb{R}^{H \times W \times C} \mapsto \mathbb{R}^{HW \times CS^2}$ be the reshape operation that creates the matrix of all vectorized patches of height and width $S$ on an input image with $C$ channels . The matrix of $K$ filters $W \in \mathbb{R}^{CS^2 \times K}$ can then be applied to these patches ( multiplied with $r_S ( X )$ ) to produce the output of the convolutional layer in a matrix shape . Finally , a second reshape operator $t : \mathbb{R}^{HW \times K} \mapsto \mathbb{R}^{H \times W \times K}$ is applied on the feature map matrix to reconstruct the output tensor of the layer $Z \in \mathbb{R}^{H \times W \times K}$ . Altogether , the convolution operation can be written as $Z = a ( t ( r_S ( X ) W ) )$ where $a$ is some non-linear activation function and $X$ is the output 3-D tensor of the preceding layer . We preserve simplicity in notation here , assuming without loss of generality that the stride used by $r_S$ is equal to 1 and that the input tensor is padded with $\lfloor S/2 \rfloor$ zeros vertically and horizontally . The whole process is depicted in Supplementary Material A.2 . Our general idea is to replace the weight matrix of each neural network layer with a product of $Q$ sparse matrices , hence reducing the storage and computational complexities of the layer . Indeed , for an initial matrix of dimension $( D \times D' )$ , if all sparse matrices store $O ( D )$ non-zero values , then the total complexity of the product becomes $O ( QD )$ instead of $O ( DD' )$ . To define a fast-transform operator , one would use $Q = \log D$ but in practice we show that we can choose a much smaller $Q$ and achieve huge compression rates without lowering the performance much . Supplementary Material A.1 illustrates the effect of our compression scheme on a simple architecture including one convolution layer and a single dense layer . Given an input vector $x \in \mathbb{R}^{D}$ , expressing the weight matrix $W \in \mathbb{R}^{D' \times D}$ of a fully connected layer as a product of sparse matrices gives an output $z$ such that : $z = a ( \prod_{i=1}^{Q} S_i x )$ , ( 1 ) where $\| S_i \|_0 = O ( D )$ so that the complexity in time and space of this layer is reduced to $O ( QD )$ instead of $O ( DD' )$ .
Similarly , in the convolution layers , the output $Z \in \mathbb{R}^{H \times W \times K}$ is obtained from an input tensor $X \in \mathbb{R}^{H \times W \times C}$ : $Z = a ( t ( r_S ( X ) \prod_{i=1}^{Q} S_i ) )$ , ( 2 ) where $\| S_i \|_0 = O ( \max ( S^2 C , K ) )$ so that the time complexity of the layer is reduced from $O ( HWCS^2K )$ to $O ( HWQ \cdot \max ( CS^2 , K ) )$ and the complexity in space is reduced from $O ( CS^2K )$ to $O ( Q \cdot \max ( CS^2 , K ) )$ . Since there is no constraint on the rank of factors , the sparse matrix products of each layer can reach full rank , unlike low-rank decomposition methods . Moreover , the reconstruction of a sparse matrix product with a total of $O ( QD )$ non-zero values can produce a matrix with more than $O ( QD )$ non-zero values . This is consistent with the intuition that a product of sparse matrices can be more expressive than a single sparse matrix . | The authors introduced a neural network compressing method, based on factorization of weight matrix to the products of multiple sparse matrices. The goal is to achieve high compression rate. The author used a previous algorithm (Palm4MSA) to implement the method. The experiment result is better than other low-rank-based method, but is similar or worse to Iterative pruning and TT method. | SP:6e9cc976b5835221dd518f26e3a9beaaaf6b890a |
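A functional sketch of the convolutional case in Eq . ( 2 ) above , using `torch.nn.functional.unfold` as the patch-extraction operator $r_S$ ; dense tensors stand in for the sparse factors , and stride 1 with symmetric padding is assumed as in the text .

```python
import torch
import torch.nn.functional as F

def psm_conv2d(x, factors, kernel_size, num_filters):
    """Convolution written as r_S (patch extraction), a product of factors, and t (reshape).
    The product of `factors` must have shape (C * S * S, K)."""
    b, _, h, w = x.shape
    s = kernel_size
    patches = F.unfold(x, s, padding=s // 2)                  # r_S: (b, C*S*S, H*W)
    out = patches.transpose(1, 2)                             # (b, H*W, C*S*S)
    for s_i in factors:                                       # apply the factors in turn
        out = out @ s_i
    out = out.transpose(1, 2).reshape(b, num_filters, h, w)   # t: back to a feature map
    return torch.relu(out)                                    # the activation a(.)

x = torch.randn(2, 3, 8, 8)
factors = [torch.randn(3 * 3 * 3, 16), torch.randn(16, 16)]   # dense stand-ins
y = psm_conv2d(x, factors, kernel_size=3, num_filters=16)     # (2, 16, 8, 8)
```

In a real PSM layer each stand-in matrix would be constrained to $O ( \max ( CS^2 , K ) )$ non-zero values , which is where the complexity reduction quoted above comes from .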
Linear Mode Connectivity in Multitask and Continual Learning | 1 INTRODUCTION . One major consequence of learning multiple tasks in a continual learning ( CL ) setting — where tasks are learned sequentially , and the model can only have access to one task at a time — is catastrophic forgetting ( McCloskey & Cohen , 1989 ) . This is in contrast to multitask learning ( MTL ) , where the learner has simultaneous access to all tasks , which generally learns to perform well on all tasks without suffering from catastrophic forgetting . This limitation hinders the ability of the model to learn continually and efficiently . Recently , several approaches have been proposed to tackle this problem . They have mostly tried to mitigate catastrophic forgetting by using different approximations of the multitask loss . For example , some regularization methods take a quadratic approximation of the loss of previous tasks ( e.g . Kirkpatrick et al. , 2017 ; Yin et al. , 2020 ) . As another example , rehearsal methods attempt to directly use compressed past data either by selecting a representative subset ( e.g . Chaudhry et al. , 2019 ; Titsias et al. , 2019 ) or relying on generative models ( e.g . Shin et al. , 2017 ; Robins , 1995 ) . In this work , we depart from the literature and start from the non-conventional question of understanding “ What is the relationship , potentially in terms of local geometric properties , between the multitask and the continual learning minima ? ” . Our work is inspired by recent work on mode con- ∗Equal contribution 1The code is available at : https : //github.com/imirzadeh/MC-SGD nectivity ( Draxler et al. , 2018 ; Garipov et al. , 2018 ; Frankle et al. , 2020 ) finding that different optima obtained by gradient-based optimization methods are connected by simple paths of non-increasing loss . We try to understand whether the multitask and continual solutions are also connected by a manifold of low error , and what is the simplest form this manifold can take . Surprisingly , we find that a linear manifold , as illustrated in Fig . 1 right , reliably connects the multitask solution to the continual ones , granted that the multitask shares same initialization with the continual learning as described below . This is a significant finding in terms of understanding the phenomenon of catastrophic forgetting through the lens of loss landscapes and optimization trajectory and also for designing better continual learning algorithms . To reach this conclusion , we consider a particular learning regime described in Fig . 1 left , where after learning the first task using the data D1 , we either sequentially learn a second task obtaining ŵ2 or continue by training on both tasks simultaneously ( i.e. , train on D1 + D2 ) , obtaining the multitask solution w∗2 . We investigate the relationship between the two solutions ŵ2 and w∗2 . Note that w∗2 is not the typical multitask solution , which would normally start from w0 and train on both datasets . We chose this slightly non-conventional setup to minimize the potential number of confounding factors that lead to discrepancies be- tween the two solutions ( Fort et al. , 2019 ) . We also rely on the observation from ( Frankle et al. , 2020 ) that initialization can have a big impact on the connectivity between the solutions found on the same task , and sharing the same starting point , as we do between ŵ2 and w∗2 , might warrant a linear path of low error between the two solutions . Moreover , Neyshabur et al . 
( 2020 ) noted that in the context of transfer learning , there is no performance barrier between two minima that start from pre-trained weights , which suggests that the pre-trained weights guide the optimization to a flat basin of the loss landscape . In contrast , barriers clearly exist if these two minima start from randomly initialized weights . Our contributions can be summarized as follows : 1 . To the best of our knowledge , our work is the first to study the connectivity between continual learning and multitask learning solutions . 2 . We show that compared to conventional similarity measures such as Euclidean distance or Central Kernel Alignment ( Kornblith et al. , 2019 ) , which are incapable of meaningfully relating these minima , the connectivity through a manifold of low error can reliably be established . And this connecting path is linear , even when considering more than 20 tasks in a row . 3 . Motivated by this , we propose an effective CL algorithm ( Mode Connectivity SGD or MC-SGD ) that is able to outperform several established methods on standard CL benchmarks . 1.1 RELATED WORK . With the trending popularity of deep learning , continual learning has gained a critical importance because the catastrophic forgetting problem imposes key challenges to deploy deep learning models in various applications ( e.g Lange et al. , 2019 ; Kemker et al. , 2018 ) . A growing body of research has attempted to tackle this problem in recent years ( e.g Parisi et al. , 2018 ; Toneva et al. , 2018 ; Nguyen et al. , 2019 ; Farajtabar et al. , 2019 ; Hsu et al. , 2018 ; Rusu et al. , 2016 ; Li et al. , 2019 ; Kirkpatrick et al. , 2017 ; Zenke et al. , 2017 ; Shin et al. , 2017 ; Rolnick et al. , 2018 ; Lopez-Paz & Ranzato , 2017 ; Chaudhry et al. , 2018b ; Riemer et al. , 2018 ; Mirzadeh et al. , 2020 ; Wallingford et al. , 2020 ) . Among these works , our proposed MC-SGD bares most similarities to rehearsal based methods such us ( e.g . Shin et al. , 2017 ; Chaudhry et al. , 2018b ) and regularization based methods ( e.g . Kirkpatrick et al. , 2017 ; Zenke et al. , 2017 ) similar to ( Titsias et al. , 2019 ) . Following ( Lange et al. , 2019 ) , one can categorize continual learning methods into three general categories , based on how they approach dealing with catastrophic forgetting . Experience replay : Experience replay methods build and store a memory of the knowledge learned so far ( Rebuffi et al. , 2016 ; Lopez-Paz & Ranzato , 2017 ; Shin et al. , 2017 ; Riemer et al. , 2018 ; Rios & Itti , 2018 ; Zhang et al. , 2019 ) . As examples , Averaged Gradient Episodic Memory ( A-GEM ) ( Chaudhry et al. , 2018b ) builds an episodic memory of parameter gradients , while ER-Reservoir ( Chaudhry et al. , 2019 ) uses a Reservoir sampling method to maintain the episodic memory . Regularization : These methods explicitly apply regularization techniques to ensure parameters do not change too much ( Kirkpatrick et al. , 2017 ; Zenke et al. , 2017 ; Lee et al. , 2017 ; Aljundi et al. , 2018 ; Kolouri et al. , 2019 ) . They can either a Bayesian ( Nguyen et al. , 2017 ; Titsias et al. , 2019 ; Schwarz et al. , 2018 ; Ebrahimi et al. , 2019 ; Ritter et al. , 2018 ) or frequentist views ( Farajtabar et al. , 2019 ; He & Jaeger , 2018 ; Zeng et al. , 2018 ) . For instance , Orthogonal Gradient Descent ( OGD ) ( Farajtabar et al. , 2019 ) projects the prediction gradients from new tasks on the subspace of previous tasks ’ gradients to preserve the knowledge . 
Parameter isolation : Parameter isolation methods allocate different subsets of the parameters to each task ( Rusu et al. , 2016 ; Yoon et al. , 2018 ; Jerfel et al. , 2019 ; Rao et al. , 2019 ; Li et al. , 2019 ) . From the stability-plasticity perspective , these methods implement gating mechanisms that improves the stability and controls the plasticity by activating different gates for each task . Masse et al . ( 2018 ) proposes a bio-inspired approach for a context-dependent gating that activates non-overlapping subset of parameters for any specific task . Supermask in Superposition ( Wortsman et al. , 2020 ) is another parameter isolation method that starts with a randomly initialized , fixed base network and for each task finds a subnetwork ( supermask ) such that the model achieves good performance . Recently , Mirzadeh et al . ( 2020 ) have shown that dropout implicitly creates different pathways or gates for tasks , therefore , it reduces their mutual interference and leads to less forgetting . Continual learning as a problem expands beyond dealing with catastrophic forgetting , one of the hopes behind sequential learning is that it can potentially enable positive forward transfer , as one can build on the previously acquired knowledge . In this sense it connects to topics such as MetaLearning ( Beaulieu et al. , 2020 ; Jerfel et al. , 2019 ; He & Jaeger , 2018 ; Riemer et al. , 2018 ) , Few-Shot Learning ( Wen et al. , 2018 ; Gidaris & Komodakis , 2018 ) , Multi-task and Transfer Learning ( He & Jaeger , 2018 ; Jerfel et al. , 2019 ) . It also aims to work in scenarios where task boundaries are not well defined or provided , or when the data distribution shifts slowly or when a multi-task solution does not exist ( Rao et al. , 2019 ; Aljundi et al. , 2019 ; He et al. , 2019 ; Kaplanis et al. , 2019 ) . Mode connectivity ( Draxler et al. , 2018 ; Garipov et al. , 2018 ) , is a novel tool to understand the loss landscape of deep neural networks . It postulates that different optima obtained by gradient-based optimization methods are connected by simple paths of non-increasing loss ( i.e. , low-loss valleys ) . Recently , various works provided different theoretical explanations for mode connectivity in different scenarios ( Venturi et al. , 2019 ; Kuditipudi et al. , 2019 ) either by relying on over-parametrization or concepts such as noise-stability or dropout-stability . ( Neyshabur et al. , 2020 ) investigated the connection between minima obtained by pre-trained models versus freshly initialized ones . They note that there is no performance barrier between solutions coming from pre-trained models , but there can be a barrier between solutions of different randomly initialized models . ( Frankle et al. , 2020 ) shows that different minima that share the same initialization point are connected by a linear path , even with weight pruning . We rely on this observation when designing our setting that the multitask and continual learning minima share a common starting point . 2 THE RELATION BETWEEN MULTITASK AND CONTINUAL MINIMA . One question driving this work is understanding the relationship between the two different solutions : multitask learning vs. continual learning . In particular , we are interested in scenarios where a multitask solution for the considered tasks exists ( for a discussion when this does not hold see ( He et al. , 2019 ) ) and when both learning regimes have the same objective , finding a solution that performs well on all tasks . 
We posit that this difference can not be reliably explained by simple and typical distance metrics used to measure similarity between the trained models . In particular we consider Euclidean distance and Centered Kernel Alignment ( CKA ) ( Kornblith et al. , 2019 ) . However , these solutions are connected by paths of low error , and , provided that learning starts from the same initial conditions , these paths can have a linear form . In Fig . 2 ( left column ) we can see the performance of Naive SGD for multitask and continual learning . Details on the experimental setup can be found in Appendix C. The dashed line represents the multitask solution at convergence , which achieves strong performance on all tasks . It further shows the performance of all tasks during the sequential learning experience ( each point represents the performance after learning another task ) , highlighting how performance on past tasks degrades considerably . Note that as described in Fig . 1 and further detailed in Appendix D.2 Fig . 15 , in the multitask learning scenario , tasks are added sequentially to the loss , to construct a parallel with the continual learning setting . This will be the case throughout this work . Euclidean distance . It might be reasonable to expect that the less the parameters change , the less the forgetting will be . One can motivate this heuristic by a Taylor expansion of the loss , as done in ( Mirzadeh et al. , 2020 ) , where the forgetting is defined as : $L_1 ( \hat{w}_2 ) - L_1 ( \hat{w}_1 ) \approx \frac{1}{2} ( \hat{w}_2 - \hat{w}_1 )^{\top} \nabla^2 L_1 ( \hat{w}_1 ) ( \hat{w}_2 - \hat{w}_1 ) \leq \frac{1}{2} \lambda_1^{\max} \| \hat{w}_2 - \hat{w}_1 \|^2$ . ( 1 ) Here , $L_1 ( w )$ is the empirical loss for task 1 , and $\lambda_1^{\max}$ is the largest eigenvalue of its Hessian at ŵ1 . Note that all terms of the Taylor expansion are multiplied with ŵ2 − ŵ1 and hence the norm of this delta will affect the amount of forgetting . In fact , this is frequently done when pruning neural networks ( Zhu & Gupta , 2018 ; Han et al. , 2015 ) , where weights are zeroed out based on magnitude , producing minimal Euclidean distance to the unpruned model . But , as observed in Fig . 2 ( middle column ) , Euclidean distance does not correlate with the absence of catastrophic forgetting : the CL solution on task 1 ( ŵ1 ) is closer to the CL solution of task 5 ( ŵ5 ) than the multitask solution of task 5 ( w∗5 ) . One explanation could be that the bound defined by Eq . ( 1 ) will not be tight if the vector ŵ5 − ŵ1 does not lie in the direction of the largest eigenvector . Appendix D.1 contains further details on this topic . Centered Kernel Alignment . Centered Kernel Alignment ( CKA ) ( Kornblith et al. , 2019 ) measures the similarity of two representations on the same set of examples . Given $N$ examples and two activation outputs on these examples , $R_1 \in \mathbb{R}^{N \times d_1}$ and $R_2 \in \mathbb{R}^{N \times d_2}$ , CKA is defined by : $\mathrm{CKA} ( R_1 , R_2 ) = \frac{ \| R_1^{\top} R_2 \|_F^2 }{ \| R_1^{\top} R_1 \|_F \, \| R_2^{\top} R_2 \|_F }$ , ( 2 ) where $\| \cdot \|_F$ is the Frobenius norm . Recent work by Ramasesh et al . ( 2020 ) studies catastrophic forgetting on the CIFAR dataset by measuring the CKA similarity score of different layers of ŵ1 and ŵ2 . They argue that the later layers suffer more from catastrophic forgetting by showing that the CKA similarity of initial layers decreases less after training on sequential tasks . However , the CKA score suffers from a few shortcomings . If the number of training epochs per task is small ( e.g. , in the streaming case ) , the CKA does not change much , even though the accuracy for previous tasks drops drastically . For instance , in Fig .
2 ( right column ) , we show that the pairwise CKA between different layers of the first task minimum ( ŵ1 ) and the CL and multitask minima of task 2 and task 5 is roughly the same . The phenomenon observed in ( Ramasesh et al. , 2020 ) is still visible , although only by a very small margin . Moreover , we can see that the multitask minima of task 2 ( w∗2 ) and task 5 ( w∗5 ) are more similar to ŵ1 than the CL minima ( ŵ2 and ŵ5 ) are . | The paper studies the relation between the geometry of solutions of continual (CL) and multi-task learning (MTL). Towards this end, the authors empirically identify that all the solutions of CL (i.e. solutions obtained after each task) and MTL are connected by a linear region of low error. This is a very interesting finding and, to my knowledge, has not been studied previously in the CL literature. Based on this observation, the authors propose a memory and regularization-based CL method, MC-SGD, that ensures that the final CL solution is linearly connected to all the task's solutions. The authors further demonstrate that the solution of the MTL lies in the region where the Hessian of the loss function is low and hence the regularization-based approaches that make use of curvature information (e.g.) EWC, are a promising direction for CL. Experiments are conducted on Permuted and Rotated MNIST, Split CIFAR benchmarks. MC-SGD performs strongly compared to other baselines. | SP:3a07b9f25dd5216e3183232f305f5eeb2333427e |
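A short NumPy version of the CKA score in Eq . ( 2 ) above , with the feature centering used by Kornblith et al . ( 2019 ) made explicit ; the random inputs are only there to show the expected behaviour .

```python
import numpy as np

def linear_cka(r1, r2):
    """Linear CKA between two activation matrices of shape (N, d1) and (N, d2)
    computed on the same N examples (Eq. 2), after centering each feature."""
    r1 = r1 - r1.mean(axis=0, keepdims=True)
    r2 = r2 - r2.mean(axis=0, keepdims=True)
    num = np.linalg.norm(r1.T @ r2, "fro") ** 2
    den = np.linalg.norm(r1.T @ r1, "fro") * np.linalg.norm(r2.T @ r2, "fro")
    return num / den

a = np.random.randn(2000, 32)
print(linear_cka(a, a))                           # 1.0: identical representations
print(linear_cka(a, np.random.randn(2000, 64)))   # near 0 for independent random features
```

Because the score only looks at ( centered ) feature correlations on a fixed batch , it can stay almost unchanged even when the accuracy on earlier tasks collapses , which is the shortcoming discussed above .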
Linear Mode Connectivity in Multitask and Continual Learning | 1 INTRODUCTION . One major consequence of learning multiple tasks in a continual learning ( CL ) setting — where tasks are learned sequentially , and the model can only have access to one task at a time — is catastrophic forgetting ( McCloskey & Cohen , 1989 ) . This is in contrast to multitask learning ( MTL ) , where the learner has simultaneous access to all tasks , which generally learns to perform well on all tasks without suffering from catastrophic forgetting . This limitation hinders the ability of the model to learn continually and efficiently . Recently , several approaches have been proposed to tackle this problem . They have mostly tried to mitigate catastrophic forgetting by using different approximations of the multitask loss . For example , some regularization methods take a quadratic approximation of the loss of previous tasks ( e.g . Kirkpatrick et al. , 2017 ; Yin et al. , 2020 ) . As another example , rehearsal methods attempt to directly use compressed past data either by selecting a representative subset ( e.g . Chaudhry et al. , 2019 ; Titsias et al. , 2019 ) or relying on generative models ( e.g . Shin et al. , 2017 ; Robins , 1995 ) . In this work , we depart from the literature and start from the non-conventional question of understanding “ What is the relationship , potentially in terms of local geometric properties , between the multitask and the continual learning minima ? ” . Our work is inspired by recent work on mode con- ∗Equal contribution 1The code is available at : https : //github.com/imirzadeh/MC-SGD nectivity ( Draxler et al. , 2018 ; Garipov et al. , 2018 ; Frankle et al. , 2020 ) finding that different optima obtained by gradient-based optimization methods are connected by simple paths of non-increasing loss . We try to understand whether the multitask and continual solutions are also connected by a manifold of low error , and what is the simplest form this manifold can take . Surprisingly , we find that a linear manifold , as illustrated in Fig . 1 right , reliably connects the multitask solution to the continual ones , granted that the multitask shares same initialization with the continual learning as described below . This is a significant finding in terms of understanding the phenomenon of catastrophic forgetting through the lens of loss landscapes and optimization trajectory and also for designing better continual learning algorithms . To reach this conclusion , we consider a particular learning regime described in Fig . 1 left , where after learning the first task using the data D1 , we either sequentially learn a second task obtaining ŵ2 or continue by training on both tasks simultaneously ( i.e. , train on D1 + D2 ) , obtaining the multitask solution w∗2 . We investigate the relationship between the two solutions ŵ2 and w∗2 . Note that w∗2 is not the typical multitask solution , which would normally start from w0 and train on both datasets . We chose this slightly non-conventional setup to minimize the potential number of confounding factors that lead to discrepancies be- tween the two solutions ( Fort et al. , 2019 ) . We also rely on the observation from ( Frankle et al. , 2020 ) that initialization can have a big impact on the connectivity between the solutions found on the same task , and sharing the same starting point , as we do between ŵ2 and w∗2 , might warrant a linear path of low error between the two solutions . Moreover , Neyshabur et al . 
( 2020 ) noted that in the context of transfer learning , there is no performance barrier between two minima that start from pre-trained weights , which suggests that the pre-trained weights guide the optimization to a flat basin of the loss landscape . In contrast , barriers clearly exist if these two minima start from randomly initialized weights . Our contributions can be summarized as follows : 1 . To the best of our knowledge , our work is the first to study the connectivity between continual learning and multitask learning solutions . 2 . We show that compared to conventional similarity measures such as Euclidean distance or Central Kernel Alignment ( Kornblith et al. , 2019 ) , which are incapable of meaningfully relating these minima , the connectivity through a manifold of low error can reliably be established . And this connecting path is linear , even when considering more than 20 tasks in a row . 3 . Motivated by this , we propose an effective CL algorithm ( Mode Connectivity SGD or MC-SGD ) that is able to outperform several established methods on standard CL benchmarks . 1.1 RELATED WORK . With the trending popularity of deep learning , continual learning has gained a critical importance because the catastrophic forgetting problem imposes key challenges to deploy deep learning models in various applications ( e.g Lange et al. , 2019 ; Kemker et al. , 2018 ) . A growing body of research has attempted to tackle this problem in recent years ( e.g Parisi et al. , 2018 ; Toneva et al. , 2018 ; Nguyen et al. , 2019 ; Farajtabar et al. , 2019 ; Hsu et al. , 2018 ; Rusu et al. , 2016 ; Li et al. , 2019 ; Kirkpatrick et al. , 2017 ; Zenke et al. , 2017 ; Shin et al. , 2017 ; Rolnick et al. , 2018 ; Lopez-Paz & Ranzato , 2017 ; Chaudhry et al. , 2018b ; Riemer et al. , 2018 ; Mirzadeh et al. , 2020 ; Wallingford et al. , 2020 ) . Among these works , our proposed MC-SGD bares most similarities to rehearsal based methods such us ( e.g . Shin et al. , 2017 ; Chaudhry et al. , 2018b ) and regularization based methods ( e.g . Kirkpatrick et al. , 2017 ; Zenke et al. , 2017 ) similar to ( Titsias et al. , 2019 ) . Following ( Lange et al. , 2019 ) , one can categorize continual learning methods into three general categories , based on how they approach dealing with catastrophic forgetting . Experience replay : Experience replay methods build and store a memory of the knowledge learned so far ( Rebuffi et al. , 2016 ; Lopez-Paz & Ranzato , 2017 ; Shin et al. , 2017 ; Riemer et al. , 2018 ; Rios & Itti , 2018 ; Zhang et al. , 2019 ) . As examples , Averaged Gradient Episodic Memory ( A-GEM ) ( Chaudhry et al. , 2018b ) builds an episodic memory of parameter gradients , while ER-Reservoir ( Chaudhry et al. , 2019 ) uses a Reservoir sampling method to maintain the episodic memory . Regularization : These methods explicitly apply regularization techniques to ensure parameters do not change too much ( Kirkpatrick et al. , 2017 ; Zenke et al. , 2017 ; Lee et al. , 2017 ; Aljundi et al. , 2018 ; Kolouri et al. , 2019 ) . They can either a Bayesian ( Nguyen et al. , 2017 ; Titsias et al. , 2019 ; Schwarz et al. , 2018 ; Ebrahimi et al. , 2019 ; Ritter et al. , 2018 ) or frequentist views ( Farajtabar et al. , 2019 ; He & Jaeger , 2018 ; Zeng et al. , 2018 ) . For instance , Orthogonal Gradient Descent ( OGD ) ( Farajtabar et al. , 2019 ) projects the prediction gradients from new tasks on the subspace of previous tasks ’ gradients to preserve the knowledge . 
Parameter isolation : Parameter isolation methods allocate different subsets of the parameters to each task ( Rusu et al. , 2016 ; Yoon et al. , 2018 ; Jerfel et al. , 2019 ; Rao et al. , 2019 ; Li et al. , 2019 ) . From the stability-plasticity perspective , these methods implement gating mechanisms that improves the stability and controls the plasticity by activating different gates for each task . Masse et al . ( 2018 ) proposes a bio-inspired approach for a context-dependent gating that activates non-overlapping subset of parameters for any specific task . Supermask in Superposition ( Wortsman et al. , 2020 ) is another parameter isolation method that starts with a randomly initialized , fixed base network and for each task finds a subnetwork ( supermask ) such that the model achieves good performance . Recently , Mirzadeh et al . ( 2020 ) have shown that dropout implicitly creates different pathways or gates for tasks , therefore , it reduces their mutual interference and leads to less forgetting . Continual learning as a problem expands beyond dealing with catastrophic forgetting , one of the hopes behind sequential learning is that it can potentially enable positive forward transfer , as one can build on the previously acquired knowledge . In this sense it connects to topics such as MetaLearning ( Beaulieu et al. , 2020 ; Jerfel et al. , 2019 ; He & Jaeger , 2018 ; Riemer et al. , 2018 ) , Few-Shot Learning ( Wen et al. , 2018 ; Gidaris & Komodakis , 2018 ) , Multi-task and Transfer Learning ( He & Jaeger , 2018 ; Jerfel et al. , 2019 ) . It also aims to work in scenarios where task boundaries are not well defined or provided , or when the data distribution shifts slowly or when a multi-task solution does not exist ( Rao et al. , 2019 ; Aljundi et al. , 2019 ; He et al. , 2019 ; Kaplanis et al. , 2019 ) . Mode connectivity ( Draxler et al. , 2018 ; Garipov et al. , 2018 ) , is a novel tool to understand the loss landscape of deep neural networks . It postulates that different optima obtained by gradient-based optimization methods are connected by simple paths of non-increasing loss ( i.e. , low-loss valleys ) . Recently , various works provided different theoretical explanations for mode connectivity in different scenarios ( Venturi et al. , 2019 ; Kuditipudi et al. , 2019 ) either by relying on over-parametrization or concepts such as noise-stability or dropout-stability . ( Neyshabur et al. , 2020 ) investigated the connection between minima obtained by pre-trained models versus freshly initialized ones . They note that there is no performance barrier between solutions coming from pre-trained models , but there can be a barrier between solutions of different randomly initialized models . ( Frankle et al. , 2020 ) shows that different minima that share the same initialization point are connected by a linear path , even with weight pruning . We rely on this observation when designing our setting that the multitask and continual learning minima share a common starting point . 2 THE RELATION BETWEEN MULTITASK AND CONTINUAL MINIMA . One question driving this work is understanding the relationship between the two different solutions : multitask learning vs. continual learning . In particular , we are interested in scenarios where a multitask solution for the considered tasks exists ( for a discussion when this does not hold see ( He et al. , 2019 ) ) and when both learning regimes have the same objective , finding a solution that performs well on all tasks . 
We posit that this difference cannot be reliably explained by simple and typical distance metrics used to measure similarity between the trained models. In particular, we consider Euclidean distance and Centered Kernel Alignment (CKA) (Kornblith et al., 2019). However, these solutions are connected by paths of low error, and, provided that learning starts from the same initial conditions, these paths can have a linear form. In Fig. 2 left column we can see the performance of Naive SGD for multitask and continual learning. Details on the experimental setup can be found in Appendix C. The dashed line represents the multitask solution at convergence, which achieves strong performance on all tasks. The figure further shows the performance of all tasks during the sequential learning experience (each point represents the performance after learning another task), highlighting how performance on past tasks degrades considerably. Note that, as described in Fig. 1 and further detailed in Appendix D.2 Fig. 15, in the multitask learning scenario tasks are added sequentially to the loss, to construct a parallel with the continual learning setting. This will be the case throughout this work. Euclidean distance. It might be reasonable to expect that the less the parameters change, the less the forgetting will be. One can motivate this heuristic by a Taylor expansion of the loss, as done in (Mirzadeh et al., 2020), where the forgetting is defined as: $L_1(\hat{w}_2) - L_1(\hat{w}_1) \approx \frac{1}{2}(\hat{w}_2 - \hat{w}_1)^\top \nabla^2 L_1(\hat{w}_1)(\hat{w}_2 - \hat{w}_1) \leq \frac{1}{2}\lambda_1^{\max}\|\hat{w}_2 - \hat{w}_1\|^2$. (1) Here, $L_1(w)$ is the empirical loss for task 1, and $\lambda_1^{\max}$ is the largest eigenvalue of its Hessian at $\hat{w}_1$. Note that all terms of the Taylor expansion are multiplied by $\hat{w}_2 - \hat{w}_1$, and hence the norm of this delta affects the amount of forgetting. In fact, this is frequently exploited when pruning neural networks (Zhu & Gupta, 2018; Han et al., 2015), where weights are zeroed out based on magnitude, producing minimal Euclidean distance to the unpruned model. But, as observed in Fig. 2 (middle column), Euclidean distance does not correlate with resistance to catastrophic forgetting: the CL solution on task 1 ($\hat{w}_1$) is closer to the CL solution of task 5 ($\hat{w}_5$) than to the multitask solution of task 5 ($w^*_5$). One explanation could be that the bound defined by Eq. (1) will not be tight if the vector $\hat{w}_5 - \hat{w}_1$ does not lie in the direction of the largest eigenvector. Appendix D.1 contains further details on this topic. Centered Kernel Alignment. Centered Kernel Alignment (CKA) (Kornblith et al., 2019) measures the similarity of two representations on the same set of examples. Given N examples and two activation outputs on these examples, $R_1 \in \mathbb{R}^{N \times d_1}$ and $R_2 \in \mathbb{R}^{N \times d_2}$, CKA is defined by: $\mathrm{CKA}(R_1, R_2) = \frac{\|R_1^\top R_2\|_F^2}{\|R_1^\top R_1\|_F \, \|R_2^\top R_2\|_F}$, (2) where $\|\cdot\|_F$ is the Frobenius norm. Recent work by Ramasesh et al. (2020) studies catastrophic forgetting on the CIFAR dataset by measuring the CKA similarity score of different layers of $\hat{w}_1$ and $\hat{w}_2$. They argue that the later layers suffer more from catastrophic forgetting by showing that the CKA similarity of initial layers decreases less after training on sequential tasks. However, the CKA score suffers from a few shortcomings. If the number of training epochs per task is small (e.g., in the streaming case), the CKA does not change much, even though the accuracy for previous tasks drops drastically. For instance, in Fig.
2 right column , we show that the pairwise CKA between different layers of the first task minimum ( ŵ1 ) and CL and multitask minima of task 2 and task 5 are roughly the same . Although , the phenomenon observed in ( Ramasesh et al. , 2020 ) is still realizable by a very tiny margin . Moreover , we can see that the multitask minimum of task 2 ( w∗2 ) , and task 5 ( w∗5 ) are more similar to ŵ1 compared to CL minima ( ŵ2 and ŵ5 ) . | The paper starts by that observing the local minima obtained in a multi task scenario are connected with a linear path of low error regime to the local minima of each task in a continual learning scenario in contrast to the path between the different minima of tasks incrementally learned, provided the both training of multi task and continual learning have started from the same initialization. The paper studies and shows this mode connectivity empirically. It further discusses and analyses the factors behind this connectivity while noting that this is valid when tasks have shared structure in which local minima can be found nearby. | SP:3a07b9f25dd5216e3183232f305f5eeb2333427e |
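For concreteness, the two similarity measures discussed above can be sketched as follows; `R1`, `R2`, `w1`, `w2`, and `lam_max` are placeholder inputs (activation matrices on a shared set of examples, two flattened weight vectors, and a Hessian eigenvalue estimate), and this is only an illustrative implementation of Eqs. (1) and (2), not the authors' code.

```python
# Sketch: the two similarity measures discussed above, on placeholder arrays.
# R1, R2 are (N, d1) and (N, d2) activation matrices on the same N examples;
# centering the columns first matches the "centered" in CKA.
import numpy as np

def linear_cka(R1, R2):
    """Linear CKA as in Eq. (2): ||R1^T R2||_F^2 / (||R1^T R1||_F ||R2^T R2||_F)."""
    R1 = R1 - R1.mean(axis=0, keepdims=True)
    R2 = R2 - R2.mean(axis=0, keepdims=True)
    num = np.linalg.norm(R1.T @ R2, "fro") ** 2
    den = np.linalg.norm(R1.T @ R1, "fro") * np.linalg.norm(R2.T @ R2, "fro")
    return num / den

def forgetting_upper_bound(w1, w2, lam_max):
    """Right-hand side of Eq. (1): (lam_max / 2) * ||w2 - w1||^2."""
    return 0.5 * lam_max * np.linalg.norm(w2 - w1) ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    R = rng.normal(size=(256, 64))
    print("CKA(R, R) =", round(linear_cka(R, R), 3))                     # ~1.0
    print("CKA(R, noise) =", round(linear_cka(R, rng.normal(size=(256, 64))), 3))
    print("bound =", forgetting_upper_bound(rng.normal(size=10), rng.normal(size=10), lam_max=2.0))
```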
Divide-and-Conquer Monte Carlo Tree Search | 1 INTRODUCTION . This is the first sentence of this paper , but it was not the first one we wrote . In fact , the entire introduction section was actually one of the last sections to be added to this manuscript . The discrepancy between the order of inception of ideas and the order of their presentation in this paper probably does not come as a surprise to the reader . Nonetheless , it serves as a point for reflection that is central to the rest of this work , and that can be summarized as “ the order in which we construct a plan does not have to coincide with the order in which we execute it ” . Most standard planners for sequential decision making problems—including Monte Carlo planning , Monte Carlo Tree Search ( MCTS ) and dynamic programming—have a baked-in sequential planning assumption ( Bertsekas et al. , 1995 ; Browne et al. , 2012 ) . These methods begin at either the initial or final state and then plan actions sequentially forward or backwards in time . However , this sequential approach faces two main challenges . ( i ) The transition model used for planning needs to be reliable over long horizons , which is often difficult to achieve when it has to be inferred from data . ( ii ) Credit assignment to each individual action is difficult : In a planning problem spanning a horizon of 100 steps , to assign credit to the first action , we have to compute the optimal cost-to-go for the remaining problem with a horizon of 99 steps , which is only slightly easier than solving the original problem . To overcome these two fundamental challenges , here we consider alternatives to the basic assumptions of sequential planners . We focus on goal-directed decision making problems where an agent should reach a goal state from a start state . Instead of a transition and reward model of the environment , we assume a given goal-directed policy ( the “ low-level ” policy ) and the associated value oracle that returns its success probability on any given task.1 In general , a low-level policy will not be not optimal , e.g . it might be too “ myopic ” to reliably reach goal states that are far away from its current state . We now seek to improve the low-level policy via a suitable sequence of sub-goals that guide it from the start to the final goal , thus maximizing the overall task success probability . This formulation of planning as finding good sub-goal sequences , makes learning of explicit environment models unnecessary , as they are replaced by low-level policies and their value functions . The sub-goal planning problem can still be solved by a conventional sequential planner that begins by searching for the first sub-goal to reach from the start state , then planning the next sub-goal in sequence , and so on . Indeed , this is the approach taken in most hierarchical RL settings based on options or sub-goals ( e.g . Dayan & Hinton , 1993 ; Sutton et al. , 1999 ; Vezhnevets et al. , 2017 ) . However , the credit assignment problem mentioned above persists , as assessing if the first sub-goal is useful still requires evaluating the success probability of the remaining plan . Instead , it could be substantially easier to reason about the utility of a sub-goal “ in the middle ” of the plan , as this breaks the long-horizon problem into two sub-problems with much shorter horizons : how to get to the sub-goal and how to get from there to the final goal . 
Based on this intuition , we propose the Divide-and-Conquer MCTS ( DC-MCTS ) planner that searches for sub-goals to split the original task into two independent sub-tasks of comparable complexity and then recursively solves these , thereby drastically facilitating credit assignment . To search the space of intermediate sub-goals efficiently , DC-MCTS uses a heuristic for proposing promising sub-goals that is learned from previous search results and agent experience . Humans can plan efficiently over long horizons to solve complex tasks , such as theorem proving or navigation , and some plans even span over decades ( e.g . economic measures ) : In these situations , planning sequentially in terms of next steps – such as what arm to move , or what phone call to make – will cover a tiny proportion of the horizon , neglecting the long uncertainty beyond the last planned step . The algorithm put forward in this paper is a step in the direction of efficient planners that tackle long horizons by recursively and parallelly splitting them into many smaller and smaller sub-problems . In Section 2 , we formulate planning in terms of sub-goals instead of primitive actions . In Section 3 , as our main contribution , we propose the novel Divide-and-Conquer Monte Carlo Tree Search algorithm for this planning problem . In Section 4 we position DC-MCTS within the literature of related work . In Section 5 , we show that it outperforms sequential planners both on grid world and continuous control navigation tasks , demonstrating the utility of constructing plans in a flexible order that can be different from their execution order . 2 IMPROVING GOAL-DIRECTED POLICIES WITH PLANNING . Let S and A be finite sets of states and actions . We consider a multi-task setting , where for each episode the agent has to solve a new task consisting of a new Markov Decision Process ( MDP ) M over S and A. EachM has a single start state s0 and a special absorbing state s∞ , also termed the goal state . If the agent transitions into s∞ at any time it receives a reward of 1 and the episode terminates ; otherwise the reward is 0 . We assume that the agent observes the start and goal states ( s0 , s∞ ) at the beginning of each episode , as well as an encoding vector cM ∈ Rd . This vector provides the agent with additional information about the MDPM of the current episode and will be key to transfer learning across tasks in the multi-task setting . A stochastic , goal-directed policy π is a mapping from S × S × Rd into distributions over A , where π ( a|s , s∞ , cM ) denotes the probability of taking action a in state s in order to get to goal s∞ . For a fixed goal s∞ , we can interpret π as a regular policy , here denoted as πs∞ , mapping states to action probabilities . We denote the value of π in state s for goal s∞ as vπ ( s , s∞|cM ) ; we assume no discounting γ = 1 . Under the above definition of the reward , the value is equal to the success probability of π on the task , i.e . the absorption probability of the stochastic process starting in s0 defined by running πs∞ : vπ ( s0 , s∞|cM ) = P ( s∞ ∈ τ πs∞ s0 |cM ) , 1As we will observe in Section 5 , in practice both the low-level policy and value can be learned . Approximating the value oracle with a learned value function was sufficient for DC-MCTS to plan successfully . where τπs∞s0 is the trajectory generated by running πs∞ from state s0 2 . To keep the notation compact , we will omit the explicit dependence on cM and abbreviate tasks with pairs of states in S × S . 
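Since the low-level value oracle is defined as a success probability, it can in principle be estimated by rollouts; the following sketch makes that concrete on a toy chain MDP. The `policy` and `env_step` callables are hypothetical stand-ins for the goal-conditioned low-level policy π(a | s, s∞) and the environment transition, and a finite horizon replaces the infinite-horizon absorption probability.

```python
# Sketch: Monte-Carlo estimate of v_pi(s0, s_goal), the success probability of a
# goal-directed policy. `policy` and `env_step` are placeholders for the low-level
# policy pi(a | s, s_goal) and the environment dynamics.
import numpy as np

def estimate_success_prob(policy, env_step, s0, s_goal, num_rollouts=200, horizon=100, rng=None):
    rng = rng or np.random.default_rng()
    successes = 0
    for _ in range(num_rollouts):
        s = s0
        for _ in range(horizon):
            a = policy(s, s_goal, rng)          # a ~ pi(. | s, s_goal)
            s = env_step(s, a, rng)             # s' ~ P(. | s, a)
            if s == s_goal:                     # absorbing goal state reached
                successes += 1
                break
    return successes / num_rollouts

if __name__ == "__main__":
    # Toy chain MDP: states 0..10, actions {-1, +1}, goal = 10.
    policy = lambda s, g, rng: 1 if rng.random() < 0.7 else -1   # biased toward the goal
    env_step = lambda s, a, rng: int(np.clip(s + a, 0, 10))
    print("estimated success probability:", estimate_success_prob(policy, env_step, 0, 10))
```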
2.1 PLANNING OVER SUB-GOAL SEQUENCES . Assume a given goal-directed policy π , which we also refer to as the low-level policy . If π is not already optimal , we can potentially improve it by planning : If π has a low probability of directly reaching s∞ from the initial state s0 , i.e . vπ ( s0 , s∞ ) ≈ 0 , we will try to find a plan consisting of a sequence of intermediate sub-goals such that they guide π from the start s0 to the goal state s∞ . Concretely , let S∗ = ∪∞n=0Sn be the set of sequences over S , and let |σ| be the length of a sequence σ ∈ S∗ . We define for convenience S̄ : = S ∪ { ∅ } , where ∅ is the empty sequence representing no sub-goal . We refer to σ as a plan for task ( s0 , s∞ ) if σ1 = s0 and σ|σ| = s∞ , i.e . if the first and last elements of σ are equal to s0 and s∞ , respectively . s0S∗s∞ denotes the set of plans for this task . To execute a plan σ , we construct a policy πσ by conditioning the low-level policy π on each of the sub-goals in order : Starting with n = 1 , we feed sub-goal σn+1 to π , i.e . we run πσn+1 ; if σn+1 is reached , we will execute πσn+2 and so on . We now wish to do open-loop planning , i.e . find the plan with the highest success probability P ( s∞ ∈ τπσs0 ) of reaching s∞ . However , this success probability depends on the transition kernels of the underlying MDPs , which might not be known . We can instead define planning as maximizing the following lower bound of the success probability , that can be expressed in terms of the low-level value vπ . Proposition 1 ( Lower bound of success probability ) . The success probability P ( s∞ ∈ τπσs0 ) ≥ L ( σ ) of a plan σ is bounded from below by L ( σ ) : = ∏|σ|−1 i=1 v π ( σi , σi+1 ) , i.e . the product of the success probabilities of π on the sub-tasks defined by ( σi , σi+1 ) . The straight-forward proof is given in Appendix A.1 . Intuitively , L ( σ ) is a lower bound for the success of πσ , as it neglects the probability of “ accidentally ” ( due to stochasticity of the policy or transitions ) running into the goal s∞ before having executed the full plan . We summarize : Definition 1 ( Open-Loop Goal-Directed Planning ) . Given a goal-directed policy π and its corresponding value oracle vπ , we define planning as maximizing L ( σ ) over σ ∈ s0S∗s∞ , i.e . the set of plans for task ( s0 , s∞ ) . We define the high-level ( HL ) value v∗ ( s0 , s∞ ) : = maxσ L ( σ ) as the maximum value of the planning objective . Note the difference between the low-level value vπ and the high-level v∗ . vπ ( s , s′ ) is the probability of the agent directly reaching s′ from s following π , whereas v∗ ( s , s′ ) the probability reaching s′ from s under the optimal plan , which likely includes intermediate sub-goals . In particular , v∗ ≥ vπ . 2.2 AND/OR SEARCH TREE REPRESENTATION . In the following we cast the planning problem into a representation amenable to efficient search . To this end , we use the natural compositionality of plans : We can concatenate a plan σ for the task ( s , s′ ) and a plan σ̂ for the task ( s′ , s′′ ) into a plan σ ◦ σ̂ for the task ( s , s′′ ) . Conversely , we can decompose any given plan σ for task ( s0 , s∞ ) by splitting it at any sub-goal s ∈ σ into σ = σl ◦ σr , where σl is the “ left ” sub-plan for task ( s0 , s ) , and σr is the “ right ” sub-plan for task ( s , s∞ ) . Trivially , the planning objective and the optimal high-level value factorize wrt . to this decomposition : L ( σl ◦ σr ) = L ( σl ) L ( σr ) v∗ ( s0 , s∞ ) = max s∈S̄ v∗ ( s0 , s ) · v∗ ( s , s∞ ) . 
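A small sketch of the planning objective from Proposition 1 and Definition 1 is given below; the value oracle `v` is a made-up function of the gap between integer states, chosen only so that inserting a sub-goal visibly raises the lower bound.

```python
# Sketch: the planning objective of Definition 1. Given a low-level value oracle
# v(s, s') (success probability of the low-level policy on sub-task (s, s')), the
# value of a plan sigma = (s0, ..., s_inf) is the product of sub-task values
# (Proposition 1), a lower bound on the plan's overall success probability.
import numpy as np

def plan_lower_bound(sigma, v):
    """L(sigma) = prod_i v(sigma_i, sigma_{i+1})."""
    return float(np.prod([v(sigma[i], sigma[i + 1]) for i in range(len(sigma) - 1)]))

if __name__ == "__main__":
    # Toy oracle: success drops quickly with the squared gap between integer states.
    v = lambda s, sp: np.exp(-0.1 * (sp - s) ** 2)
    print(plan_lower_bound([0, 10], v))        # direct attempt: very low bound
    print(plan_lower_bound([0, 5, 10], v))     # one sub-goal in the middle: much higher bound
```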
This allows us to recursively reformulate planning as : arg max s∈S̄ ( arg max σl∈s0S∗s L ( σl ) ) · ( arg max σr∈sS∗s∞ L ( σr ) ) . ( 1 ) The above equations are the Bellman equations and the Bellman optimality equations for the classical single pair shortest path problem in graphs , where edge weights are given by − log vπ ( s , s′ ) . We can represent this planning problem by an AND/OR search tree ( Nilsson , N. J. , 1980 ) with alternating levels of OR and AND nodes . An OR node , also termed an action node , is labeled by a task 2We assume MDPs with multiple absorbing states such that this probability is not trivially equal to 1 for most policies , e.g . uniform policy . In experiments , we used a finite episode length . ( s , s′′ ) ∈ S × S ; the root of the search tree is an OR node labeled by the original task ( s0 , s∞ ) . A terminal OR node ( s , s′′ ) has a value vπ ( s , s′′ ) attached to it , which reflects the success probability of πs′′ for completing the sub-task ( s , s′′ ) . Each non-terminal OR node has |S|+ 1 AND nodes as children . Each of these is labeled by a triple ( s , s′ , s′′ ) for s′ ∈ S̄ , which correspond to inserting a sub-goal s′ into the overall plan , or not inserting one in case of s = ∅ . Every AND node ( s , s′ , s′′ ) , or conjunction node , has two OR children , the “ left ” sub-task ( s , s′ ) and the “ right ” sub-task ( s′ , s′′ ) . In this representation , plans are induced by solution trees . A solution tree Tσ is a sub-tree of the complete AND/OR search tree , with the properties that ( i ) the root ( s0 , s∞ ) ∈ Tσ , ( ii ) each OR node in Tσ has at most one child in Tσ and ( iii ) each AND node in Tσ as two children in Tσ . The plan σ and its objective L ( σ ) can be computed from Tσ by a depth-first traversal of Tσ . The correspondence of sub-trees to plans is many-to-one , as Tσ , in addition to the plan itself , contains the order in which the plan was constructed . Figure 6 in Section 5.3 shows an example for a search and solution tree . Below we will discuss how to construct a favourable search order heuristic . | This paper proposes Divide-and-Conquer Monte Carlo Tree Search (DC-MCTS) for for goal-directed planning problems (i.e. problems where reaching a specific goal state is the objective, like traversing a maze with specified start and goal positions). The assumed setting is one where transition and reward models of the environment are not (necessarily) available, but a low-level goal-directed policy that can attempt to navigate from a given start to a given goal position, as well as value oracle that can return the success probability of the low-level policy on any given task, are available. Planning problems are modelled as AND/OR search trees, where OR nodes are labelled by a start state s and a goal state s'', and AND nodes are labelled by triples (s, s', s''). An OR node has children for every possible state, such that traversing to a child indicates the insertion of the corresponding state as an additional subgoal in between s and s', plus one extra child to indicate the choice of returning the current plan without inserting any additional subgoals. AND nodes have two children; one OR node for the first half (s, s') of the plan, and a second OR node for the second half (s', s'') of the plan. The MCTS can construct a plan by inserting subgoals such that they become easier to solve for the low-level policy by searching this tree. | SP:6deef1227ab2e0bf5dd2880ea7f3947490fb521d |
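As a concrete (if brute-force) illustration of the recursion in Eq. (1), the sketch below computes v* over a small finite state set with a max-product Floyd–Warshall pass, which is exactly the all-pairs shortest-path computation with edge weights −log vπ(s, s'); the matrix `V` of low-level success probabilities is randomly generated, and this exhaustive enumeration is what the tree search with learned sub-goal proposals is meant to avoid.

```python
# Sketch: solving v*(s, s'') = max_{s'} v*(s, s') * v*(s', s'') with a max-product
# Floyd-Warshall pass over a finite state set. `V` is a placeholder matrix of
# low-level success probabilities V[i, j] = v_pi(s_i, s_j).
import numpy as np

def high_level_values(V):
    """Returns v* and, for each pair, the best intermediate sub-goal (-1 means none inserted)."""
    n = V.shape[0]
    v_star = V.copy()
    mid = -np.ones((n, n), dtype=int)
    for k in range(n):                                   # consider s_k as an intermediate sub-goal
        via_k = np.outer(v_star[:, k], v_star[k, :])     # v*(i, k) * v*(k, j) for all (i, j)
        improve = via_k > v_star
        v_star = np.where(improve, via_k, v_star)
        mid = np.where(improve, k, mid)
    return v_star, mid

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V = rng.uniform(0.1, 0.9, size=(6, 6))
    v_star, mid = high_level_values(V)
    print("v_pi(0,5) =", round(V[0, 5], 3), " v*(0,5) =", round(v_star[0, 5], 3))
```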
Divide-and-Conquer Monte Carlo Tree Search | 1 INTRODUCTION . This is the first sentence of this paper , but it was not the first one we wrote . In fact , the entire introduction section was actually one of the last sections to be added to this manuscript . The discrepancy between the order of inception of ideas and the order of their presentation in this paper probably does not come as a surprise to the reader . Nonetheless , it serves as a point for reflection that is central to the rest of this work , and that can be summarized as “ the order in which we construct a plan does not have to coincide with the order in which we execute it ” . Most standard planners for sequential decision making problems—including Monte Carlo planning , Monte Carlo Tree Search ( MCTS ) and dynamic programming—have a baked-in sequential planning assumption ( Bertsekas et al. , 1995 ; Browne et al. , 2012 ) . These methods begin at either the initial or final state and then plan actions sequentially forward or backwards in time . However , this sequential approach faces two main challenges . ( i ) The transition model used for planning needs to be reliable over long horizons , which is often difficult to achieve when it has to be inferred from data . ( ii ) Credit assignment to each individual action is difficult : In a planning problem spanning a horizon of 100 steps , to assign credit to the first action , we have to compute the optimal cost-to-go for the remaining problem with a horizon of 99 steps , which is only slightly easier than solving the original problem . To overcome these two fundamental challenges , here we consider alternatives to the basic assumptions of sequential planners . We focus on goal-directed decision making problems where an agent should reach a goal state from a start state . Instead of a transition and reward model of the environment , we assume a given goal-directed policy ( the “ low-level ” policy ) and the associated value oracle that returns its success probability on any given task.1 In general , a low-level policy will not be not optimal , e.g . it might be too “ myopic ” to reliably reach goal states that are far away from its current state . We now seek to improve the low-level policy via a suitable sequence of sub-goals that guide it from the start to the final goal , thus maximizing the overall task success probability . This formulation of planning as finding good sub-goal sequences , makes learning of explicit environment models unnecessary , as they are replaced by low-level policies and their value functions . The sub-goal planning problem can still be solved by a conventional sequential planner that begins by searching for the first sub-goal to reach from the start state , then planning the next sub-goal in sequence , and so on . Indeed , this is the approach taken in most hierarchical RL settings based on options or sub-goals ( e.g . Dayan & Hinton , 1993 ; Sutton et al. , 1999 ; Vezhnevets et al. , 2017 ) . However , the credit assignment problem mentioned above persists , as assessing if the first sub-goal is useful still requires evaluating the success probability of the remaining plan . Instead , it could be substantially easier to reason about the utility of a sub-goal “ in the middle ” of the plan , as this breaks the long-horizon problem into two sub-problems with much shorter horizons : how to get to the sub-goal and how to get from there to the final goal . 
Based on this intuition , we propose the Divide-and-Conquer MCTS ( DC-MCTS ) planner that searches for sub-goals to split the original task into two independent sub-tasks of comparable complexity and then recursively solves these , thereby drastically facilitating credit assignment . To search the space of intermediate sub-goals efficiently , DC-MCTS uses a heuristic for proposing promising sub-goals that is learned from previous search results and agent experience . Humans can plan efficiently over long horizons to solve complex tasks , such as theorem proving or navigation , and some plans even span over decades ( e.g . economic measures ) : In these situations , planning sequentially in terms of next steps – such as what arm to move , or what phone call to make – will cover a tiny proportion of the horizon , neglecting the long uncertainty beyond the last planned step . The algorithm put forward in this paper is a step in the direction of efficient planners that tackle long horizons by recursively and parallelly splitting them into many smaller and smaller sub-problems . In Section 2 , we formulate planning in terms of sub-goals instead of primitive actions . In Section 3 , as our main contribution , we propose the novel Divide-and-Conquer Monte Carlo Tree Search algorithm for this planning problem . In Section 4 we position DC-MCTS within the literature of related work . In Section 5 , we show that it outperforms sequential planners both on grid world and continuous control navigation tasks , demonstrating the utility of constructing plans in a flexible order that can be different from their execution order . 2 IMPROVING GOAL-DIRECTED POLICIES WITH PLANNING . Let S and A be finite sets of states and actions . We consider a multi-task setting , where for each episode the agent has to solve a new task consisting of a new Markov Decision Process ( MDP ) M over S and A. EachM has a single start state s0 and a special absorbing state s∞ , also termed the goal state . If the agent transitions into s∞ at any time it receives a reward of 1 and the episode terminates ; otherwise the reward is 0 . We assume that the agent observes the start and goal states ( s0 , s∞ ) at the beginning of each episode , as well as an encoding vector cM ∈ Rd . This vector provides the agent with additional information about the MDPM of the current episode and will be key to transfer learning across tasks in the multi-task setting . A stochastic , goal-directed policy π is a mapping from S × S × Rd into distributions over A , where π ( a|s , s∞ , cM ) denotes the probability of taking action a in state s in order to get to goal s∞ . For a fixed goal s∞ , we can interpret π as a regular policy , here denoted as πs∞ , mapping states to action probabilities . We denote the value of π in state s for goal s∞ as vπ ( s , s∞|cM ) ; we assume no discounting γ = 1 . Under the above definition of the reward , the value is equal to the success probability of π on the task , i.e . the absorption probability of the stochastic process starting in s0 defined by running πs∞ : vπ ( s0 , s∞|cM ) = P ( s∞ ∈ τ πs∞ s0 |cM ) , 1As we will observe in Section 5 , in practice both the low-level policy and value can be learned . Approximating the value oracle with a learned value function was sufficient for DC-MCTS to plan successfully . where τπs∞s0 is the trajectory generated by running πs∞ from state s0 2 . To keep the notation compact , we will omit the explicit dependence on cM and abbreviate tasks with pairs of states in S × S . 
2.1 PLANNING OVER SUB-GOAL SEQUENCES . Assume a given goal-directed policy π , which we also refer to as the low-level policy . If π is not already optimal , we can potentially improve it by planning : If π has a low probability of directly reaching s∞ from the initial state s0 , i.e . vπ ( s0 , s∞ ) ≈ 0 , we will try to find a plan consisting of a sequence of intermediate sub-goals such that they guide π from the start s0 to the goal state s∞ . Concretely , let S∗ = ∪∞n=0Sn be the set of sequences over S , and let |σ| be the length of a sequence σ ∈ S∗ . We define for convenience S̄ : = S ∪ { ∅ } , where ∅ is the empty sequence representing no sub-goal . We refer to σ as a plan for task ( s0 , s∞ ) if σ1 = s0 and σ|σ| = s∞ , i.e . if the first and last elements of σ are equal to s0 and s∞ , respectively . s0S∗s∞ denotes the set of plans for this task . To execute a plan σ , we construct a policy πσ by conditioning the low-level policy π on each of the sub-goals in order : Starting with n = 1 , we feed sub-goal σn+1 to π , i.e . we run πσn+1 ; if σn+1 is reached , we will execute πσn+2 and so on . We now wish to do open-loop planning , i.e . find the plan with the highest success probability P ( s∞ ∈ τπσs0 ) of reaching s∞ . However , this success probability depends on the transition kernels of the underlying MDPs , which might not be known . We can instead define planning as maximizing the following lower bound of the success probability , that can be expressed in terms of the low-level value vπ . Proposition 1 ( Lower bound of success probability ) . The success probability P ( s∞ ∈ τπσs0 ) ≥ L ( σ ) of a plan σ is bounded from below by L ( σ ) : = ∏|σ|−1 i=1 v π ( σi , σi+1 ) , i.e . the product of the success probabilities of π on the sub-tasks defined by ( σi , σi+1 ) . The straight-forward proof is given in Appendix A.1 . Intuitively , L ( σ ) is a lower bound for the success of πσ , as it neglects the probability of “ accidentally ” ( due to stochasticity of the policy or transitions ) running into the goal s∞ before having executed the full plan . We summarize : Definition 1 ( Open-Loop Goal-Directed Planning ) . Given a goal-directed policy π and its corresponding value oracle vπ , we define planning as maximizing L ( σ ) over σ ∈ s0S∗s∞ , i.e . the set of plans for task ( s0 , s∞ ) . We define the high-level ( HL ) value v∗ ( s0 , s∞ ) : = maxσ L ( σ ) as the maximum value of the planning objective . Note the difference between the low-level value vπ and the high-level v∗ . vπ ( s , s′ ) is the probability of the agent directly reaching s′ from s following π , whereas v∗ ( s , s′ ) the probability reaching s′ from s under the optimal plan , which likely includes intermediate sub-goals . In particular , v∗ ≥ vπ . 2.2 AND/OR SEARCH TREE REPRESENTATION . In the following we cast the planning problem into a representation amenable to efficient search . To this end , we use the natural compositionality of plans : We can concatenate a plan σ for the task ( s , s′ ) and a plan σ̂ for the task ( s′ , s′′ ) into a plan σ ◦ σ̂ for the task ( s , s′′ ) . Conversely , we can decompose any given plan σ for task ( s0 , s∞ ) by splitting it at any sub-goal s ∈ σ into σ = σl ◦ σr , where σl is the “ left ” sub-plan for task ( s0 , s ) , and σr is the “ right ” sub-plan for task ( s , s∞ ) . Trivially , the planning objective and the optimal high-level value factorize wrt . to this decomposition : L ( σl ◦ σr ) = L ( σl ) L ( σr ) v∗ ( s0 , s∞ ) = max s∈S̄ v∗ ( s0 , s ) · v∗ ( s , s∞ ) . 
This allows us to recursively reformulate planning as : arg max s∈S̄ ( arg max σl∈s0S∗s L ( σl ) ) · ( arg max σr∈sS∗s∞ L ( σr ) ) . ( 1 ) The above equations are the Bellman equations and the Bellman optimality equations for the classical single pair shortest path problem in graphs , where edge weights are given by − log vπ ( s , s′ ) . We can represent this planning problem by an AND/OR search tree ( Nilsson , N. J. , 1980 ) with alternating levels of OR and AND nodes . An OR node , also termed an action node , is labeled by a task 2We assume MDPs with multiple absorbing states such that this probability is not trivially equal to 1 for most policies , e.g . uniform policy . In experiments , we used a finite episode length . ( s , s′′ ) ∈ S × S ; the root of the search tree is an OR node labeled by the original task ( s0 , s∞ ) . A terminal OR node ( s , s′′ ) has a value vπ ( s , s′′ ) attached to it , which reflects the success probability of πs′′ for completing the sub-task ( s , s′′ ) . Each non-terminal OR node has |S|+ 1 AND nodes as children . Each of these is labeled by a triple ( s , s′ , s′′ ) for s′ ∈ S̄ , which correspond to inserting a sub-goal s′ into the overall plan , or not inserting one in case of s = ∅ . Every AND node ( s , s′ , s′′ ) , or conjunction node , has two OR children , the “ left ” sub-task ( s , s′ ) and the “ right ” sub-task ( s′ , s′′ ) . In this representation , plans are induced by solution trees . A solution tree Tσ is a sub-tree of the complete AND/OR search tree , with the properties that ( i ) the root ( s0 , s∞ ) ∈ Tσ , ( ii ) each OR node in Tσ has at most one child in Tσ and ( iii ) each AND node in Tσ as two children in Tσ . The plan σ and its objective L ( σ ) can be computed from Tσ by a depth-first traversal of Tσ . The correspondence of sub-trees to plans is many-to-one , as Tσ , in addition to the plan itself , contains the order in which the plan was constructed . Figure 6 in Section 5.3 shows an example for a search and solution tree . Below we will discuss how to construct a favourable search order heuristic . | This paper proposes Divide-and-Conquer Monte Carlo Tree Search (DC-MCTS), a planning algorithm for goal-directed decision-making problems, which makes a plan of the trajectory via recursive hierarchical partitioning. DC-MCTS assumes a (suboptimal) goal-directed low-level policy and its oracle value function. Then, it formulates the given planning problem as finding a sequence of sub-goal states and applies the divide-and-conquer strategy, i.e. split the original task into two sub-tasks (defined as initial state and goal state) and recursively solve them. Unlike the standard MCTS, the decision making of DC-MCTS operates not on the action space but on the state space of the problem, and the decision is made non-sequential way. Experimental results show that DC-MCTS outperforms the MCTS baseline that expands only the right sub-problem. | SP:6deef1227ab2e0bf5dd2880ea7f3947490fb521d |
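To make the solution-tree notion concrete, the sketch below builds a small hand-crafted tree and performs the depth-first traversal that reads off the plan σ and its objective L(σ). The `OrNode` record, which merges an OR node with its chosen AND child, and the toy value oracle `v` are illustrative assumptions, not the representation used in the actual implementation.

```python
# Sketch: a solution tree for the AND/OR formulation and the depth-first traversal
# that extracts the plan sigma and its objective L(sigma). The tree is hand-built
# and `v` is a placeholder low-level value oracle.
from dataclasses import dataclass
from typing import Optional
import math

@dataclass
class OrNode:
    start: int
    goal: int
    sub_goal: Optional[int] = None          # None: leaf, run the low-level policy directly
    left: Optional["OrNode"] = None         # solves (start, sub_goal)
    right: Optional["OrNode"] = None        # solves (sub_goal, goal)

def traverse(node, v):
    """Depth-first traversal: returns (plan, L(plan)) for the sub-tree rooted at node."""
    if node.sub_goal is None:
        return [node.start, node.goal], v(node.start, node.goal)
    left_plan, left_val = traverse(node.left, v)
    right_plan, right_val = traverse(node.right, v)
    return left_plan[:-1] + right_plan, left_val * right_val

if __name__ == "__main__":
    v = lambda s, sp: math.exp(-0.1 * (sp - s) ** 2)
    tree = OrNode(0, 10, sub_goal=5,
                  left=OrNode(0, 5, sub_goal=2, left=OrNode(0, 2), right=OrNode(2, 5)),
                  right=OrNode(5, 10))
    plan, value = traverse(tree, v)
    print("plan:", plan, " L(plan) =", round(value, 4))
```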
Asynchronous Advantage Actor Critic: Non-asymptotic Analysis and Linear Speedup | Asynchronous and parallel implementation of standard reinforcement learning ( RL ) algorithms is a key enabler of the tremendous success of modern RL . Among many asynchronous RL algorithms , arguably the most popular and effective one is the asynchronous advantage actor-critic ( A3C ) algorithm . Although A3C is becoming the workhorse of RL , its theoretical properties are still not well-understood , including the non-asymptotic analysis and the performance gain of parallelism ( a.k.a . speedup ) . This paper revisits the A3C algorithm with TD ( 0 ) for the critic update , termed A3C-TD ( 0 ) , with provable convergence guarantees . With linear value function approximation for the TD update , the convergence of A3C-TD ( 0 ) is established under both i.i.d . and Markovian sampling . Under i.i.d . sampling , A3CTD ( 0 ) obtains sample complexity ofO ( −2.5/N ) per worker to achieve accuracy , where N is the number of workers . Compared to the best-known sample complexity of O ( −2.5 ) for two-timescale AC , A3C-TD ( 0 ) achieves linear speedup , which justifies the advantage of parallelism and asynchrony in AC algorithms theoretically for the first time . Numerical tests on synthetically generated instances and OpenAI Gym environments have been provided to verify our theoretical analysis . 1 INTRODUCTION Reinforcement learning ( RL ) has achieved impressive performance in many domains such as robotics [ 1 , 2 ] and video games [ 3 ] . However , these empirical successes are often at the expense of significant computation . To unlock high computation capabilities , the state-of-the-art RL approaches rely on sampling data from massive parallel simulators on multiple machines [ 3 , 4 , 5 ] . Empirically , these approaches can stabilize the learning processes and reduce training time when they are implemented in an asynchronous manner . One popular RL method that often achieves the best empirical performance is the asynchronous variant of the actor-critic ( AC ) algorithm , which is referred to as A3C [ 3 ] . A3C builds on the original AC algorithm [ 6 ] . At a high level , AC simultaneously performs policy optimization ( a.k.a . the actor step ) using the policy gradient method [ 7 ] and policy evaluation ( a.k.a . the critic step ) using the temporal difference learning ( TD ) algorithm [ 8 ] . To ensure scalability , both actor and critic steps can combine with various function approximation techniques . To ensure stability , AC is often implemented in a two time-scale fashion , where the actor step runs in the slow timescale and the critic step runs in the fast timescale . Similar to other on-policy RL algorithms , AC uses samples generated from the target policy . Thus , data sampling is entangled with the learning procedure , which generates significant overhead . To speed up the sampling process of AC , A3C introduces multiple workers with a shared policy , and each learner has its own simulator to perform data sampling . The shared policy can be then updated using samples collected from multiple learners . Despite the tremendous empirical success achieved by A3C , to the best of our knowledge , its theoretical property is not well-understood . The following theoretical questions remain unclear : Q1 ) Under what assumption does A3C converge ? Q2 ) What is its convergence rate ? Q3 ) Can A3C obtain benefit ( or speedup ) using parallelism and asynchrony ? 
For Q3 ) , we are interested in the training time linear speedup with N workers , which is the ratio between the training time using a single worker and that using N workers . Since asynchronous parallelism mitigates the effect of stragglers and keeps all workers busy , the training time speedup can be measured roughly by the sample ( i.e. , computational ) complexity linear speedup [ 9 ] , given by Speedup ( N ) = sample complexity when using one worker average sample complexity per worker when using N workers . ( 1 ) If Speedup ( N ) = Θ ( N ) , the speedup is linear , and the training time roughly reduces linearly as the number of workers increases . This paper aims to answer these questions , towards the goal of providing theoretical justification for the empirical successes of parallel and asynchronous RL . 1.1 RELATED WORKS Analysis of actor critic algorithms . AC method was first proposed by [ 6 , 10 ] , with asymptotic convergence guarantees provided in [ 6 , 10 , 11 ] . It was not until recently that the non-asymptotic analyses of AC have been established . The finite-sample guarantee for the batch AC algorithm has been established in [ 12 , 13 ] with i.i.d . sampling . Later , in [ 14 ] , the finite-sample analysis was established for the double-loop nested AC algorithm under the Markovian setting . An improved analysis for the Markovian setting with minibatch updates has been presented in [ 15 ] for the nested AC method . More recently , [ 16 , 17 ] have provided the first finite-time analyses for the two-timescale AC algorithms under Markov sampling , with both Õ ( −2.5 ) sample complexity , which is the bestknown sample complexity for two-timescale AC . Through the lens of bi-level optimization , [ 18 ] has also provided finite-sample guarantees for this two-timescale Markov sampling setting , with global optimality guarantees when a natural policy gradient step is used in the actor . However , none of the existing works has analyzed the effect of the asynchronous and parallel updates in AC . Empirical parallel and distributed AC . In [ 3 ] , the original A3C method was proposed and became the workhorse in empirical RL . Later , [ 19 ] has provided a GPU-version of A3C which significantly decreases training time . Recently , the A3C algorithm is further optimized in modern computers by [ 20 ] , where a large batch variant of A3C with improved efficiency is also proposed . In [ 21 ] , an importance weighted distributed AC algorithm IMPALA has been developed to solve a collection of problems with one single set of parameters . Recently , a gossip-based distributed yet synchronous AC algorithm has been proposed in [ 5 ] , which has achieved the performance competitive to A3C . Asynchronous stochastic optimization . For solving general optimization problems , asynchronous stochastic methods have received much attention recently . The study of asynchronous stochastic methods can be traced back to 1980s [ 22 ] . With the batch size M , [ 23 ] analyzed asynchronous SGD ( async-SGD ) for convex functions , and derived a convergence rate of O ( K− 12M− 12 ) if delay K0 is bounded by O ( K 1 4M− 3 4 ) . This result implies linear speedup . [ 24 ] extended the analysis of [ 23 ] to smooth convex with nonsmooth regularization and derived a similar rate . Recent studies by [ 25 ] improved upper bound of K0 to O ( K 1 2M− 1 2 ) . 
However , all these works have focused on the single-timescale SGD with a single variable , which can not capture the stochastic recursion of the AC and A3C algorithms . To best of our knowledge , non-asymptotic analysis of asynchronous two-timescale SGD has remained unaddressed , and its speedup analysis is even an uncharted territory . 1.2 THIS WORK In this context , we revisit A3C with TD ( 0 ) for the critic update , termed A3C-TD ( 0 ) . The hope is to provide non-asymptotic guarantee and linear speedup justification for this popular algorithm . Our contributions . Compared to the existing literature on both the AC algorithms and the asyncSGD , our contributions can be summarized as follows . c1 ) We revisit two-timescale A3C-TD ( 0 ) and establish its convergence rates with both i.i.d . and Markovian sampling . To the best of our knowledge , this is the first non-asymptotic convergence result for asynchronous parallel AC algorithms . c2 ) We characterize the sample complexity of A3C-TD ( 0 ) . In i.i.d . setting , A3C-TD ( 0 ) achieves a sample complexity of O ( −2.5/N ) per worker , where N is the number of workers . Compared to the best-known complexity of O ( −2.5 ) for i.i.d . two-timescale AC [ 18 ] , A3C-TD ( 0 ) achieves linear speedup , thanks to the parallelism and asynchrony . In the Markovian setting , if delay is bounded , the sample complexity of A3C-TD ( 0 ) matches the order of the non-parallel AC algorithm [ 17 ] . c3 ) We test A3C-TD ( 0 ) on the synthetically generated environment to verify our theoretical guarantees with both i.i.d . and Markovian sampling . We also test A3C-TD ( 0 ) on the classic control tasks and Atari Games from OpenAI Gym . Code is available in the supplementary material . Technical challenges . Compared to the recent convergence analysis of nonparallel two-timescale AC in [ 16 , 17 , 18 ] , several new challenges arise due to the parallelism and asynchrony . Markovian noise coupled with asynchrony and delay . The analysis of two-timescale AC algorithm is non-trivial because of the Markovian noise coupled with both the actor and critic steps . Different from the nonparallel AC that only involves a single Markov chain , asynchronous parallel AC introduces multiple Markov chains ( one per worker ) that mix at different speed . This is because at a given iteration , workers collect different number of samples and thus their chains mix to different degrees . As we will show later , the worker with the slowest mixing chain will determine the convergence . Linear speedup for SGD with two coupled sequences . Parallel async-SGD has been shown to achieve linear speedup recently [ 9 , 26 ] . Different from async-SGD , asynchronous AC is a two-timescale stochastic semi-gradient algorithm for solving the more challenging bilevel optimization problem ( see [ 18 ] ) . The errors induced by asynchrony and delay are intertwined with both actor and critic updates via a nested structure , which makes the sharp analysis more challenging . Our linear speedup analysis should be also distinguished from that of mini-batch async-SGD [ 27 ] , where the speedup is a result of variance reduction thanks to the larger batch size generated by parallel workers . 
2 PRELIMINARIES 2.1 MARKOV DECISION PROCESS AND POLICY GRADIENT THEOREM RL problems are often modeled as an MDP described byM = { S , A , P , r , γ } , where S is the state space , A is the action space , P ( s′|s , a ) is the probability of transitioning to s′ ∈ S given current state s ∈ S and action a ∈ A , and r ( s , a , s′ ) is the reward associated with the transition ( s , a , s′ ) , and γ ∈ [ 0 , 1 ) is a discount factor . Throughout the paper , we assume the reward r is upper-bounded by a constant rmax . A policy π : S → ∆ ( A ) is defined as a mapping from the state space S to the probability distribution over the action space A . Considering discrete time t in an infinite horizon , a policy π can generate a trajectory of state-action pairs ( s0 , a0 , s1 , a1 , . . . ) with at ∼ π ( ·|st ) and st+1 ∼ P ( ·|st , at ) . Given a policy π , we define the state and state action value functions as Vπ ( s ) : = E [ ∞∑ t=0 γtr ( st , at , st+1 ) | s0 = s ] , Qπ ( s , a ) : = E [ ∞∑ t=0 γtr ( st , at , st+1 ) | s0 = s , a0 = a ] ( 2 ) where E is taken over the trajectory ( s0 , a0 , s1 , a1 , . . . ) generated under policy π . With the above definitions , the advantage function is Aπ ( s , a ) : = Qπ ( s , a ) − Vπ ( s ) . With η denoting the initial state distribution , the discounted state visitation measure induced by policy π is defined as dπ ( s ) : = ( 1 − γ ) ∑∞ t=0 γ tP ( st = s | s0 ∼ η , π ) , and the discounted state action visitation measure is d′π ( s , a ) = ( 1− γ ) ∑∞ t=0 γ tP ( st = s | s0 ∼ η , π ) π ( a|s ) . The goal of RL is to find a policy that maximizes the expected accumulative reward J ( π ) : = Es∼η [ Vπ ( s ) ] . When the state and action spaces are large , finding the optimal policy π becomes computationally intractable . To overcome the inherent difficulty of learning a function , the policy gradient methods search the best performing policy over a class of parameterized policies . We parameterize the policy with parameter θ ∈ Rd , and solve the optimization problem as max θ∈Rd J ( θ ) with J ( θ ) : = E s∼η [ Vπθ ( s ) ] . ( 3 ) To maximize J ( θ ) with respect to θ , one can update θ using the policy gradient direction given by [ 7 ] ∇J ( θ ) = E s , a∼d′ θ [ Aπθ ( s , a ) ψθ ( s , a ) ] , ( 4 ) where ψθ ( s , a ) : = ∇ log πθ ( a|s ) , and d′θ : = ( 1 − γ ) ∑∞ t=0 γ tP ( st = s | s0 , πθ ) πθ ( a|s ) . Since computing E in ( 4 ) is expensive if not impossible , popular policy gradient-based algorithms iteratively update θ using stochastic estimate of ( 4 ) such as REINFORCE [ 28 ] and G ( PO ) MDP [ 29 ] . 2.2 ACTOR-CRITIC ALGORITHM WITH VALUE FUNCTION APPROXIMATION Both REINFORCE and G ( PO ) MDP-based policy gradient algorithms rely on a Monte-Carlo estimate of the value function Vπθ ( s ) and thus ∇J ( θ ) by generating a trajectory per iteration . However , policy gradient methods based on Monte-Carlo estimate typically suffer from high variance and large sampling cost . An alternative way is to recursively refine the estimate of Vπθ ( s ) . For a policy πθ , it is known that Vπθ ( s ) satisfies the Bellman equation [ 30 ] , that is Vπθ ( s ) = E a∼πθ ( ·|s ) , s′∼P ( ·|s , a ) [ r ( s , a , s′ ) + γVπθ ( s ′ ) ] , ∀s ∈ S. ( 5 ) In practice , when the state space S is prohibitively large , one can not afford the computational and memory complexity of computing Vπθ ( s ) and Aπθ ( s , a ) . To overcome this curse-of-dimensionality , a popular method is to approximate the value function using function approximation techniques . 
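Before moving to the actor-critic scheme, the Monte-Carlo estimators mentioned above (REINFORCE / G(PO)MDP) can be sketched for a tabular softmax policy as follows; the toy trajectory, reward values, and the small state/action sizes are placeholders, and this high-variance estimator is precisely what the critic-based update in the next subsection replaces.

```python
# Sketch: a Monte-Carlo (REINFORCE-style) estimate of the policy gradient (4) for a
# tabular softmax policy, using the score function psi_theta = grad log pi_theta.
import numpy as np

def softmax_policy(theta, s):
    logits = theta[s]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def score(theta, s, a):
    """psi_theta(s, a) for a tabular softmax policy: one-hot(a) - pi(.|s), at row s only."""
    g = np.zeros_like(theta)
    g[s] = -softmax_policy(theta, s)
    g[s, a] += 1.0
    return g

def reinforce_gradient(theta, trajectory, gamma):
    """trajectory: list of (s, a, r). Returns sum_t gamma^t * G_t * psi_theta(s_t, a_t)."""
    rewards = np.array([r for (_, _, r) in trajectory], dtype=float)
    returns = np.zeros(len(rewards))
    G = 0.0
    for t in reversed(range(len(rewards))):       # discounted reward-to-go
        G = rewards[t] + gamma * G
        returns[t] = G
    grad = np.zeros_like(theta)
    for t, (s, a, _) in enumerate(trajectory):
        grad += (gamma ** t) * returns[t] * score(theta, s, a)
    return grad

if __name__ == "__main__":
    theta = np.zeros((4, 2))                      # 4 states, 2 actions
    traj = [(0, 1, 0.0), (1, 0, 0.0), (2, 1, 1.0)]
    print(reinforce_gradient(theta, traj, gamma=0.9))
```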
Given the state feature mapping $\phi(\cdot): S \to \mathbb{R}^{d'}$ for some $d' > 0$, we approximate the value function linearly as $V_{\pi_\theta}(s) \approx \hat{V}_\omega(s) := \phi(s)^\top \omega$, where $\omega \in \mathbb{R}^{d'}$ is the critic parameter. Given a policy $\pi_\theta$, the task of finding the best $\omega$ such that $V_{\pi_\theta}(s) \approx \hat{V}_\omega(s)$ is usually addressed by TD learning [8]. Defining the kth transition as $x_k := (s_k, a_k, s_{k+1})$ and the corresponding TD error as $\hat{\delta}(x_k, \omega_k) := r(s_k, a_k, s_{k+1}) + \gamma \phi(s_{k+1})^\top \omega_k - \phi(s_k)^\top \omega_k$, the parameter $\omega$ is updated via $\omega_{k+1} = \Pi_{R_\omega}(\omega_k + \beta_k g(x_k, \omega_k))$ with $g(x_k, \omega_k) := \hat{\delta}(x_k, \omega_k)\, \nabla_{\omega_k} \hat{V}_{\omega_k}(s_k)$, (6) where $\beta_k$ is the critic stepsize, and $\Pi_{R_\omega}$ is the projection onto a ball of radius $R_\omega$, with $R_\omega$ a pre-defined constant. The projection step is often used to control the norm of the gradient. In AC, it prevents the actor and critic updates from taking too large a step in the 'wrong' direction; see e.g., [6, 16, 17]. Using the definition of the advantage function $A_{\pi_\theta}(s, a) = \mathbb{E}_{s' \sim P}[r(s, a, s') + \gamma V_{\pi_\theta}(s')] - V_{\pi_\theta}(s)$, we can also rewrite (4) as $\nabla J(\theta) = \mathbb{E}_{s, a \sim d'_\theta,\, s' \sim P}\big[(r(s, a, s') + \gamma V_{\pi_\theta}(s') - V_{\pi_\theta}(s))\, \psi_\theta(s, a)\big]$. Leveraging the value function approximation, we can then approximate the policy gradient as $\hat{\nabla} J(\theta) = (r(s, a, s') + \gamma \hat{V}_\omega(s') - \hat{V}_\omega(s))\, \psi_\theta(s, a) = \hat{\delta}(x, \omega)\, \psi_\theta(s, a)$, (7) which gives rise to the policy update $\theta_{k+1} = \theta_k + \alpha_k v(x_k, \theta_k, \omega_k)$ with $v(x_k, \theta_k, \omega_k) := \hat{\delta}(x_k, \omega_k)\, \psi_{\theta_k}(s_k, a_k)$, (8) where $\alpha_k$ is the stepsize for the actor update. To ensure convergence when simultaneously performing critic and actor updates, the stepsizes $\alpha_k$ and $\beta_k$ often decay at two different rates, which is referred to as two-timescale AC [17, 18]. 3 ASYNCHRONOUS ADVANTAGE ACTOR CRITIC WITH TD(0) To speed up the training process, we implement AC over N workers in a shared memory setting without coordination among workers — a setting similar to that in A3C [3]. Each worker has its own simulator to perform sampling, and the workers collaboratively update the shared policy $\pi_\theta$ using AC updates. As there is no synchronization after each update, the policy used by workers to generate samples may be outdated, which introduces staleness. Notation for transitions (s, a, s'). Since each worker maintains a separate Markov chain, we hereafter use the subscript t in $(s_t, a_t, s_{t+1})$ to indicate the tth transition on a Markov chain. We use k to denote the global counter (or iteration), which increases by one whenever a worker finishes the actor and critic updates in the shared memory. We use the subscript (k) in $(s_{(k)}, a_{(k)}, s'_{(k)})$ to indicate the transition used in the kth update. Specifically, we initialize $\theta_0, \omega_0$ in the shared memory. Each worker initializes its simulator with initial state $s_0$. Without coordination, workers read $\theta, \omega$ from the shared memory. From each worker's view, it then generates a sample $(s_t, a_t, s_{t+1})$ by either sampling $s_t$ from $\mu_\theta(\cdot)$, where $\mu_\theta(\cdot)$ is the stationary distribution of an artificial MDP with transition probability measure $\tilde{P}(\cdot|s_t, a_t) := \gamma P(\cdot|s_t, a_t) + (1 - \gamma)\eta(\cdot)$, or sampling $s_t$ from a Markov chain under policy $\pi_\theta$. In both cases, each worker obtains $a_t \sim \pi_\theta(\cdot|s_t)$ and $s_{t+1} \sim \tilde{P}(\cdot|s_t, a_t)$. Sampling $s_{t+1}$ from $\tilde{P}(\cdot|s_t, a_t)$ can be achieved by sampling $s_{t+1}$ from $\eta(\cdot)$ with probability $1-\gamma$ and from $P(\cdot|s_t, a_t)$ otherwise.
Algorithm 1 Asynchronous advantage AC with TD(0): each worker's view.
1: Global initialize: global counter k = 0, initial $\theta_0, \omega_0$ in the shared memory.
2: Worker initialize: local counter t = 0. Obtain initial state $s_0$.
3: for t = 0, 1, 2, ... do
4: Read $\theta, \omega$ from the shared memory.
5: Option 1 (i.i.d. sampling):
6: Sample $s_t \sim \mu_\theta(\cdot)$, $a_t \sim \pi_\theta(\cdot|s_t)$, $s_{t+1} \sim \tilde{P}(\cdot|s_t, a_t)$.
7: Option 2 (Markovian sampling):
8: Sample $a_t \sim \pi_\theta(\cdot|s_t)$, $s_{t+1} \sim \tilde{P}(\cdot|s_t, a_t)$.
9: Compute $\hat{\delta}(x_t, \omega) = r(s_t, a_t, s_{t+1}) + \gamma \hat{V}_\omega(s_{t+1}) - \hat{V}_\omega(s_t)$.
10: Compute $g(x_t, \omega) = \hat{\delta}(x_t, \omega)\, \nabla_\omega \hat{V}_\omega(s_t)$.
11: Compute $v(x_t, \theta, \omega) = \hat{\delta}(x_t, \omega)\, \psi_\theta(s_t, a_t)$.
12: In the shared memory, perform update (9).
13: end for
Once it has obtained $x_t := (s_t, a_t, s_{t+1})$, each worker locally computes the policy gradient $v(x_t, \theta, \omega)$ and the TD(0) update $g(x_t, \omega)$, and then updates the parameters in the shared memory asynchronously by $\omega_{k+1} = \Pi_{R_\omega}\big(\omega_k + \beta_k g(x_{(k)}, \omega_{k-\tau_k})\big)$, (9a) $\theta_{k+1} = \theta_k + \alpha_k v(x_{(k)}, \theta_{k-\tau_k}, \omega_{k-\tau_k})$, (9b) where $\tau_k$ is the delay in the kth actor and critic updates. See the A3C with TD(0) in Algorithm 1. Sampling distributions. Since the transition kernels required by the actor and critic updates are different in the discounted MDP, it is difficult to design a two-timescale AC algorithm. To address this issue, we adopt the sampling method introduced in the seminal work [6, 31] and the recent work [15, 16], which inevitably introduces bias by sampling from the artificial transition $\tilde{P}$ instead of $P$. However, as we will mention later, this extra bias is small when the discount factor $\gamma$ is close to 1. Parallel sampling. The AC updates (6) and (8) use samples generated "on-the-fly" from the target policy $\pi_\theta$, which brings overhead. Compared with (6) and (8), the A3C-TD(0) update (9) allows parallel sampling from N workers, which is the key to linear speedup. We consider the case where only one worker can update parameters in the shared memory at the same time and the update cannot be interrupted. In practice, (9) can also be performed in a mini-batch fashion. Minor differences from A3C [3]. The A3C-TD(0) algorithm resembles the popular A3C method [3]. With $n_{\max}$ denoting the horizon of steps, for $n \in \{1, \dots, n_{\max}\}$, A3C iteratively uses n-step TD errors to compute actor and critic gradients. In A3C-TD(0), we use the TD(0) method, which is the 1-step TD method, for the actor and critic updates. When $n_{\max} = 1$, the A3C method reduces to A3C-TD(0). The n-step TD method is a hybrid of the TD(0) method and the Monte-Carlo method. The A3C method with Monte-Carlo sampling is essentially the delayed policy gradient method, and thus its convergence follows directly from that of delayed SGD. Therefore, we believe that the convergence of the A3C method based on TD(0) in this paper can be easily extended to the convergence of the A3C method with n-step TD. We here focus on A3C with TD(0) just for ease of exposition. 4 CONVERGENCE ANALYSIS OF TWO-TIMESCALE A3C-TD(0) In this section, we analyze the convergence of A3C-TD(0) in both i.i.d. and Markovian settings. Throughout this section, the notation $O(\cdot)$ contains constants that are independent of N and $K_0$. To analyze the performance of A3C-TD(0), we make the following assumptions. Assumption 1. There exists $K_0$ such that the delay at each iteration is bounded by $\tau_k \leq K_0$ for all k.
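To connect Algorithm 1 and the update (9) to code, here is a minimal sketch of the per-worker loop using Python threads and a dictionary as the shared memory. The toy chain environment, tabular features, restart-based sampling from P̃, and the stepsize constants are all illustrative assumptions rather than the configuration used in the paper's experiments, and CPython threads do not give true parallelism, so this only mimics the asynchrony and staleness.

```python
# Sketch: per-worker loop of Algorithm 1 with threads sharing (theta, omega).
import threading
import numpy as np

S, A, GAMMA, R_OMEGA = 6, 2, 0.9, 50.0
params = {"theta": np.zeros((S, A)), "omega": np.zeros(S), "k": 0}
lock = threading.Lock()                      # one worker updates shared memory at a time
phi = np.eye(S)                              # trivial features: tabular critic

def pi(theta, s, rng):
    p = np.exp(theta[s] - theta[s].max()); p /= p.sum()
    return rng.choice(A, p=p)

def score(theta, s, a):
    g = np.zeros_like(theta)
    p = np.exp(theta[s] - theta[s].max()); p /= p.sum()
    g[s] = -p; g[s, a] += 1.0
    return g

def env_step(s, a, rng):                     # toy chain; restart with prob (1 - gamma), cf. P_tilde
    if rng.random() > GAMMA:
        return 0, 0.0
    s_next = min(S - 1, max(0, s + (1 if a == 1 else -1)))
    return s_next, float(s_next == S - 1)

def worker(seed, num_updates=2000):
    rng = np.random.default_rng(seed)
    s = 0
    for _ in range(num_updates):
        theta, omega = params["theta"].copy(), params["omega"].copy()    # possibly stale read
        a = pi(theta, s, rng)
        s_next, r = env_step(s, a, rng)
        delta = r + GAMMA * phi[s_next] @ omega - phi[s] @ omega          # TD(0) error
        g = delta * phi[s]                                                # critic direction
        v = delta * score(theta, s, a)                                    # actor direction
        with lock:                                                        # update (9) in shared memory
            k = params["k"]
            alpha, beta = 0.5 / (1 + k) ** 0.6, 0.5 / (1 + k) ** 0.4      # sigma1 > sigma2
            new_omega = params["omega"] + beta * g
            norm = np.linalg.norm(new_omega)                              # projection onto radius R_OMEGA
            params["omega"] = new_omega if norm <= R_OMEGA else new_omega * (R_OMEGA / norm)
            params["theta"] = params["theta"] + alpha * v
            params["k"] = k + 1
        s = s_next

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print("total updates:", params["k"])
```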
Assumption 1 ensures the viability of analyzing the asynchronous update; see the same assumption in e.g., [5, 25]. In practice, the delay usually scales with the number of workers, that is, $K_0 = \Theta(N)$. With $\tilde{P}_{\pi_\theta}(s'|s) = \sum_{a \in A} \tilde{P}(s'|s, a)\, \pi_\theta(a|s)$, we define: $A_{\theta,\phi} := \mathbb{E}_{s \sim \mu_\theta,\, s' \sim \tilde{P}_{\pi_\theta}}\big[\phi(s)(\gamma \phi(s') - \phi(s))^\top\big]$, $b_{\theta,\phi} := \mathbb{E}_{s \sim \mu_\theta,\, a \sim \pi_\theta,\, s' \sim \tilde{P}}\big[r(s, a, s')\, \phi(s)\big]$. (10) It is known that for a given $\theta$, the stationary point $\omega^*_\theta$ of the TD(0) update in Algorithm 1 satisfies $A_{\theta,\phi}\, \omega^*_\theta + b_{\theta,\phi} = 0$. (11) Assumption 2. For all $s \in S$, the feature vector $\phi(s)$ is normalized so that $\|\phi(s)\|_2 \leq 1$. For all $\gamma \in [0, 1]$ and $\theta \in \mathbb{R}^d$, $A_{\theta,\phi}$ is negative definite and its maximum eigenvalue is upper bounded by $-\lambda$. Assumption 2 is common in analyzing TD with linear function approximation; see e.g., [17, 32, 33]. With this assumption, $A_{\theta,\phi}$ is invertible, so we have $\omega^*_\theta = -A_{\theta,\phi}^{-1} b_{\theta,\phi}$. Defining $R_\omega := r_{\max}/\lambda$, we then have $\|\omega^*_\theta\|_2 \leq R_\omega$, which justifies the projection introduced in Algorithm 1. In practice, the projection radius $R_\omega$ can be estimated online by the methods proposed in [32, Section 8.2] or [34, Lemma 1]. Assumption 3. For any $\theta, \theta' \in \mathbb{R}^d$, $s \in S$ and $a \in A$, there exist constants such that: i) $\|\psi_\theta(s, a)\|_2 \leq C_\psi$; ii) $\|\psi_\theta(s, a) - \psi_{\theta'}(s, a)\|_2 \leq L_\psi \|\theta - \theta'\|_2$; iii) $|\pi_\theta(a|s) - \pi_{\theta'}(a|s)| \leq L_\pi \|\theta - \theta'\|_2$. Assumption 3 is common in analyzing policy gradient-type algorithms and has also been made by e.g., [34, 35, 36]. This assumption holds for many policy parameterizations such as the tabular softmax policy [36], the Gaussian policy [37] and the Boltzmann policy [31]. Assumption 4. For any $\theta \in \mathbb{R}^d$, the Markov chain under policy $\pi_\theta$ and transition kernel $P(\cdot|s, a)$ or $\tilde{P}(\cdot|s, a)$ is irreducible and aperiodic. Then there exist constants $\kappa > 0$ and $\rho \in (0, 1)$ such that $\sup_{s \in S} d_{TV}\big(P(s_t \in \cdot \,|\, s_0 = s, \pi_\theta),\, \mu_\theta\big) \leq \kappa \rho^t$ for all t, (12) where $\mu_\theta$ is the stationary state distribution under $\pi_\theta$, and $s_t$ is the state of the Markov chain at time t. Assumption 4 states that the Markov chain mixes at a geometric rate; see also [32, 33]. The stationary distribution $\mu_\theta$ of an artificial Markov chain with transition $\tilde{P}$ is the same as the discounted visitation measure $d_\theta$ of the Markov chain with transition $P$ [6]. This means that if we sample according to $a_t \sim \pi_\theta(\cdot|s_t)$, $s_{t+1} \sim \tilde{P}(\cdot|s_t, a_t)$, the marginal distribution of $(s_t, a_t)$ will converge to the discounted state-action visitation measure $d'_\theta(s, a)$, which allows us to control the gradient bias. 4.1 LINEAR SPEEDUP RESULT WITH I.I.D. SAMPLING In this section, we consider A3C-TD(0) under the i.i.d. sampling setting, which is widely used for analyzing RL algorithms; see e.g., [13, 18, 38]. We first give the convergence result of the critic update as follows. Theorem 1 (Critic convergence). Suppose Assumptions 1–4 hold. Consider Algorithm 1 with i.i.d. sampling and $\hat{V}_\omega(s) = \phi(s)^\top \omega$. Select step sizes $\alpha_k = \frac{c_1}{(1+k)^{\sigma_1}}$, $\beta_k = \frac{c_2}{(1+k)^{\sigma_2}}$, where $0 < \sigma_2 < \sigma_1 < 1$ and $c_1, c_2$ are positive constants. Then it holds that $\frac{1}{K} \sum_{k=1}^{K} \mathbb{E}\|\omega_k - \omega^*_{\theta_k}\|_2^2 = O\big(\tfrac{1}{K^{1-\sigma_2}}\big) + O\big(\tfrac{1}{K^{2(\sigma_1-\sigma_2)}}\big) + O\big(\tfrac{K_0^2}{K^{2\sigma_2}}\big) + O\big(\tfrac{K_0}{K^{\sigma_1}}\big) + O\big(\tfrac{1}{K^{\sigma_2}}\big)$. (13) Different from async-SGD (e.g., [9]), the optimal critic parameter $\omega^*_\theta$ is constantly drifting as $\theta$ changes at each iteration. This necessitates setting $\sigma_1 > \sigma_2$ to make the policy change slower than the critic, which can be observed in the second term in (13).
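The quantities in (10)–(11) and the stepsize schedules of Theorem 1 can be illustrated numerically. The sketch below builds a small random stand-in for P̃_{πθ}, the expected rewards, and the features, solves for the TD(0) stationary point ω*_θ = −A_{θ,φ}^{-1} b_{θ,φ}, and defines a (c1, c2, σ1, σ2) schedule with example constants; all numbers are assumptions, not values from the paper.

```python
# Sketch: TD(0) stationary point of (10)-(11) for a small synthetic chain, plus the
# stepsize schedules of Theorem 1. All problem quantities are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
S, d, gamma = 8, 3, 0.9
P_tilde = rng.random((S, S)); P_tilde /= P_tilde.sum(axis=1, keepdims=True)   # stand-in for P_tilde_{pi_theta}(s'|s)
r_bar = rng.random(S)                                                         # stand-in for E[r | s]
phi = rng.normal(size=(S, d)) / np.sqrt(d)                                    # features, roughly ||phi(s)|| <= 1

# Stationary distribution mu_theta of P_tilde (left eigenvector for eigenvalue 1).
evals, evecs = np.linalg.eig(P_tilde.T)
mu = np.real(evecs[:, np.argmax(np.real(evals))]); mu = np.abs(mu) / np.abs(mu).sum()

A = phi.T @ np.diag(mu) @ (gamma * P_tilde @ phi - phi)     # A_{theta,phi} in Eq. (10)
b = phi.T @ (mu * r_bar)                                    # b_{theta,phi} in Eq. (10)
omega_star = -np.linalg.solve(A, b)                         # Eq. (11): A omega* + b = 0
print("residual ||A omega* + b|| =", np.linalg.norm(A @ omega_star + b))

# Stepsizes of Theorem 1: alpha_k = c1/(1+k)^sigma1, beta_k = c2/(1+k)^sigma2, sigma1 > sigma2.
c1, c2, sigma1, sigma2 = 0.5, 0.5, 0.6, 0.4
alpha = lambda k: c1 / (1 + k) ** sigma1
beta = lambda k: c2 / (1 + k) ** sigma2
print("alpha_10 =", alpha(10), " beta_10 =", beta(10))
```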
If $\sigma_1 > \sigma_2$, then the policy is static relative to the critic in an asymptotic sense. To introduce the convergence of the actor update, we first define the critic approximation error as $\epsilon_{\mathrm{app}} := \max_{\theta \in \mathbb{R}^d} \sqrt{\mathbb{E}_{s \sim \mu_\theta} |V_{\pi_\theta}(s) - \hat{V}_{\omega^*_\theta}(s)|^2} \leq \epsilon_{\mathrm{fa}} + \epsilon_{\mathrm{sp}}$, (14) where $\mu_\theta$ is the stationary distribution under $\pi_\theta$ and $\tilde{P}$. The error $\epsilon_{\mathrm{app}}$ captures the quality of the critic approximation under Algorithm 1. It can be further decomposed into the function approximation error $\epsilon_{\mathrm{fa}}$, which is common in analyzing AC with function approximation [14, 15, 17], and the sampling error $\epsilon_{\mathrm{sp}} = O(1-\gamma)$, which is unique to analyzing two-timescale AC for a discounted MDP. The error $\epsilon_{\mathrm{app}}$ is small when the value function approximation is accurate and the discount factor $\gamma$ is close to 1; see the detailed derivations in Lemma 7 of the supplementary material. Now we are ready to give the actor convergence. Theorem 2 (Actor convergence). Under the same assumptions as Theorem 1, select step sizes $\alpha_k = \frac{c_1}{(1+k)^{\sigma_1}}$, $\beta_k = \frac{c_2}{(1+k)^{\sigma_2}}$, where $0 < \sigma_2 < \sigma_1 < 1$ and $c_1, c_2$ are positive constants. Then it holds that $\frac{1}{K} \sum_{k=1}^{K} \mathbb{E}\|\nabla J(\theta_k)\|_2^2 = O\big(\tfrac{1}{K^{1-\sigma_1}}\big) + O\big(\tfrac{K_0}{K^{\sigma_1}}\big) + O\big(\tfrac{K_0^2}{K^{2\sigma_2}}\big) + O\big(\tfrac{1}{K}\sum_{k=1}^{K}\mathbb{E}\|\omega_k - \omega^*_{\theta_k}\|_2^2\big) + O(\epsilon_{\mathrm{app}})$. (15) Different from the analysis of async-SGD, in the actor update the stochastic gradient $v(x, \theta, \omega)$ is biased because of inexact value function approximation. The bias introduced by the critic optimality gap and the function approximation error correspond to the last two terms in (15). In Theorem 1 and Theorem 2, optimizing $\sigma_1$ and $\sigma_2$ gives the following convergence rate. Corollary 1 (Linear speedup). Given Theorem 1 and Theorem 2, select $\sigma_1 = \frac{3}{5}$ and $\sigma_2 = \frac{2}{5}$. If we further assume $K_0 = O(K^{\frac{1}{5}})$, then it holds that $\frac{1}{K} \sum_{k=1}^{K} \mathbb{E}\|\nabla J(\theta_k)\|_2^2 = O(K^{-\frac{2}{5}}) + O(\epsilon_{\mathrm{app}})$, (16) where $O(\cdot)$ contains constants that are independent of N and $K_0$. By setting the first term in (16) to $\epsilon$, we get that the total iteration complexity to reach $\epsilon$-accuracy is $O(\epsilon^{-2.5})$. Since each iteration only uses one sample (one transition), it also implies a total sample complexity of $O(\epsilon^{-2.5})$. Then the average sample complexity per worker is $O(\epsilon^{-2.5}/N)$, which indicates linear speedup in (1). Intuitively, the negative effect of parameter staleness introduced by parallel asynchrony vanishes asymptotically, which implies linear speedup in terms of convergence. 4.2 CONVERGENCE RESULT WITH MARKOVIAN SAMPLING Theorem 3 (Critic convergence). Suppose Assumptions 1–4 hold. Consider Algorithm 1 with Markovian sampling and $\hat{V}_\omega(s) = \phi(s)^\top \omega$. Select step sizes $\alpha_k = \frac{c_1}{(1+k)^{\sigma_1}}$, $\beta_k = \frac{c_2}{(1+k)^{\sigma_2}}$, where $0 < \sigma_2 < \sigma_1 < 1$ and $c_1, c_2$ are positive constants. Then it holds that $\frac{1}{K} \sum_{k=1}^{K} \mathbb{E}\|\omega_k - \omega^*_{\theta_k}\|_2^2 = O\big(\tfrac{1}{K^{1-\sigma_2}}\big) + O\big(\tfrac{1}{K^{2(\sigma_1-\sigma_2)}}\big) + O\big(\tfrac{K_0^2}{K^{2\sigma_2}}\big) + O\big(\tfrac{K_0^2 \log^2 K}{K^{\sigma_1}}\big) + O\big(\tfrac{K_0 \log K}{K^{\sigma_2}}\big)$. (17) The following theorem gives the convergence rate of the actor update in Algorithm 1. Theorem 4 (Actor convergence). Under the same assumptions as Theorem 3, select step sizes $\alpha_k = \frac{c_1}{(1+k)^{\sigma_1}}$, $\beta_k = \frac{c_2}{(1+k)^{\sigma_2}}$, where $0 < \sigma_2 < \sigma_1 < 1$ and $c_1, c_2$ are positive constants. Then it holds that $\frac{1}{K} \sum_{k=1}^{K} \mathbb{E}\|\nabla J(\theta_k)\|_2^2 = O\big(\tfrac{1}{K^{1-\sigma_1}}\big) + O\big(\tfrac{K_0^2 \log^2 K}{K^{\sigma_1}}\big) + O\big(\tfrac{K_0^2}{K^{2\sigma_2}}\big) + O\big(\tfrac{1}{K}\sum_{k=1}^{K}\mathbb{E}\|\omega_k - \omega^*_{\theta_k}\|_2^2\big) + O(\epsilon_{\mathrm{app}})$. (18) Assume $K_0 = O(K^{\frac{1}{5}})$. Given Theorem 3, select $\sigma_1 = \frac{3}{5}$ and $\sigma_2 = \frac{2}{5}$; then it holds that $\frac{1}{K} \sum_{k=1}^{K} \mathbb{E}\|\nabla J(\theta_k)\|_2^2 = \tilde{O}(K_0 K^{-\frac{2}{5}}) + O(\epsilon_{\mathrm{app}})$, (19) where $\tilde{O}(\cdot)$ hides constants and the logarithmic order of K.
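As a back-of-the-envelope reading of Corollary 1 and (19) against the speedup definition in (1): ignoring constants and log factors, the i.i.d. bound needs roughly $K \approx \epsilon^{-2.5}$ total samples regardless of N, so the per-worker load shrinks as 1/N, whereas the Markovian bound with $K_0 = \Theta(N)$ grows with N. The snippet below only prints these scalings; the numbers are not predictions of actual sample counts.

```python
# Sketch: comparing the i.i.d. scaling O(K^{-2/5}) of Corollary 1 with the Markovian
# scaling ~O(K0 * K^{-2/5}) of (19), taking K0 = N. Constants and log factors dropped.
def samples_iid(eps):
    return eps ** (-2.5)                       # K with K^{-2/5} = eps

def samples_markov(eps, num_workers):
    return (num_workers / eps) ** 2.5          # K with N * K^{-2/5} = eps

eps = 0.05
for N in (1, 2, 4, 8):
    per_worker_iid = samples_iid(eps) / N      # shrinks as 1/N: linear speedup
    per_worker_markov = samples_markov(eps, N) / N
    print(f"N={N}: iid per-worker ~ {per_worker_iid:.3g}, Markovian bound per-worker ~ {per_worker_markov:.3g}")
```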
With Markovian sampling, the stochastic gradients g(x, ω) and v(x, θ, ω) are biased, and the bias decreases as the Markov chain mixes. The mixing time corresponds to the logarithmic term logK in (17) and (18). Because of asynchrony, at a given iteration the workers have collected different numbers of samples, so their chains have mixed to different degrees. The worker with the slowest-mixing chain determines the rate of convergence; the product of K0 and logK in (17) and (18) appears because of this slowest-mixing chain. As the last term in (17) dominates the other terms asymptotically, the convergence rate degrades as the number of workers increases. While the theoretical linear speedup is difficult to establish in the Markovian setting, we will empirically demonstrate it in Section 5.2.
5 NUMERICAL EXPERIMENTS
We test the speedup performance of A3C-TD(0) on both synthetically generated and OpenAI Gym environments. The settings, parameters, and code are provided in the supplementary material.
5.1 A3C-TD(0) IN SYNTHETIC ENVIRONMENT
To verify the theoretical result, we tested A3C-TD(0) with linear value function approximation in a synthetic environment. We use the tabular softmax policy parameterization [36], which satisfies Assumption 3. The MDP has a state space of size |S| = 100 and a discrete action space of size |A| = 5. Each state feature has dimension 10. Elements of the transition matrix, the reward, and the state features are randomly sampled from a uniform distribution over (0, 1); a minimal sketch of this construction is given at the end of this section. We evaluate the convergence of the actor and the critic with the running average of the test reward and the critic optimality gap ‖ωk − ω∗_{θk}‖2, respectively.
Figures 1 and 2 show the training time and sample complexity of running A3C-TD(0) with i.i.d. sampling and Markovian sampling, respectively. The speedup plot is measured by the number of samples needed to achieve a target running average reward under different numbers of workers. All results are averaged over 10 Monte-Carlo runs. Figure 1 shows that the sample complexity of A3C-TD(0) stays about the same under i.i.d. sampling as the number of workers varies. It can also be observed from the speedup plot of Figure 1 that A3C-TD(0) achieves roughly linear speedup with i.i.d. sampling, which is consistent with Corollary 1. The speedup of A3C-TD(0) with Markovian sampling shown in Figure 2 is roughly linear when the number of workers is small.
5.2 A3C-TD(0) IN OPENAI GYM ENVIRONMENTS
We have also tested A3C-TD(0) with neural network parameterization in the classic control (Cartpole) environment and the Atari game (Seaquest and BeamRider) environments. In Figures 3–5, each curve is generated by averaging over 5 Monte-Carlo runs and is shown with a 95% confidence interval. Figures 3–5 show the speedup of A3C-TD(0) under different numbers of workers, where the average reward is computed by taking the running average of test rewards. The speedup and runtime speedup plots are measured by the number of samples and the training time, respectively, needed to achieve a target running average reward under different numbers of workers. Although not justified theoretically, Figures 3–5 suggest that the sample complexity speedup is roughly linear, while the runtime speedup degrades slightly as the number of workers increases; this is partially due to our hardware limit. A similar observation has also been made for async-SGD [9].
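For concreteness, here is a minimal sketch of how the randomly generated MDP of Section 5.1 can be constructed (|S| = 100, |A| = 5, 10-dimensional features, entries uniform on (0, 1)). The exact construction and seeds used in our experiments are given in the supplementary material; the row-normalization of the transition kernel and the feature-norm clipping below are assumptions added so that the sketch is a valid MDP and satisfies Assumption 2.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, d = 100, 5, 10                     # |S| = 100, |A| = 5, feature dimension 10

# Transition kernel: uniform(0, 1) entries, normalized so each P[s, a, :] is a distribution.
P = rng.uniform(size=(S, A, S))
P /= P.sum(axis=-1, keepdims=True)

# Rewards and state features drawn uniformly from (0, 1).
R = rng.uniform(size=(S, A))             # reward r(s, a)
Phi = rng.uniform(size=(S, d))           # state features phi(s)
Phi /= np.maximum(np.linalg.norm(Phi, axis=1, keepdims=True), 1.0)  # enforce ||phi(s)||_2 <= 1

# Tabular softmax policy pi_theta(a|s) proportional to exp(theta[s, a]), which satisfies Assumption 3.
theta = np.zeros((S, A))
def policy(s):
    logits = theta[s] - theta[s].max()   # stabilize the softmax
    p = np.exp(logits)
    return p / p.sum()
```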
6 CONCLUSIONS
This paper revisits the A3C algorithm with TD(0) for the critic update, termed A3C-TD(0). With linear value function approximation, the convergence of the A3C-TD(0) algorithm has been established under both i.i.d. and Markovian sampling settings. Under i.i.d. sampling, A3C-TD(0) achieves a linear speedup over the best-known sample complexity of two-timescale AC, theoretically justifying the benefit of parallelism and asynchrony for the first time. Under Markovian sampling, such a linear speedup can still be observed empirically in most classic benchmark tasks.
REFERENCES [ 1 ] T. P. Lillicrap , J. J . Hunt , A. Pritzel , N. Heess , T. Erez , Y. Tassa , D. Silver , and D. Wierstra , “ Continuous control with deep reinforcement learning , ” in Proc . of International Conference on Learning Representations , 2016 . [ 2 ] V. Mnih , K. Kavukcuoglu , D. Silver , A . A. Rusu , J. Veness , M. G. Bellemare , A. Graves , M. Riedmiller , A. K. Fidjeland , G. Ostrovski et al. , “ Human-level control through deep reinforcement learning , ” Nature , vol . 518 , no . 7540 , p. 529 , 2015 . [ 3 ] V. Mnih , A. P. Badia , M. Mirza , A. Graves , T. P. Lillicrap , T. Harley , D. Silver , and K. Kavukcuoglu , “ Asynchronous methods for deep reinforcement learning , ” in Proc . of International Conference on Machine Learning , 2016 . [ 4 ] A. Nair , P. Srinivasan , S. Blackwell , C. Alcicek , R. Fearon , A . De Maria , V. Panneershelvam , M. Suleyman , C. Beattie , S. Petersen et al. , “ Massively parallel methods for deep reinforcement learning , ” arXiv preprint:1507.04296 , 2015 . [ 5 ] M. Assran , J. Romoff , N. Ballas , J. Pineau , and M. Rabbat , “ Gossip-based actor-learner architectures for deep reinforcement learning , ” in Proc . of Advances in Neural Information Processing Systems , 2019 . [ 6 ] V. Konda , Actor-critic algorithms . PhD thesis , Department of Electrical Engineering and Computer Science , Massachusetts Institute of Technology , 2002 . [ 7 ] R. Sutton , D. McAllester , S. Singh , and Y. Mansour , “ Policy gradient methods for reinforcement learning with function approximation , ” in Proc . of Advances in Neural Information Processing Systems , 2000 . [ 8 ] R. Sutton , “ Learning to predict by the methods of temporal differences , ” Machine Learning , vol . 3 , pp . 9–44 , 1988 . [ 9 ] X. Lian , H. Zhang , C.-J . Hsieh , Y. Huang , and J. Liu , “ A comprehensive linear speedup analysis for asynchronous stochastic parallel optimization from zeroth-order to first-order , ” in Proc . of Advances in Neural Information Processing Systems , 2016 . [ 10 ] V. Borkar and V. Konda , “ The actor-critic algorithm as multi-time-scale stochastic approximation , ” Sadhana , vol . 22 , no . 4 , pp . 525–543 , 1997 . [ 11 ] S. Bhatnagar , R. Sutton , M. Ghavamzadeh , and M. Lee , “ Natural actor-critic algorithms , ” Automatica , vol . 45 , pp . 2471–2482 , 2009 . [ 12 ] Z. Yang , K. Zhang , M. Hong , and T. Başar , “ A finite sample analysis of the actor-critic algorithm , ” in Proc . of IEEE Conference on Decision and Control , 2018 , pp . 2759–2764 . [ 13 ] H. Kumar , A. Koppel , and A. Ribeiro , “ On the sample complexity of actor-critic method for reinforcement learning with function approximation , ” arXiv preprint:1910.08412 , 2019 . [ 14 ] S. Qiu , Z. Yang , J. Ye , and Z. Wang , “ On the finite-time convergence of actor-critic algorithm , ” in Optimization Foundations for Reinforcement Learning Workshop at Advances in Neural Information Processing Systems , 2019 .
[ 15 ] T. Xu , Z. Wang , and Y. Liang , “ Improving sample complexity bounds for ( natural ) actor-critic algorithms , ” in Proc . of Advances in Neural Information Processing Systems , 2020 . [ 16 ] —— , “ Non-asymptotic convergence analysis of two time-scale ( natural ) actor-critic algorithms , ” arXiv preprint:2005.03557 , 2020 . [ 17 ] Y. Wu , W. Zhang , P. Xu , and Q. Gu , “ A finite time analysis of two time-scale actor critic methods , ” in Proc . of Advances in Neural Information Processing Systems , 2020 . [ 18 ] M. Hong , H.-T. Wai , Z. Wang , and Z. Yang , “ A two-timescale framework for bilevel optimization : Complexity analysis and application to actor-critic , ” arXiv preprint:2007.05170 , 2020 . [ 19 ] M. Babaeizadeh , I. Frosio , S. Tyree , J. Clemons , and J. Kautz , “ Reinforcement learning through asynchronous advantage actor-critic on a gpu , ” in Proc . of International Conference on Learning Representations , 2017 . [ 20 ] A. Stooke and P. Abbeel , “ Accelerated methods for deep reinforcement learning , ” arXiv preprint:1803.02811 , 2019 . [ 21 ] L. Espeholt , H. Soyer , R. Munos , K. Simonyan , V. Mnih , T. Ward , Y. Doron , V. Firoiu , T. Harley , I. Dunning , S. Legg , and K. Kavukcuoglu , “ Impala : Scalable distributed deep-rl with importance weighted actor-learner architectures , ” arXiv preprint:1802.01561 , 2018 . [ 22 ] D. Bertsekas and J. Tsitsiklis , Parallel and distributed computation : numerical methods . Prentice-Hall , 1989 . [ 23 ] A. Agarwal and J. Duchi , “ Distributed delayed stochastic optimization , ” in Proc . of Advances in Neural Information Processing Systems , 2011 . [ 24 ] H. Feyzmahdavian , A. Aytekin , and M. Johansson , “ An asynchronous mini-batch algorithm for regularized stochastic optimization , ” arXiv preprint:1505.04824 , 2015 . [ 25 ] X. Lian , Y. Huang , Y. Li , and J. Liu , “ Asynchronous parallel stochastic gradient for nonconvex optimization. ” in Proc . of Advances in Neural Information Processing Systems , 2015 . [ 26 ] T. Sun , R. Hannah , and W. Yin , “ Asynchronous coordinate descent under more realistic assumptions , ” in Proc . of Advances in Neural Information Processing Systems , 2017 . [ 27 ] X. Lian , C. Zhang , H. Zhang , C.-J . Hsieh , W. Zhang , and J. Liu , “ Can decentralized algorithms outperform centralized algorithms ? a case study for decentralized parallel stochastic gradient descent , ” in Proc . of Advances in Neural Information Processing Systems , 2017 . [ 28 ] R. J. Williams , “ Simple statistical gradient-following algorithms for connectionist reinforcement learning , ” Machine Learning , vol . 8 , no . 3-4 , pp . 229–256 , May 1992 . [ 29 ] J. Baxter and P. L. Bartlett , “ Infinite-horizon policy-gradient estimation , ” J . Artificial Intelligence Res. , vol . 15 , pp . 319–350 , 2001 . [ 30 ] R. S. Sutton and A. G. Barto , Reinforcement learning : An introduction . MIT Press , 2018 . [ 31 ] V. Konda and V. Borkar , “ Actor-critic–type learning algorithms for markov decision processes , ” SIAM Journal on Control and Optimization , vol . 38 , no . 1 , pp . 94–123 , 1999 . [ 32 ] J. Bhandari , D. Russo , and R. Singal , “ A finite time analysis of temporal difference learning with linear function approximation. ” in Proc . of Conference on Learning Theory , 2018 . [ 33 ] T. Xu , Z. Wang , Y. Zhou , and Y. Liang , “ Reanalysis of variance reduced temporal difference learning , ” in Proc . of International Conference on Learning Representations , 2020 . [ 34 ] S. Zou , T. Xu , and Y. 
Liang , “ Finite-sample analysis for SARSA with linear function approximation , ” in Proc . of Advances in Neural Information Processing Systems , 2019 . [ 35 ] K. Zhang , A. Koppel , H. Zhu , and T. Başar , “ Global convergence of policy gradient methods to ( almost ) locally optimal policies , ” arXiv preprint:1906.08383 , 2019 . [ 36 ] A. Agarwal , S. M. Kakade , J. D. Lee , and G. Mahajan , “ Optimality and approximation with policy gradient methods in markov decision processes. ” in Proc . of Thirty Third Conference on Learning Theory , 2020 . [ 37 ] K. Doya , “ Reinforcement learning in continuous time and space , ” Neural Computation , vol . 12 , no . 1 , pp . 219–245 , 2000 . [ 38 ] R. Sutton , H. Maei , D. Precup , S. Bhatnagar , D. Silver , and E. Szepesvári , C.and Wiewiora , “ Fast gradient-descent methods for temporal-difference learning with linear function approximation , ” in Proc . of International Conference on Machine Learning , 2009 . [ 39 ] A. Y. Mitrophanov , “ Sensitivity and convergence of uniformly ergodic markov chains , ” Journal of Applied Probability , vol . 42 , no . 4 , pp . 1003–1014 , 2005 . [ 40 ] Dgriff , “ Pytorch implementation of a3c , ” https : //github.com/dgriff777/rl_a3c_pytorch , 2018 . Supplementary Material A PRELIMINARY LEMMAS A.1 GEOMETRIC MIXING The operation p⊗ q denotes the tensor product between two distributions p ( x ) and q ( y ) , i.e . ( p⊗ q ) ( x , y ) = p ( x ) · q ( y ) . Lemma 1 . Suppose Assumption 4 holds for a Markov chain generated by the rule at ∼ πθ ( ·|st ) , st+1 ∼ P̃ ( ·|st , at ) . For any θ ∈ Rd , we have sup s0∈S dTV ( P ( ( st , at , st+1 ) ∈ ·|s0 , πθ ) , µθ ⊗ πθ ⊗ P̃ ) ≤ κρt . ( 20 ) where µθ ( · ) is the stationary distribution with policy πθ and transition kernel P̃ ( ·|s , a ) . Proof . We start with sup s0∈S dTV ( P ( ( st , at , st+1 ) = ·|s0 , πθ ) , µθ ⊗ πθ ⊗ P̃ ) = sup s0∈S dTV ( P ( st = ·|s0 , πθ ) ⊗ πθ ⊗ P̃ , µθ ⊗ πθ ⊗ P̃ ) = sup s0∈S 1 2 ∫ s∈S ∑ a∈A ∫ s′∈S |P ( st = ds|s0 , πθ ) πθ ( a|s ) P̃ ( ds′|s , a ) − µθ ( ds ) πθ ( a|s ) P̃ ( ds′|s , a ) ∣∣∣ = sup s0∈S 1 2 ∫ s∈S |P ( st = ds|s0 , πθ ) − µθ ( ds ) | ∑ a∈A πθ ( a|s ) ∫ s′∈S P̃ ( ds′|s , a ) = sup s0∈S dTV ( P ( st ∈ ·|s0 , πθ ) , µθ ) ≤ κρt , which completes the proof . For the use in the later proof , given K > 0 , we first define mK as : mK : = min { m ∈ N+ |κρm−1 ≤ min { αk , βk } } , ( 21 ) where κ and ρ are constants defined in ( 4 ) . mK is the minimum number of samples needed for the Markov chain to approach the stationary distribution so that the bias incurred by the Markovian sampling is small enough . A.2 AUXILIARY MARKOV CHAIN The auxiliary Markov chain is a virtual Markov chain with no policy drifting — a technique developed in [ 34 ] to analyze stochastic approximation algorithms in non-stationary settings . Lemma 2 . Under Assumption 1 and Assumption 3 , consider the update ( 9 ) in Algorithm 1 with Markovian sampling . For a given number of samples m , consider the Markov chain of the worker that contributes to the kth update : st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−1−−−−−−→ at−m+1 · · · st−1 θk−d1−−−−→ at−1 P̃−→ st θk−d0−−−−→ at P̃−→ st+1 , where ( st , at , st+1 ) = ( s ( k ) , a ( k ) , s′ ( k ) ) , and { dj } m j=0 is some increasing sequence with d0 : = τk . Given ( st−m , at−m , st−m+1 ) and θk−dm , we construct its auxiliary Markov chain by repeatedly applying πθk−dm : st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−−−−→ ãt−m+1 · · · s̃t−1 θk−dm−−−−→ ãt−1 P̃−→ s̃t θk−dm−−−−→ ãt P̃−→ s̃t+1 . 
Define xt : = ( st , at , st+1 ) , then we have : dTV ( P ( xt ∈ ·|θk−dm , st−m+1 ) , P ( x̃t ∈ ·|θk−dm , st−m+1 ) ) ≤ 1 2 |A|Lπ dm∑ i=τk E [ ‖θk−i − θk−dm‖2|θk−dm , st−m+1 ] . ( 22 ) Proof . Throughout the lemma , all expectations and probabilities are conditioned on θk−dm and st−m+1 . We omit this condition for convenience . First we have dTV ( P ( st+1 ∈ · ) , P ( s̃t+1 ∈ · ) ) = 1 2 ∫ s′∈S |P ( st+1 = ds′ ) − P ( s̃t+1 = ds′ ) | = 1 2 ∫ s′∈S ∣∣∣∣∣ ∫ s∈S ∑ a∈A P ( st = ds , at = a , st+1 = ds′ ) − P ( s̃t = ds , ãt = a , s̃t+1 = ds′ ) ∣∣∣∣∣ ≤ 1 2 ∫ s′∈S ∫ s∈S ∑ a∈A |P ( st = ds , at = a , st+1 = ds′ ) − P ( s̃t = ds , ãt = a , s̃t+1 = ds′ ) | = 1 2 ∫ s∈S ∑ a∈A ∫ s′∈S |P ( st = ds , at = a , st+1 = ds′ ) − P ( s̃t = ds , ãt = a , s̃t+1 = ds′ ) | = dTV ( P ( xt ∈ · ) , P ( x̃t ∈ · ) ) , ( 23 ) where the last second equality is due to Tonelli ’ s theorem . Next we have dTV ( P ( xt ∈ · ) , P ( x̃t ∈ · ) ) = 1 2 ∫ s∈S ∑ a∈A ∫ s′∈S |P ( st = ds , at = a , st+1 = ds′ ) − P ( s̃t = ds , ãt = a , s̃t+1 = ds′ ) | = 1 2 ∫ s∈S ∑ a∈A |P ( st = ds , at = a ) − P ( s̃t = ds , ãt = a ) | ∫ s′∈S P̃ ( st+1 = ds′|st = ds , at = a ) = 1 2 ∫ s∈S ∑ a∈A |P ( st = ds , at = a ) − P ( s̃t = ds , ãt = a ) | = dTV ( P ( ( st , at ) ∈ · ) , P ( ( s̃t , ãt ) ∈ · ) ) . ( 24 ) Due to the fact that θk−τk is dependent on st , we need to write P ( st , at ) as P ( st , at ) = ∫ θk−τk∈Rd P ( st , θk−τk , at ) = ∫ θ∈Rd P ( st ) P ( θk−τk = dθ|st ) πθk−τk ( at|st ) = P ( st ) ∫ θ∈Rd P ( θk−τk = dθ|st ) πθk−τk ( at|st ) = P ( st ) E [ πθk−τk ( at|st ) |st ] . Then we have dTV ( P ( ( st , at ) ∈ · ) , P ( ( s̃t , ãt ) ∈ · ) ) = 1 2 ∫ s∈S ∑ a∈A ∣∣∣P ( st = ds ) E [ πθk−τk ( at = a|st = ds ) |st = ds ] − P ( s̃t = ds ) πθk−dm ( ãt = a|s̃t = ds ) ∣∣∣ ≤ 1 2 ∫ s∈S ∑ a∈A ∣∣∣P ( st = ds ) E [ πθk−τk ( at = a|st = ds ) |st = ds ] − P ( st = ds ) πθk−dm ( at = a|st = ds ) ∣∣∣ + 1 2 ∫ s∈S ∑ a∈A ∣∣P ( st = ds ) πθk−dm ( ãt = a|s̃t = ds ) − P ( s̃t = ds ) πθk−dm ( ãt = a|s̃t = ds ) ∣∣ = 1 2 ∫ s∈S P ( st = ds ) ∑ a∈A ∣∣∣E [ πθk−τk ( at = a|st = ds ) |st = ds ] − πθk−dm ( at = a|st = ds ) ∣∣∣ + 1 2 ∫ s∈S |P ( st = ds ) − P ( s̃t = ds ) | . ( 25 ) Using Jensen ’ s inequality , we have dTV ( P ( ( st , at ) ∈ · ) , P ( ( s̃t , ãt ) ∈ · ) ) ≤ 1 2 ∫ s∈S P ( st = ds ) ∑ a∈A E [ ∣∣∣πθk−τk ( at = a|st = ds ) − πθk−dm ( at = a|st = ds ) ∣∣∣∣∣∣ st = ds ] + 1 2 ∫ s∈S |P ( st = ds ) − P ( s̃t = ds ) | ≤ 1 2 ∫ s∈S P ( st = ds ) ∑ a∈A E [ ‖θk−τk − θk−dm‖2| st = ds ] + 1 2 ∫ s∈S |P ( st = ds ) − P ( s̃t = ds ) | = 1 2 |A|Lπ E ‖θk−τk − θk−dm‖2 + dTV ( P ( st ∈ · ) , P ( s̃t ∈ · ) ) ( 26 ) where the last inequality follows Assumption 3 . Now we start to prove ( 22 ) . dTV ( P ( xt ∈ · ) , P ( x̃t ∈ · ) ) ( 24 ) = dTV ( P ( ( st , at ) ∈ · ) , P ( ( s̃t , ãt ) ∈ · ) ) ( 25 ) ≤ dTV ( P ( st ∈ · ) , P ( s̃t ∈ · ) ) + 1 2 |A|Lπ E ‖θk−τk − θk−dm‖2 ( 23 ) ≤ dTV ( P ( xt−1 ∈ · ) , P ( x̃t−1 ∈ · ) ) + 1 2 |A|Lπ E ‖θk−τk − θk−dm‖2 . Now we have dTV ( P ( xt ∈ · ) , P ( x̃t ∈ · ) ) ≤ dTV ( P ( xt−1 ∈ · ) , P ( x̃t−1 ∈ · ) ) + 1 2 |A|Lπ E ‖θk−τk − θk−dm‖2 . ( 27 ) Since dTV ( P ( xt−m ∈ · ) , P ( xt−m ∈ · ) ) = 0 , recursively applying ( 27 ) for { t− 1 , ... , t−m } gives dTV ( P ( xt ∈ · ) , P ( x̃t ∈ · ) ) ≤ 1 2 |A|Lπ m∑ j=0 E ‖θk−dj − θk−dm‖2 ≤ 1 2 |A|Lπ dm∑ i=τk E ‖θk−i − θk−dm‖2 , which completes the proof . A.3 LIPSCHITZ CONTINUITY OF VALUE FUNCTION Lemma 3 . Suppose Assumption 3 holds . 
For any θ1 , θ2 ∈ Rd and s ∈ S , we have ‖∇Vπθ1 ( s ) ‖2 ≤ LV , ( 28a ) |Vπθ1 ( s ) − Vπθ2 ( s ) | ≤ LV ‖θ1 − θ2‖2 , ( 28b ) where the constant is LV : = Cψrmax/ ( 1− γ ) with Cψ defined as in Assumption 3 . Proof . First we have Qπ ( s , a ) = E [ ∞∑ t=0 γtr ( st , at , st+1 ) |s0 = s , a0 = a ] ≤ ∞∑ t=0 γtrmax = rmax 1− γ . By the policy gradient theorem [ 7 ] , we have ‖∇Vπθ1 ( s ) ‖2 = ∥∥E [ Qπθ1 ( s , a ) ψθ1 ( s , a ) ] ∥∥2 ≤ E ∥∥Qπθ1 ( s , a ) ψθ1 ( s , a ) ∥∥2 ≤ E [ |Qπθ1 ( s , a ) |‖ψθ1 ( s , a ) ‖2 ] ≤ rmax 1− γ Cψ , where the first inequality is due to Jensen ’ s inequality , and the last inequality follows Assumption 3 and the fact that Qπ ( s , a ) ≤ rmax1−γ . By the mean value theorem , we immediately have |Vπθ1 ( s ) − Vπθ2 ( s ) | ≤ sup θ1∈Rd ∥∥∇Vπθ1 ( s ) ∥∥2 ‖θ1 − θ2‖2 = LV ‖θ1 − θ2‖2 , which completes the proof . A.4 LIPSCHITZ CONTINUITY OF POLICY GRADIENT We give a proposition regarding the LJ -Lipschitz of the policy gradient under proper assumptions , which has been shown by [ 35 ] . Proposition 1 . Suppose Assumption 3 and 4 hold . For any θ , θ′ ∈ Rd , we have ‖∇J ( θ ) − ∇J ( θ′ ) ‖2 ≤ LJ‖θ − θ′‖2 , where LJ is a positive constant . A.5 LIPSCHITZ CONTINUITY OF OPTIMAL CRITIC PARAMETER We provide a justification for Lipschitz continuity of ω∗θ in the next proposition . Proposition 2 . Suppose Assumption 3 and 4 hold . For any θ1 , θ2 ∈ Rd , we have ‖ω∗θ1 − ω ∗ θ2‖2 ≤ Lω‖θ1 − θ2‖2 , where Lω : = 2rmax|A|Lπ ( λ−1 + λ−2 ( 1 + γ ) ) ( 1 + logρ κ−1 + ( 1− ρ ) −1 ) . Proof . We use A1 , A2 , b1 and b2 as shorthand notations of Aπθ1 , Aπθ2 , bπθ1 and bπθ2 respectively . By Assumption 2 , Aθ , φ is invertible for any θ ∈ Rd , so we can write ω∗θ = −A −1 θ , φbθ , φ . Then we have ‖ω∗1 − ω∗2‖2 = ‖ −A−11 b1 +A −1 2 b2‖2 = ‖ −A−11 b1 −A −1 1 b2 +A −1 1 b2 +A −1 2 b2‖2 = ‖ −A−11 ( b1 − b2 ) − ( A −1 1 −A −1 2 ) b2‖2 ≤ ‖A−11 ( b1 − b2 ) ‖2 + ‖ ( A −1 1 −A −1 2 ) b2‖2 ≤ ‖A−11 ‖2‖b1 − b2‖2 + ‖A −1 1 −A −1 2 ‖2‖b2‖2 = ‖A−11 ‖2‖b1 − b2‖2 + ‖A −1 1 ( A2 −A1 ) A −1 2 ‖2‖b2‖2 ≤ ‖A−11 ‖2‖b1 − b2‖2 + ‖A −1 1 ‖2‖A −1 2 ‖2‖b2‖2‖ ( A2 −A1 ) ‖2 ≤ λ−1 ‖b1 − b2‖2 + λ −2rmax ‖A1 −A2‖2 , ( 29 ) where the last inequality follows Assumption 2 , and the fact that ‖b2‖2 = ‖E [ r ( s , a , s′ ) φ ( s ) ] ‖2 ≤ E ‖r ( s , a , s ′ ) φ ( s ) ‖2 ≤ E [ |r ( s , a , s ′ ) |‖φ ( s ) ‖2 ] ≤ rmax . Denote ( s1 , a1 , s′1 ) and ( s2 , a2 , s′2 ) as samples drawn with θ1 and θ2 respectively , i.e . s1 ∼ µθ1 , a1 ∼ πθ1 , s′1 ∼ P̃ and s2 ∼ µθ2 , a2 ∼ πθ2 , s′2 ∼ P̃ . Then we have ‖b1 − b2‖2 = ∥∥E [ r ( s1 , a1 , s′1 ) φ ( s1 ) ] − E [ r ( s2 , a2 , s′2 ) φ ( s2 ) ] ∥∥ 2 ≤ sup s , a , s′ ‖r ( s , a , s′ ) φ ( s ) ‖2‖P ( ( s1 , a1 , s′1 ) ∈ · ) − P ( ( s2 , a2 , s′2 ) ∈ · ) ‖TV ≤ rmax‖P ( ( s1 , a1 , s′1 ) ∈ · ) − P ( ( s2 , a2 , s′2 ) ∈ · ) ‖TV = 2rmaxdTV ( µθ1 ⊗ πθ1 ⊗ P̃ , µθ2 ⊗ πθ2 ⊗ P̃ ) ≤ 2rmax|A|Lπ ( 1 + logρ κ−1 + ( 1− ρ ) −1 ) ‖θ1 − θ2‖2 , ( 30 ) where the first inequality follows the definition of total variation ( TV ) norm , and the last inequality follows Lemma A.1 . in [ 17 ] . Similarly we have : ‖A1 −A2‖2 ≤ 2 ( 1 + γ ) dTV ( µθ1 ⊗ πθ1 , µθ2 ⊗ πθ2 ) = ( 1 + γ ) |A|Lπ ( 1 + logρ κ−1 + ( 1− ρ ) −1 ) ‖θ1 − θ2‖2 . ( 31 ) Substituting ( 30 ) and ( 31 ) into ( 29 ) completes the proof . B PROOF OF MAIN THEOREMS B.1 PROOF OF THEOREM 1 For brevity , we first define the following notations : x : = ( s , a , s′ ) , δ̂ ( x , ω ) : = r ( s , a , s′ ) + γφ ( s′ ) > ω − φ ( s ) > ω , g ( x , ω ) : = δ̂ ( x , ω ) φ ( s ) , g ( θ , ω ) : = E s∼µθ , a∼πθ , s′∼P̃ [ g ( x , ω ) ] . 
We also define constant Cδ : = rmax + ( 1 + γ ) max { rmax1−γ , Rω } , and we immediately have ‖g ( x , ω ) ‖2 ≤ |r ( x ) + γφ ( s′ ) > ω − φ ( s ) > ω| ≤ rmax + ( 1 + γ ) Rω ≤ Cδ ( 32 ) and likewise , we have ‖g ( x , ω ) ‖2 ≤ Cδ . The critic update in Algorithm 1 can be written compactly as : ωk+1 = ΠRω ( ωk + βkg ( x ( k ) , ωk−τk ) ) , ( 33 ) where τk is the delay of the parameters used in evaluating the kth stochastic gradient , and x ( k ) : = ( s ( k ) , a ( k ) , s ′ ( k ) ) is the sample used to evaluate the stochastic gradient at kth update . Proof . Using ω∗k as shorthand notation of ω ∗ θk , we start with the optimality gap ‖ωk+1 − ω∗k+1‖22 = ‖ΠRω ( ωk + βkg ( x ( k ) , ωk−τk ) ) − ω∗k+1‖22 ≤ ‖ωk + βkg ( x ( k ) , ωk−τk ) − ω∗k+1‖22 = ‖ωk − ω∗k‖ 2 2 + 2βk 〈 ωk − ω∗k , g ( x ( k ) , ωk−τk ) 〉 + 2 〈 ωk − ω∗k , ω∗k − ω∗k+1 〉 + ∥∥ω∗k − ω∗k+1 + βkg ( x ( k ) , ωk−τk ) ∥∥22 = ‖ωk − ω∗k‖ 2 2 + 2βk 〈 ωk − ω∗k , g ( x ( k ) , ωk−τk ) − g ( x ( k ) , ωk ) 〉 + 2βk 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 + 2βk 〈ωk − ω∗k , g ( θk , ωk ) 〉+ 2 〈 ωk − ω∗k , ω∗k − ω∗k+1 〉 + ∥∥ω∗k − ω∗k+1 + βkg ( x ( k ) , ωk−τk ) ∥∥22 ≤ ‖ωk − ω∗k‖ 2 2 + 2βk 〈 ωk − ω∗k , g ( x ( k ) , ωk−τk ) − g ( x ( k ) , ωk ) 〉 + 2βk 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 + 2βk 〈ωk − ω∗k , g ( θk , ωk ) 〉+ 2 〈 ωk − ω∗k , ω∗k − ω∗k+1 〉 + 2 ∥∥ω∗k − ω∗k+1∥∥22 + 2C2δβ2k . ( 34 ) We first bound 〈ωk − ω∗k , g ( θk , ωk ) 〉 in ( 34 ) as 〈ωk − ω∗k , g ( θk , ωk ) 〉 = 〈ωk − ω∗k , g ( θk , ωk ) − g ( θk , ω∗k ) 〉 = 〈 ωk − ω∗k , E [ ( γφ ( s′ ) − φ ( s ) ) > ( ωk − ω∗k ) φ ( s ) ] 〉 = 〈 ωk − ω∗k , E [ φ ( s ) ( γφ ( s′ ) − φ ( s ) ) > ] ( ωk − ω∗k ) 〉 = 〈 ωk − ω∗k , Aπθk ( ωk − ω ∗ k ) 〉 ≤ −λ‖ωk − ω∗k‖22 , ( 35 ) where the first equality is due to g ( θ , ω∗θ ) = Aθ , φω ∗ θ + b = 0 , and the last inequality follows Assumption 2 . Substituting ( 35 ) into ( 34 ) , then taking expectation on both sides of ( 34 ) yield E ‖ωk+1 − ω∗k+1‖22 ≤ ( 1− 2λβk ) E ‖ωk − ω∗k‖ 2 2 + 2βk E 〈 ωk − ω∗k , g ( x ( k ) , ωk−τk ) − g ( x ( k ) , ωk ) 〉 + 2βk E 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 + 2E 〈 ωk − ω∗k , ω∗k − ω∗k+1 〉 + 2E ∥∥ω∗k − ω∗k+1∥∥22 + 2C2δβ2k . ( 36 ) We then bound the term E 〈 ωk − ω∗k , g ( x ( k ) , ωk−τk ) − g ( x ( k ) , ωk ) 〉 in ( 36 ) as E 〈 ωk − ω∗k , g ( x ( k ) , ωk−τk ) − g ( x ( k ) , ωk ) 〉 = E 〈 ωk − ω∗k , ( γφ ( s′ ( k ) ) − φ ( s ( k ) ) ) > ( ωk−τk − ωk ) φ ( s ( k ) ) 〉 ≤ ( 1 + γ ) E [ ‖ωk − ω∗k‖2‖ωk−τk − ωk‖2 ] ≤ ( 1 + γ ) E ‖ωk − ω∗k‖2 ∥∥∥∥∥ k−1∑ i=k−τk ( ωi+1 − ωi ) ∥∥∥∥∥ 2 ≤ ( 1 + γ ) E [ ‖ωk − ω∗k‖2 k−1∑ i=k−τk βi‖g ( xi , ωi−τi ) ‖2 ] ≤ ( 1 + γ ) E [ ‖ωk − ω∗k‖2 k−1∑ i=k−τk βk−K0‖g ( xi , ωi−τi ) ‖2 ] ≤ Cδ ( 1 + γ ) K0βk−K0 E ‖ωk − ω∗k‖2 , ( 37 ) where the second last inequality is due to the monotonicity of step size , and the last inequality follows the definition of Cδ in ( 32 ) . Next we jointly bound the fourth and fifth term in ( 36 ) as 2E 〈 ωk − ω∗k , ω∗k − ω∗k+1 〉 + 2E ∥∥ω∗k − ω∗k+1∥∥22 ≤ 2E [ ‖ωk − ω∗k‖2 ∥∥ω∗k − ω∗k+1∥∥2 ] + 2E∥∥ω∗k − ω∗k+1∥∥22 ≤ 2Lω E [ ‖ωk − ω∗k‖2 ‖θk − θk+1‖2 ] + 2L 2 ω E ‖θk − θk+1‖ 2 2 = 2Lωαk E [ ‖ωk − ω∗k‖2 ∥∥∥δ̂ ( x ( k ) , ωk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) ∥∥∥2 ] + 2L2ωα2k E∥∥∥δ̂ ( x ( k ) , ωk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) ∥∥∥22 ≤ 2LωCpαk E ‖ωk − ω∗k‖2 + 2L 2 ωC 2 pα 2 k , ( 38 ) where constant Cp : = CδCψ . The second inequality is due to the Lω-Lipschitz of ω∗θ shown in Proposition 2 , and the last inequality follows the fact that ‖δ̂ ( x ( k ) , ωk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) ‖2 ≤ CδCψ = Cp . 
( 39 ) Substituting ( 37 ) and ( 38 ) into ( 36 ) yields E ‖ωk+1 − ω∗k+1‖22 ≤ ( 1− 2λβk ) E ‖ωk − ω∗k‖ 2 2 + 2βk ( C1 αk βk + C2K0βk−K0 ) E ‖ωk − ω∗k‖2 + 2βk E 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 + Cqβ 2 k , ( 40 ) where C1 : = LωCp , C2 : = Cδ ( 1 + γ ) and Cq : = 2C2δ + 2L 2 ωC 2 p max ( k ) α2k β2k = 2C2δ + 2L 2 ωC 2 p c21 c22 . For brevity , we use x ∼ θ to denote s ∼ µθ , a ∼ πθ and s′ ∼ P̃ in this proof . Consider the third term in ( 40 ) conditioned on θk , ωk , θk−τk . We bound it as E [ 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 |θk , ωk , θk−τk ] = 〈 ωk − ω∗k , E x ( k ) ∼θk−τk [ g ( x ( k ) , ωk ) |ωk ] − g ( θk , ωk ) 〉 = 〈 ωk − ω∗k , g ( θk−τk , ωk ) − g ( θk , ωk ) 〉 ≤ ‖ωk − ω∗k‖2‖g ( θk−τk , ωk ) − g ( θk , ωk ) ‖2 ≤ 2Rω ∥∥∥∥ Ex∼θk−τk [ g ( x , ωk ) ] − Ex∼θk [ g ( x , ωk ) ] ∥∥∥∥ 2 ≤ 2Rω sup x ‖g ( x , ωk ) ‖2 ∥∥∥µθk−τk ⊗ πθk−τk ⊗ P̃ − µθk ⊗ πθk ⊗ P̃∥∥∥TV ≤ 4RωCδdTV ( µθk−τk ⊗ πθk−τk ⊗ P̃ , µθk ⊗ πθk ⊗ P̃ ) , ( 41 ) where second last inequality follows the definition of TV norm and the last inequality uses the definition of Cδ in ( 32 ) . Define constant C3 : = 2RωCδ|A|Lπ ( 1 + logρ κ−1 + ( 1− ρ ) −1 ) . Then by following the third item in Lemma A.1 shown by [ 17 ] , we can write ( 41 ) as E [ 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 |θk , ωk , θk−τk ] ≤ 4RωCδdTV ( µθk−τk ⊗ πθk−τk ⊗ P̃ , µθk ⊗ πθk ⊗ P̃ ) ≤ C3 ‖θk−τk − θk‖2 ≤ C3 k−1∑ i=k−τk αi‖g ( xi , ωi−τi ) ‖2 ≤ C3CδK0αk−K0 , ( 42 ) where we used the monotonicity of αk and Assumption 1 . Taking total expectation on both sides of ( 42 ) and substituting it into ( 40 ) yield E ‖ωk+1 − ω∗k+1‖22 ≤ ( 1− 2λβk ) E ‖ωk − ω∗k‖ 2 2 + 2βk ( C1 αk βk + C2K0βk−K0 ) E ‖ωk − ω∗k‖2 + 2C3CδK0βkαk−K0 + Cqβ 2 k. ( 43 ) Taking summation on both sides of ( 43 ) and rearranging yield 2λ K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 ≤ K∑ k=K0 1 βk ( E ‖ωk − ω∗k‖ 2 2 − E ∥∥ωk+1 − ω∗k+1∥∥22 ) I1 +Cq K∑ k=K0 βk I2 + 2 K∑ k=K0 2C3CδK0αk−K0 I3 +2 K∑ k=K0 ( C1 αk βk + C2K0βk−K0 ) E ‖ωk − ω∗k‖2 I4 . ( 44 ) We bound I1 as I1 = K∑ k=MK 1 βk ( E ‖ωk − ω∗k‖ 2 2 − E ∥∥ωk+1 − ω∗k+1∥∥22 ) = K∑ k=MK ( 1 βk − 1 βk−1 ) E ‖ωk − ω∗k‖ 2 2 + 1 βMK−1 E ∥∥ωMK − ω∗MK∥∥22 − 1βk E ∥∥ωK+1 − ω∗K+1∥∥22 ≤ K∑ k=MK ( 1 βk − 1 βk−1 ) E ‖ωk − ω∗k‖ 2 2 + 1 βMK−1 E ∥∥ωMK − ω∗MK∥∥22 ≤ 4R2ω ( K∑ k=MK ( 1 βk − 1 βk−1 ) + 1 βMK−1 ) = 4R2ω βk = O ( Kσ2 ) , ( 45 ) where the last inequality is due to the fact that ‖ωk − ω∗θ‖2 ≤ ‖ωk‖2 + ‖ω∗θ‖2 ≤ 2Rω . We bound I2 as K∑ k=MK βk = K∑ k=MK c2 ( 1 + k ) σ2 = O ( K1−σ2 ) ( 46 ) where the inequality follows from the integration rule ∑b k=a k −σ ≤ b 1−σ 1−σ . We bound I3 as I3 = K∑ k=K0 2C3CδK0αk−K0 = 2C3Cδc1K0 K−K0∑ k=0 ( 1 + k ) −σ1 = O ( K0K1−σ1 ) . ( 47 ) For the last term I4 , we have I4 = K∑ k=K0 ( C1 αk βk + C2K0βk−K0 ) E ‖ωk − ω∗k‖2 ≤ √√√√ K∑ k=K0 ( C1 αk βk + C2K0βk−K0 ) 2√√√√ K∑ k=K0 ( E ‖ωk − ω∗k‖2 ) 2 ≤ √√√√ K∑ k=K0 ( C1 αk βk + C2K0βk−K0 ) 2√√√√ K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 , ( 48 ) where the first inequality follows Cauchy–Schwartz inequality , and the second inequality follows Jensen ’ s inequality . In ( 48 ) , we have K∑ k=K0 ( C1 αk βk + C2K0βk−K0 ) 2 ≤ K−K0∑ k=0 ( C1 αk βk + C2K0βk ) 2 = C21 K−K0∑ k=0 α2k β2k + 2C1C2K0 K−K0∑ k=0 αk + C 2 2K 2 0 K−K0∑ k=0 β2k = O ( K2 ( σ2−σ1 ) +1 ) +O ( K0K −σ1+1 ) +O ( K20K 1−2σ2 ) ( 49 ) where the first inequality is due to the fact that αkβk and βk−K0 are monotonically decreasing . Substituting ( 49 ) into ( 48 ) gives I4 ≤ √ O ( K2 ( σ2−σ1 ) +1 ) +O ( K0K−σ1+1 ) +O ( K20K1−2σ2 ) √√√√ K∑ k=MK E ‖ωk − ω∗k‖ 2 2 . 
( 50 ) Substituting ( 45 ) , ( 46 ) , ( 47 ) and ( 50 ) into ( 44 ) , and dividing both sides of ( 44 ) by K −K0 + 1 give 2λ 1 K −K0 + 1 K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 ≤ √ O ( K2 ( σ2−σ1 ) +1 ) +O ( K0K−σ1+1 ) +O ( K20K1−2σ2 ) K −K0 + 1 √√√√ K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 +O ( 1 K1−σ2 ) +O ( 1 Kσ2 ) +O ( K0 Kσ1 ) . ( 51 ) We define the following functions : T1 ( K ) : = 1 K −K0 + 1 K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 , T2 ( K ) : = O ( 1 K1−σ2 ) +O ( 1 Kσ2 ) +O ( K0 Kσ1 ) , T3 ( K ) : = O ( K2 ( σ2−σ1 ) +1 ) +O ( K0K −σ1+1 ) +O ( K20K 1−2σ2 ) K −K0 + 1 . Then ( 51 ) can be written as : T1 ( K ) − 1 2λ √ T1 ( K ) √ T3 ( K ) ≤ 1 2λ T2 ( K ) . Solving this quadratic inequality in terms of T1 ( K ) , we obtain T1 ( K ) ≤ 1 λ T2 ( K ) + 1 2λ2 T3 ( K ) , ( 52 ) which implies 1 K −K0 + 1 K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 = O ( 1 K1−σ2 ) +O ( 1 K2 ( σ1−σ2 ) ) +O ( K20 K2σ2 ) +O ( K0 Kσ1 ) +O ( 1 Kσ2 ) . We further have 1 K K∑ k=1 E ‖ωk − ω∗k‖22 ≤ 1 K ( K0−1∑ k=1 4R2ω + K∑ k=K0 E ‖ωk − ω∗k‖22 ) = K0 − 1 K 4R2ω + K −K0 + 1 K 1 K −K0 + 1 K∑ k=K0 E ‖ωk − ω∗k‖22 = O ( K0 K ) +O ( 1 K −K0 + 1 K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 ) = O ( 1 K −K0 + 1 K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 ) ( 53 ) which completes the proof . B.2 PROOF OF THEOREM 2 We first clarify the notations : x : = ( s , a , s′ ) , δ̂ ( x , ω ) : = r ( s , a , s′ ) + γφ ( s′ ) > ω − φ ( s ) > ω , δ ( x , θ ) : = r ( s , a , s′ ) + γVπθ ( s ′ ) − Vπθ ( s ) . The update in Algorithm 1 can be written compactly as : θk+1 = θk + αk δ̂ ( x ( k ) , ωk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) . ( 54 ) For brevity , we use ω∗k as shorthand notation of ω ∗ θk . Then we are ready to give the proof . Proof . From LJ -Lipschitz of policy gradient shown in Proposition 1 , we have : J ( θk+1 ) ≥ J ( θk ) + 〈∇J ( θk ) , θk+1 − θk〉 − LJ 2 ‖θk+1 − θk‖22 = J ( θk ) + αk 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk−τk ) − δ̂ ( x ( k ) , ω∗k ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 + αk 〈 ∇J ( θk ) , δ̂ ( x ( k ) , ω∗k ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 − LJ 2 α2k‖δ̂ ( x ( k ) , ωk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) ‖ 2 2 ≥ J ( θk ) + αk 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk−τk ) − δ̂ ( x ( k ) , ω∗k ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 + αk 〈 ∇J ( θk ) , δ̂ ( x ( k ) , ω∗k ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 − LJ 2 C2pα 2 k , where the last inequality follows the definition of Cp in ( 39 ) . Taking expectation on both sides of the last inequality yields E [ J ( θk+1 ) ] ≥ E [ J ( θk ) ] + αk E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk−τk ) − δ̂ ( x ( k ) , ω∗k ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I1 + αk E 〈 ∇J ( θk ) , δ̂ ( x ( k ) , ω∗k ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I2 −LJ 2 C2pα 2 k. ( 55 ) We first decompose I1 as I1 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk−τk ) − δ̂ ( x ( k ) , ω∗k ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk−τk ) − δ̂ ( x ( k ) , ωk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I ( 1 ) 1 + E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk ) − δ̂ ( x ( k ) , ω∗k ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I ( 2 ) 1 . We bound I ( 1 ) 1 as I ( 1 ) 1 = E 〈 ∇J ( θk ) , ( γφ ( s′ ( k ) ) − φ ( s ( k ) ) ) > ( ωk−τk − ωk ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −E [ ‖∇J ( θk ) ‖2‖γφ ( s′ ( k ) ) − φ ( s ( k ) ) ‖2‖ωk − ωk−τk‖2‖ψθk−τk ( s ( k ) , a ( k ) ) ‖2 ] ≥ −2Cψ E [ ‖∇J ( θk ) ‖2‖ωk − ωk−τk‖2 ] ≥ −2CψCδK0βk−1 E ‖∇J ( θk ) ‖2 , where the last inequality follows ‖ωk − ωk−τk‖2 = ∥∥∥∥∥ k−1∑ i=k−τk ( ωi+1 − ωi ) ∥∥∥∥∥ 2 ≤ k−1∑ i=k−τk ‖βig ( xi , ωi−τi ) ‖2 ≤ βk−1 k−1∑ i=k−τk ‖g ( xi , ωi−τi ) ‖2 ≤ βk−1K0Cδ , where the second inequality is due to the monotonicity of step size , and the third one follows ( 32 ) . 
Then we bound I ( 2 ) 1 as I ( 2 ) 1 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk ) − δ̂ ( x ( k ) , ω∗k ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 = −E 〈 ∇J ( θk ) , ( γφ ( s′ ( k ) ) − φ ( s ( k ) ) ) > ( ω∗k − ωk ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −E [ ‖∇J ( θk ) ‖2‖γφ ( s′ ( k ) ) − φ ( s ( k ) ) ‖2‖ωk − ω ∗ k‖2‖ψθk−τk ( s ( k ) , a ( k ) ) ‖2 ] ≥ −2Cψ E [ ‖∇J ( θk ) ‖2‖ωk − ω∗k‖2 ] . Collecting the lower bounds of I ( 1 ) 1 and I ( 2 ) 1 gives I1 ≥ −2Cψ E [ ‖∇J ( θk ) ‖2 ( CδK0βk−1 + ‖ωk − ω∗k‖2 ) ] . ( 56 ) Now we consider I2 . We first decompose I2 as I2 = E 〈 ∇J ( θk ) , δ̂ ( x ( k ) , ω∗k ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ̂ ( x ( k ) , ω∗k−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I ( 1 ) 2 + E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k−τk ) − δ ( x ( k ) , θk−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I ( 2 ) 2 + E 〈 ∇J ( θk ) , δ ( x ( k ) , θk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 I ( 3 ) 2 +‖∇J ( θk ) ‖22 . We bound I ( 1 ) 2 as I ( 1 ) 2 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ̂ ( x ( k ) , ω∗k−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 = E 〈 ∇J ( θk ) , ( γφ ( s′ ( k ) ) − φ ( s ( k ) ) ) > ( ω∗k − ω∗k−τk ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −E [ ‖∇J ( θk ) ‖2‖ ( γφ ( s′ ( k ) ) − φ ( s ( k ) ) ) > ‖2 ∥∥ω∗k − ω∗k−τk∥∥2 ‖ψθk−τk ( s ( k ) , a ( k ) ) ‖2 ] ≥ −LV Cψ ( 1 + γ ) E ∥∥ω∗k − ω∗k−τk∥∥2 ≥ −LV LωCψ ( 1 + γ ) E ‖θk − θk−τk‖2 ≥ −LV LωCψCp ( 1 + γ ) K0αk−K0 , where the second last inequality follows from Proposition 2 and the last inequality uses ( 39 ) as ‖θk − θk−τk‖2 ≤ k−1∑ i=k−τk ‖θi+1 − θi‖2 = k−1∑ i=k−τk αi‖δ̂ ( xi , ωi−τi ) ψθi−τi ( si , ai ) ‖2 ≤ k−1∑ i=k−τk αk−τkCp ≤ CpK0αk−K0 . ( 57 ) We bound I ( 2 ) 2 as I ( 2 ) 2 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k−τk ) − δ ( x ( k ) , θk−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −E [ ‖∇J ( θk ) ‖2 ∣∣∣δ̂ ( x ( k ) , ω∗k−τk ) − δ ( x ( k ) , θk−τk ) ∣∣∣ ‖ψθk−τk ( s ( k ) , a ( k ) ) ‖2 ] ≥ −Cψ E [ ‖∇J ( θk ) ‖2 ∣∣∣δ̂ ( x ( k ) , ω∗k−τk ) − δ ( x ( k ) , θk−τk ) ∣∣∣ ] = −Cψ E [ ‖∇J ( θk ) ‖2 ∣∣∣γ ( φ ( s′ ( k ) ) > ω∗k−τk − Vπθk−τk ( s′ ( k ) ) ) + Vπθk−τk ( s ( k ) ) − φ ( s ( k ) ) > ω∗k−τk ∣∣∣ ] ≥ −Cψ E [ ‖∇J ( θk ) ‖2 ( γ ∣∣∣φ ( s′ ( k ) ) > ω∗k−τk − Vπθk−τk ( s′ ( k ) ) ∣∣∣+ ∣∣∣Vπθk−τk ( s ( k ) ) − φ ( s ( k ) ) > ω∗k−τk ∣∣∣ ) ] = −Cψ E [ ‖∇J ( θk ) ‖2 E [ γ ∣∣∣φ ( s′ ( k ) ) > ω∗k−τk − Vπθk−τk ( s′ ( k ) ) ∣∣∣+ ∣∣∣Vπθk−τk ( s ( k ) ) − φ ( s ( k ) ) > ω∗k−τk ∣∣∣∣∣∣ θk , θk−τk ] ] ≥ −2Cψ app E ‖∇J ( θk ) ‖2 ≥ −2CψLV fa − 2Cψ sp E ‖∇J ( θk ) ‖2 ( 58 ) where the second last inequality follows from the fact that E [ γ ∣∣∣φ ( s′ ( k ) ) > ω∗k−τk − Vπθk−τk ( s′ ( k ) ) ∣∣∣+ ∣∣∣Vπθk−τk ( s ( k ) ) − φ ( s ( k ) ) > ω∗k−τk ∣∣∣ ] ≤ γ √ E ∣∣∣φ ( s′ ( k ) ) > ω∗k−τk − Vπθk−τk ( s′ ( k ) ) ∣∣∣2 + √ E ∣∣∣Vπθk−τk ( s ( k ) ) − φ ( s ( k ) ) > ω∗k−τk ∣∣∣2 ≤ 2 app . 
Define artificial transition x̄ ( k ) : = ( s ( k ) , a ( k ) , s̄′ ( k ) ∼ P ) , then I ( 3 ) 2 can be bounded as I ( 3 ) 2 = E 〈 ∇J ( θk ) , δ ( x ( k ) , θk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 = E [ E [ 〈 ∇J ( θk ) , δ ( x ( k ) , θk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉∣∣∣ θk−τk , θk ] ] = E 〈 ∇J ( θk ) , E [ ( δ ( x ( k ) , θk−τk ) − δ ( x̄ ( k ) , θk−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] 〉 + E 〈 ∇J ( θk ) , E [ δ ( x̄ ( k ) , θk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] −∇J ( θk ) 〉 ≥ −E [ ‖∇J ( θk ) ‖2 ∥∥∥E [ ( δ ( x ( k ) , θk−τk ) − δ ( x̄ ( k ) , θk−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] ∥∥∥2 ] − E [ ‖∇J ( θk ) ‖2 ∥∥∥E [ δ ( x̄ ( k ) , θk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] −∇J ( θk ) ∥∥∥2 ] . ( 59 ) The first term in the last inequality can be bounded as E [ ( δ ( x ( k ) , θk−τk ) − δ ( x̄ ( k ) , θk−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] = E [ ( δ ( x ( k ) , θk−τk ) − δ ( x̄ ( k ) , θk−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] = E [ ( r ( x ( k ) ) + γ E [ r ( s′k , a′ , s′′ ) ] − ( r ( x̄ ( k ) ) + γ E [ r ( s̄′k , a′ , s′′ ) ] ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] ≤ 2Cψrmax‖P̃ − P‖TV ≤ 8Cψrmax ( 1− γ ) , ( 60 ) where the last inequality follows ‖P̃ − P‖TV = 2 ∫ s′∈S ∣∣∣P̃ ( s′|s , a ) − P ( s′|s , a ) ∣∣∣ = 2 ( 1− γ ) ∫ s′∈S |P ( s′|s , a ) − η ( s′ ) | ≤ 4 ( 1− γ ) . ( 61 ) The second term in ( 59 ) can be rewritten as E [ δ ( x̄ ( k ) , θk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] = E s ( k ) ∼µθk−τk a ( k ) ∼πθk−τk s̄′ ( k ) ∼P [ ( r ( x̄ ( k ) ) + γVπθk−τk ( s̄′ ( k ) ) − Vπθk−τk ( s ( k ) ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣∣θk−τk , θk ] = E s ( k ) ∼µθk−τk a ( k ) ∼πθk−τk [ ( Qπθk−τk ( s ( k ) , a ( k ) ) − Vπθk−τk ( s ( k ) ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣∣θk−τk , θk ] = E s ( k ) ∼µθk−τk a ( k ) ∼πθk−τk [ Aπθk−τk ( s ( k ) , a ( k ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣∣θk−τk , θk ] = E s ( k ) ∼dθk−τk a ( k ) ∼πθk−τk [ Aπθk−τk ( s ( k ) , a ( k ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣∣θk−τk , θk ] = ∇J ( θk−τk ) ( 62 ) where the second last equality follows µθ ( · ) = dθ ( · ) with dθ being a shorthand notation of dπθ [ 6 ] . Substituting ( 60 ) and ( 62 ) into ( 59 ) yields I ( 3 ) 2 ≥ −8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 − E [ ‖∇J ( θk ) ‖2‖∇J ( θk−τk ) −∇J ( θk ) ‖2 ] ≥ −8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 − LV LJ E ‖θk−τk − θk‖2 ≥ −8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 − LV LJCpK0αk−K0 , ( 63 ) where the second last inequality is due to LJ -Lipschitz of policy gradient shown in Proposition 1 , and the last inequality follows ( 57 ) . Collecting lower bounds of I ( 1 ) 2 , I ( 2 ) 2 and I ( 3 ) 2 gives I2 ≥ −D1K0αk−K0 − ( 2Cψ sp + 8Cψrmax ( 1− γ ) ) E ‖∇J ( θk ) ‖2 − 2CψLV fa + ‖∇J ( θk ) ‖ 2 2 , ( 64 ) where the constant is D1 : = LV LωCψCp ( 1 + γ ) + LV LJCp . Substituting ( 56 ) and ( 64 ) into ( 55 ) yields E [ J ( θk+1 ) ] ≥ E [ J ( θk ) ] − 2αkCψ ( sp + 4rmax ( 1− γ ) + CδK0βk−1 + ‖ωk − ω∗k‖2 ) E ‖∇J ( θk ) ‖2 − αkD1K0αk−K0 − 2αkCψLV fa + αk‖∇J ( θk ) ‖22 − LJ 2 C2pα 2 k. ( 65 ) By following Cauchy-Schwarz inequality , the second term in ( 65 ) can be bounded as ( sp + 4rmax ( 1− γ ) + CδK0βk−1 + ‖ωk − ω∗k‖2 ) E ‖∇J ( θk ) ‖2 ≤ √ E ‖∇J ( θk ) ‖22 E [ ( sp + 4rmax ( 1− γ ) + CδK0βk−1 + ‖ωk − ω∗k‖2 ) 2 ] ≤ √ E ‖∇J ( θk ) ‖22 √ E [ 4C2δK 2 0β 2 k−1 + 4‖ωk − ω∗k‖22 + 4 2sp + 64r2max ( 1− γ ) 2 ] = 2 √ E ‖∇J ( θk ) ‖22 √ C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) , ( 66 ) where the last inequality follows the order of sp in Lemma 7 . 
Collecting the upper bound gives E [ J ( θk+1 ) ] ≥ E [ J ( θk ) ] − 4αkCψ √ E ‖∇J ( θk ) ‖22 √ C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) − αkD1K0αk−K0 − 2αkCψLV fa + αk‖∇J ( θk ) ‖22 − LJ 2 C2pα 2 k. ( 67 ) Dividing both sides of ( 67 ) by αk , then rearranging and taking summation on both sides give K∑ k=K0 E ‖∇J ( θk ) ‖22 ≤ K∑ k=K0 1 αk ( E [ J ( θk+1 ) ] − E [ J ( θk ) ] ) I3 + K∑ k=K0 ( D1K0αk−K0 + LJ 2 C2pαk ) I4 + 4Cψ K∑ k=K0 √ E ‖∇J ( θk ) ‖22 √ C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) I5 + 2CψLV ( K −K0 + 1 ) fa . ( 68 ) We bound I3 as I3 = K∑ k=K0 1 αk ( E [ J ( θk+1 ) ] − E [ J ( θk ) ] ) = K∑ k=K0 ( 1 αk−1 − 1 αk ) E [ J ( θk ) ] − 1 αMK−1 E [ J ( θMK ) ] + 1 αK E [ J ( θK+1 ) ] ≤ 1 αK E [ J ( θK+1 ) ] ≤ rmax 1− γ 1 αK = O ( Kσ1 ) , ( 69 ) where the first inequality is due to the αk is monotonic decreasing and positive , and last inequality is due to Vπθ ( s ) ≤ rmax1−γ for any s ∈ S and πθ . We bound I4 as I4 = K∑ k=K0 ( D1K0αk−K0 + LJ 2 C2pαk ) ≤ K−K0∑ k=0 ( D1K0αk + LJ 2 C2pαk ) = O ( K0K1−σ1 ) . We bound I5 as I5 = K∑ k=K0 √ E ‖∇J ( θk ) ‖22 √ C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) ≤ √√√√ K∑ k=K0 E ‖∇J ( θk ) ‖22 √√√√ K∑ k=K0 ( C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) ) = √√√√ K∑ k=K0 E ‖∇J ( θk ) ‖22 √√√√C2δK20 K∑ k=K0 β2k−1 + K∑ k=K0 E ‖ωk − ω∗k‖22 +O ( K 2sp ) , ( 70 ) where the first inequality follows Cauchy-Schwartz inequality . In ( 70 ) , we have K∑ k=K0 β2k−1 ≤ K−K0∑ k=0 β2k = K−K0∑ k=0 c22 ( 1 + k ) −2σ2 = O ( K1−2σ2 ) . Substituting the last equality into ( 70 ) gives I5 ≤ √√√√ K∑ k=MK E ‖∇J ( θk ) ‖22 √√√√O ( K20K1−2σ2 ) + K∑ k=MK E ‖ωk − ω∗k‖22 +O ( K 2sp ) . ( 71 ) Dividing both sides of ( 67 ) by K −K0 + 1 and collecting upper bounds of I3 , I4 and I5 give 1 K −K0 + 1 K∑ k=K0 E ‖∇J ( θk ) ‖22 ≤ 4Cψ K −K0 + 1 √√√√ K∑ k=K0 E ‖∇J ( θk ) ‖22 √√√√O ( K20K1−2σ2 ) + K∑ k=K0 E ‖ωk − ω∗k‖22 +O ( K 2sp ) +O ( 1 K1−σ1 ) +O ( K0 Kσ1 ) +O ( fa ) . ( 72 ) Define the following functions T4 ( K ) : = 1 K −K0 + 1 K∑ k=K0 E ‖∇J ( θk ) ‖22 , T5 ( K ) : = 1 K −K0 + 1 ( O ( K20K1−2σ2 ) + K∑ k=K0 E ‖ωk − ω∗k‖22 +O ( K 2sp ) ) , T6 ( K ) : = O ( 1 K1−σ1 ) +O ( K0 Kσ1 ) +O ( fa ) . Then ( 72 ) can be rewritten as T4 ( K ) ≤ T6 ( K ) + √ 2 ( 1 + γ ) Cψ √ T4 ( K ) √ T5 ( K ) . Solving this quadratic inequality in terms of T4 ( K ) , we obtain T4 ( K ) ≤ 2T6 ( K ) + 4 ( 1 + γ ) 2C2ψT5 ( K ) , ( 73 ) which implies 1 K −K0 + 1 K∑ k=K0 E ‖∇J ( θk ) ‖22 = O ( 1 K1−σ1 ) +O ( K0 Kσ1 ) +O ( K20 K2σ2 ) +O ( 1 K −K0 + 1 K∑ k=K0 E ‖ωk − ω∗k‖22 ) +O ( app ) . We further have 1 K K∑ k=1 E ‖∇J ( θk ) ‖22 ≤ 1 K ( K0−1∑ k=1 L2V + K∑ k=K0 E ‖∇J ( θk ) ‖22 ) = K0 − 1 K L2V + K −K0 + 1 K 1 K −K0 + 1 K∑ k=K0 E ‖∇J ( θk ) ‖22 = O ( K0 K ) +O ( 1 K −K0 + 1 K∑ k=K0 E ‖∇J ( θk ) ‖22 ) = O ( 1 K −K0 + 1 K∑ k=K0 E ‖∇J ( θk ) ‖22 ) ( 74 ) which completes the proof . B.3 PROOF OF THEOREM 3 Given the definition in Section B.1 , we now give the convergence proof of critic update in Algorithm 1 with linear function approximation and Markovian sampling . By following the derivation of ( 40 ) , we have E ‖ωk+1 − ω∗k+1‖22 ≤ ( 1− 2λβk ) E ‖ωk − ω∗k‖ 2 2 + 2βk ( C1 αk βk + C2K0βk−K0 ) E ‖ωk − ω∗k‖2 + 2βk E 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 + Cqβ 2 k , ( 75 ) where C1 : = CpLω , C2 : = Cδ ( 1 + γ ) and Cq : = 2C2δ + 2L 2 ωC 2 p max ( k ) α2k β2k = 2C2δ + 2L 2 ωC 2 p c21 c22 . Now we consider the third item in the last inequality . For some m ∈ N+ , we define M : = ( K0 + 1 ) m+K0 . Following Lemma 4 ( to be presented in Sec . 
C.1 ) , for some dm ≤M and positive constants C4 , C5 , C6 , C7 , we have E 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 ≤ C4 E ‖θk − θk−dm‖2 + C5 dm∑ i=τk E ‖θk−i − θk−dm‖2 + C6 E ‖ωk − ωk−dm‖2 + C7κρm−1 ≤ C4 k−1∑ i=k−dm E ‖θi+1 − θi‖2 + C5 dm−1∑ i=τk k−i−1∑ j=k−dm E ‖θj+1 − θj‖2 + C6 k−1∑ i=k−dm E ‖ωi+1 − ωi‖2 + C7κρm−1 ≤ C4 k−1∑ i=k−dm αiCp + C5 dm−1∑ i=τk k−i−1∑ j=k−dm αjCp + C6 k−1∑ i=k−dm βiCδ + C7κρ m−1 ≤ C4αk−dm k−1∑ i=k−dm Cp + C5αk−dm dm−1∑ i=τk k−i−1∑ j=k−dm Cp + C6βk−dm k−1∑ i=k−dm Cδ + C7κρ m−1 ≤ C4dmCpαk−dm + C5 ( dm − τk ) 2Cpαk−dm + C6dmCδβk−dm + C7κρm−1 ≤ ( C4M + C5M 2 ) Cpαk−M + C6MCδβk−M + C7κρ m−1 , ( 76 ) where the third last inequality is due to the monotonicity of step size , and the last inequality is due to τk ≥ 0 and dm ≤M . Further letting m = mK which is defined in ( 21 ) yields E 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 = ( C4MK + C5M 2 K ) Cpαk−MK + C6CδMKβk−MK + C7κρ mK−1 ≤ ( C4MK + C5M 2 K ) Cpαk−MK + C6CδMKβk−MK + C7αK , ( 77 ) where MK = ( K0 + 1 ) mK +K0 , and the last inequality follows the definition of mK . Substituting ( 77 ) into ( 75 ) , then rearranging and summing up both sides over k = MK , ... , K yield 2λ K∑ k=MK E ‖ωk − ω∗k‖ 2 2 ≤ K∑ k=MK 1 βk ( E ‖ωk − ω∗k‖ 2 2 − E ∥∥ωk+1 − ω∗k+1∥∥22 ) I1 +Cq K∑ k=MK βk I2 + 2 K∑ k=MK ( ( C4MK + C5M 2 K ) Cpαk−MK + C6CδMKβk−MK + C7αK ) I3 + 2 K∑ k=MK ( C1 αk βk + C2K0βk−K0 ) E ‖ωk − ω∗k‖2 I4 . ( 78 ) where the order of I1 , I2 and I4 have already been given by ( 45 ) , ( 46 ) and ( 50 ) respectively . We bound I3 as I3 = ( C4MK + C5M 2 K ) Cp K∑ k=MK αk + C6CδMK K∑ k=MK βk + C7αK K∑ k=MK 1 ≤ ( C4MK + C5M 2 K ) Cpc1 K1−σ1 1− σ1 + C6CδMKc2 K1−σ2 1− σ2 + C7c1K ( 1 +K ) −σ1 = O ( ( K20 log 2K ) K1−σ1 ) +O ( ( K0 logK ) K 1−σ2 ) , ( 79 ) where the last inequality follows from the integration rule ∑b k=a k −σ ≤ b 1−σ 1−σ , and the last equality is due to O ( MK ) = O ( K0mK ) = O ( K0 logK ) . Collecting the bounds of I1 , I2 , I3 and I4 , and dividing both sides of ( 78 ) by K −MK + 1 yield 2λ 1 K −MK + 1 K∑ k=MK E ‖ωk − ω∗k‖ 2 2 ≤ √ O ( K2 ( σ2−σ1 ) +1 ) +O ( K0K−σ1+1 ) +O ( K20K1−2σ2 ) K −MK + 1 √√√√ K∑ k=MK E ‖ωk − ω∗k‖ 2 2 +O ( 1 K1−σ2 ) +O ( K20 log 2K Kσ1 ) +O ( K0 logK Kσ2 ) . ( 80 ) Similar to the derivation of ( 52 ) , ( 80 ) implies 1 K −MK + 1 K∑ k=MK E ‖ωk − ω∗k‖ 2 2 = O ( 1 K1−σ2 ) +O ( 1 K2 ( σ1−σ2 ) ) +O ( K20 K2σ2 ) +O ( K20 log 2K Kσ1 ) +O ( K0 logK Kσ2 ) . Similar to ( 53 ) , we have 1 K K∑ k=1 E ‖ωk − ω∗k‖22 = O ( K0 logK K ) +O ( 1 K −MK + 1 K∑ k=MK E ‖ωk − ω∗k‖ 2 2 ) = O ( 1 K −MK + 1 K∑ k=MK E ‖ωk − ω∗k‖ 2 2 ) ( 81 ) which completes the proof . B.4 PROOF OF THEOREM 4 Given the definition in section B.2 , we now give the convergence proof of actor update in Algorithm 1 with linear value function approximation and Markovian sampling method . By following the derivation of ( 55 ) , we have E [ J ( θk+1 ) ] ≥ E [ J ( θk ) ] + αk E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk−τk ) − δ̂ ( x ( k ) , ω∗k ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I1 + αk E 〈 ∇J ( θk ) , δ̂ ( x ( k ) , ω∗k ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I2 −LJ 2 C2pα 2 k. ( 82 ) The item I1 can be bounded by following ( 56 ) as I1 ≥ −2Cψ E [ ‖∇J ( θk ) ‖2 ( CδK0βk−1 + ‖ωk − ω∗k‖2 ) ] . ( 83 ) Next we consider I2 . We first decompose it as I2 = E 〈 ∇J ( θk ) , δ̂ ( x ( k ) , ω∗k ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I ( 1 ) 2 + E 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 I ( 2 ) 2 +E ‖∇J ( θk ) ‖22 . 
( 84 ) For some m ∈ N+ , define M : = ( K0 + 1 ) m + K0 . Following Lemma 5 , for some dm ≤ M and positive constants D2 , D3 , D4 , D5 , I ( 1 ) 2 can be bounded as I ( 1 ) 2 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −D2 E ‖θk−τk − θk−dm‖2 −D3 E ‖θk − θk−dm‖2 −D4 k−τk∑ i=k−dm E ‖θi − θk−dm‖2 −D5κρm−1 − 2CψLV fa − 2Cψ sp E ‖∇J ( θk ) ‖2 ≥ −D2 ( dm − τk ) Cpαk−dm −D3dmCpαk−dm −D4 ( dm − τk ) 2Cpαk−dm −D5κρm−1 − 2CψLV fa − 2Cψ sp E ‖∇J ( θk ) ‖2 , ( 85 ) where the derivation of the last inequality is similar to that of ( 76 ) . By setting m = mK in ( 85 ) , and following the fact that dmK ≤MK and τk ≥ 0 , we have I ( 1 ) 2 ≥ −D2MKCpαk−MK −D3MKCpαk−MK −D4M2KCpαk−MK −D5κρmK−1 − 2CψLV fa − 2Cψ sp E ‖∇J ( θ ) ‖2 = − ( ( D2 +D3 ) CpMK +D4CpM 2 K ) αk−MK −D5κρmK−1 − 2CψLV fa − 2Cψ sp E ‖∇J ( θk ) ‖2 ≥ − ( ( D2 +D3 ) CpMK +D4CpM 2 K ) αk−MK −D5αK − 2CψLV fa − 2Cψ sp E ‖∇J ( θk ) ‖2 , ( 86 ) where the last inequality is due to the definition of mK . Following Lemma 6 , for some positive constants D6 , D7 , D8 and D9 , we bound I ( 2 ) 2 as I ( 2 ) 2 = E 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 ≥ −D6 E ‖θk−τk − θk−dm‖2 −D7 E ‖θk − θk−dm‖2 −D8 dm∑ i=τk E ‖θk−i − θk−dm‖2 −D9κρm−1 − 8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 . Similar to the derivation of ( 86 ) , we have I ( 2 ) 2 ≥ − ( D6 +D7 +D8MK ) CpMKαk−MK −D9αK − 8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 . ( 87 ) Collecting the lower bounds of I ( 1 ) 2 and I ( 2 ) 2 yields I2 ≥ −2CψLV fa − 2Cψ ( sp + 4rmax ( 1− γ ) ) E ‖∇J ( θk ) ‖2 + E ‖∇J ( θk ) ‖22 −DKαk−MK − ( D5 +D9 ) αK , ( 88 ) where we define DK : = ( D4 +D8 ) CpM2K + ( D2 +D3 +D6 +D7 ) CpMK for brevity . Substituting ( 83 ) and ( 88 ) into ( 82 ) yields E [ J ( θk+1 ) ] ≥ E [ J ( θk ) ] − 2αkCψ E [ ‖∇J ( θk ) ‖2 ( sp + 4rmax ( 1− γ ) + CδK0βk−1 + ‖ωk − ω∗k‖2 ) ] − αk ( DKαk−MK + ( D5 +D9 ) αK ) − 2CψLV faαk + αk E ‖∇J ( θk ) ‖22 − LJ 2 C2pα 2 k. Similar to the derivation of ( 67 ) , the last inequality implies E [ J ( θk+1 ) ] ≥ E [ J ( θk ) ] − 4αkCψ √ E ‖∇J ( θk ) ‖22 √ C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) − αk ( DKαk−MK + ( D5 +D9 ) αK ) − 2CψLV faαk + αk E ‖∇J ( θk ) ‖22 − LJ 2 C2pα 2 k. Rearranging and dividing both sides by αk yield E ‖∇J ( θk ) ‖22 ≤ 1 αk ( E [ J ( θk+1 ) ] − E [ J ( θk ) ] ) +DKαk−MK + ( D5 +D9 ) αK + LJ 2 C2pαk + 4Cψ √ E ‖∇J ( θk ) ‖22 √ C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) + 2CψLV fa . Taking summation gives K∑ k=MK E ‖∇J ( θk ) ‖22 ≤ K∑ k=MK 1 αk ( E [ J ( θk+1 ) ] − E [ J ( θk ) ] ) I3 + K∑ k=MK ( DKαk−MK + LJ 2 C2pαk + ( D5 +D9 ) αK ) I4 + 4Cψ K∑ k=MK √ E ‖∇J ( θk ) ‖22 √ C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) I5 + 2CψLV ( K −MK + 1 ) fa . ( 89 ) in which the upper bounds of I3 and I5 have already been given by ( 69 ) and ( 71 ) respectively . We bound I4 as I4 = K∑ k=MK ( DKαk−MK + LJ 2 C2pαk + ( D5 +D9 ) αK ) ≤ K∑ k=MK ( DKαk−MK + LJ 2 C2pαk−MK + ( D5 +D9 ) αK ) = ( DK + LJ 2 C2p ) K∑ k=MK αk−MK + ( D5 +D9 ) ( K −MK + 1 ) αK = ( DK + LJ 2 C2p ) K−MK∑ k=0 αk + ( D5 +D9 ) ( K −MK + 1 ) αK ≤ ( DK + LJ 2 C2p ) c1 1− σ1 K1−σ1 + c1 ( D5 +D9 ) ( K + 1 ) 1−σ1 = O ( ( K20 log 2K ) K1−σ1 ) ( 90 ) where the last inequality uses ∑b k=a k −σ ≤ b 1−σ 1−σ , and the last equality is due to the fact that O ( DK ) = O ( M2K +MK ) = O ( ( K0mK ) 2 +K0mK ) = O ( K20 log 2K ) . 
Substituting the upper bounds of I3 , I4 and I5 into ( 89 ) , and dividing both sides by K−MK + 1 give 1 K −MK + 1 K∑ k=MK E ‖∇J ( θk ) ‖22 ≤ 4Cψ K −MK + 1 √√√√ K∑ k=MK E ‖∇J ( θk ) ‖22 √√√√O ( K20K1−2σ2 ) + K∑ k=MK E ‖ωk − ω∗k‖22 +O ( K 2sp ) +O ( 1 K1−σ1 ) +O ( K20 log 2K Kσ1 ) +O ( fa ) . ( 91 ) Following the similar steps of those in ( 73 ) , ( 91 ) essentially implies 1 K −MK + 1 K∑ k=MK E ‖∇J ( θk ) ‖22 = O ( 1 K1−σ1 ) +O ( K20 log 2K Kσ1 ) +O ( K20 K2σ2 ) +O ( 1 K −MK + 1 K∑ k=MK E ‖ωk − ω∗θk‖ 2 2 ) +O ( app ) . Similar to ( 74 ) , we have 1 K K∑ k=1 E ‖∇J ( θk ) ‖22 = O ( K0 logK K ) +O ( 1 K −MK + 1 K∑ k=MK E ‖∇J ( θk ) ‖22 ) = O ( 1 K −MK + 1 K∑ k=MK E ‖∇J ( θk ) ‖22 ) which completes the proof . C SUPPORTING LEMMAS C.1 SUPPORTING LEMMAS FOR THEOREM 3 Lemma 4 . For any m ≥ 1 and k ≥ ( K0 + 1 ) m+K0 + 1 , we have E 〈 ωk − ω∗θk , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 ≤ C4 E ‖θk − θk−dm‖2 + C5 dm∑ i=τk E ‖θk−i − θk−dm‖2 + C6 E ‖ωk − ωk−dm‖2 + C7κρm−1 , where dm ≤ ( K0 + 1 ) m + K0 , and C4 : = 2CδLω + 4RωCδ|A|Lπ ( 1 + logρ κ−1 + ( 1 − ρ ) −1 ) , C5 : = 4RωCδ|A|Lπ and C6 : = 4 ( 1 + γ ) Rω + 2Cδ , C7 : = 8RωCδ . Proof . Consider the collection of random samples { x ( k−K0−1 ) , x ( k−K0 ) , ... , x ( k ) } . Suppose x ( k ) is sampled by worker n , then due to Assumption 1 , { x ( k−K0−1 ) , x ( k−K0 ) , ... , x ( k−1 ) } will contain at least another sample drawn by worker n. Therefore , { x ( k− ( K0+1 ) m ) , x ( k− ( K0+1 ) m+1 ) , ... , x ( k−1 ) } will contain at least m samples from worker n. Consider the Markov chain formed by m+ 1 samples in { x ( k− ( K0+1 ) m ) , x ( k− ( K0+1 ) m+1 ) , ... , x ( k ) } : st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−1−−−−−−→ at−m+1 · · · st−1 θk−d1−−−−→ at−1 P̃−→ st θk−d0−−−−→ at P̃−→ st+1 , where ( st , at , st+1 ) = ( s ( k ) , a ( k ) , s′ ( k ) ) , and { dj } m j=0 is some increasing sequence with d0 : = τk . Suppose θk−dm was used to do the kmth update , then we have xt−m = x ( km ) . Following Assumption 1 , we have τkm = km − ( k − dm ) ≤ K0 . Since x ( km ) is in { x ( k− ( K0+1 ) m ) , ... , x ( k ) } , we have km ≥ k − ( K0 + 1 ) m. Combining these two inequalities , we have dm ≤ ( K0 + 1 ) m+K0 . ( 92 ) Given ( st−m , at−m , st−m+1 ) and θk−dm , we construct an auxiliary Markov chain as that in Lemma 2 : st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−−−−→ ãt−m+1 · · · s̃t−1 θk−dm−−−−→ ãt−1 P̃−→ s̃t θk−dm−−−−→ ãt P̃−→ s̃t+1 . For brevity , we define ∆1 ( x , θ , ω ) : = 〈ω − ω∗θ , g ( x , ω ) − g ( θ , ω ) 〉 . Throughout this proof , we use θ , θ′ , ω , ω′ , x and x̃ as shorthand notations of θk , θk−dm , ωk , ωk−dm , xt and x̃t respectively . First we decompose ∆1 ( x , θ , ω ) as ∆1 ( x , θ , ω ) = ∆1 ( x , θ , ω ) −∆1 ( x , θ′ , ω ) I1 + ∆1 ( x , θ ′ , ω ) −∆1 ( x , θ′ , ω′ ) I2 + ∆1 ( x , θ ′ , ω′ ) −∆1 ( x̃ , θ′ , ω′ ) I3 + ∆1 ( x̃ , θ ′ , ω′ ) I4 . ( 93 ) We bound I1 in ( 93 ) as ∆1 ( x , θ , ω ) −∆1 ( x , θ′ , ω ) = 〈ω − ω∗θ , g ( x , ω ) − g ( θ , ω ) 〉 − 〈ω − ω∗θ′ , g ( x , ω ) − g ( θ′ , ω ) 〉 ≤ |〈ω − ω∗θ , g ( x , ω ) − g ( θ , ω ) 〉 − 〈ω − ω∗θ′ , g ( x , ω ) − g ( θ , ω ) 〉| + |〈ω − ω∗θ′ , g ( x , ω ) − g ( θ , ω ) 〉 − 〈ω − ω∗θ′ , g ( x , ω ) − g ( θ′ , ω ) 〉| . ( 94 ) For the first term in ( 94 ) , we have |〈ω − ω∗θ , g ( x , ω ) − g ( θ , ω ) 〉 − 〈ω − ω∗θ′ , g ( x , ω ) − g ( θ , ω ) 〉| = |〈ω∗θ − ω∗θ′ , g ( x , ω ) − g ( θ , ω ) 〉| ≤ ‖ω∗θ − ω∗θ′‖2‖g ( x , ω ) − g ( θ , ω ) ‖ ≤ 2Cδ‖ω∗θ − ω∗θ′‖2 ≤ 2CδLω‖θ − θ′‖2 , where the last inequality is due to Proposition 2 . 
We use x ∼ θ′ as shorthand notations to represent that s ∼ µθ′ , a ∼ πθ′ , s′ ∼ P̃ . For the second term in ( 94 ) , we have |〈ω − ω∗θ′ , g ( x , ω ) − g ( θ , ω ) 〉 − 〈ω − ω∗θ′ , g ( x , ω ) − g ( θ′ , ω ) 〉| = |〈ω − ω∗θ′ , g ( θ′ , ω ) − g ( θ , ω ) 〉| ≤ ‖ω − ω∗θ′‖2‖g ( θ′ , ω ) − g ( θ , ω ) ‖2 ≤ 2Rω‖g ( θ′ , ω ) − g ( θ , ω ) ‖2 = 2Rω ∥∥∥∥ Ex∼θ′ [ g ( x , ω ) ] − Ex∼θ [ g ( x , ω ) ] ∥∥∥∥ 2 ≤ 2Rω sup x ‖g ( x , ω ) ‖2‖µθ′ ⊗ πθ′ ⊗ P̃ − µθ ⊗ πθ ⊗ P̃‖TV ≤ 2RωCδ‖µθ′ ⊗ πθ′ ⊗ P̃ − µθ ⊗ πθ ⊗ P̃‖TV = 4RωCδdTV ( µθ′ ⊗ πθ′ ⊗ P̃ , µθ ⊗ πθ ⊗ P̃ ) ≤ 4RωCδ|A|Lπ ( 1 + logρ κ−1 + ( 1− ρ ) −1 ) ‖θ − θ′‖2 , where the third inequality follows the definition of TV norm , the second last inequality follows ( 32 ) , and the last inequality follows Lemma A.1 . in [ 17 ] . Collecting the upper bounds of the two terms in ( 94 ) yields I1 ≤ [ 2CδLω + 4RωCδ|A|Lπ ( 1 + logρ κ−1 + ( 1− ρ ) −1 ) ] ‖θ − θ′‖2 . Next we bound E [ I2 ] in ( 93 ) as E [ I2 ] = E [ ∆1 ( x , θ′ , ω ) −∆1 ( x , θ′ , ω′ ) ] = E 〈ω − ω∗θ′ , g ( x , ω ) − g ( θ′ , ω ) 〉 − 〈ω′ − ω∗θ′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉 ≤ E |〈ω − ω∗θ′ , g ( x , ω ) − g ( θ′ , ω ) 〉 − 〈ω − ω∗θ′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉| + E |〈ω − ω∗θ′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉 − 〈ω′ − ω∗θ′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉| . ( 95 ) We bound the first term in ( 95 ) as E |〈ω − ω∗θ′ , g ( x , ω ) − g ( θ′ , ω ) 〉 − 〈ω − ω∗θ′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉| = E |〈ω − ω∗θ′ , g ( x , ω ) − g ( x , ω′ ) + g ( θ′ , ω′ ) − g ( θ′ , ω ) 〉| ≤ 2Rω ( E ‖g ( x , ω ) − g ( x , ω′ ) ‖2 + E ‖g ( θ′ , ω′ ) − g ( θ′ , ω ) ‖2 ) ≤ 2Rω ( E ‖g ( x , ω ) − g ( x , ω′ ) ‖2 + E ∥∥∥∥ Ex∼θ′ [ g ( x , ω′ ) ] − Ex∼θ′ [ g ( x , ω ) ] ∥∥∥∥ 2 ) = 2Rω ( E ‖ ( γφ ( s′ ) − φ ( s ) ) > ( ω − ω′ ) ‖2 + E ∥∥∥∥ Ex∼θ′ [ ( γφ ( s′ ) − φ ( s ) ) > ] ( ω′ − ω ) ∥∥∥∥ 2 ) ≤ 2Rω ( ( 1 + γ ) E ‖ω − ω′‖2 + ( 1 + γ ) E ‖ω − ω′‖2 ) = 4Rω ( 1 + γ ) E ‖ω − ω′‖2 . We bound the second term in ( 95 ) as E |〈ω − ω∗θ′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉 − 〈ω′ − ω∗θ′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉| = E |〈ω − ω′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉| ≤ 2Cδ E ‖ω − ω′‖2 . Collecting the upper bounds of the two terms in ( 95 ) yields E [ I2 ] ≤ ( 4 ( 1 + γ ) Rω + 2Cδ ) E ‖ω − ω′‖2 . We first bound I3 as E [ I3|θ′ , ω′ , st−m+1 ] = E [ ∆1 ( x , θ′ , ω′ ) −∆1 ( x̃ , θ′ , ω′ ) |θ′ , ω′ , st−m+1 ] ≤ |E [ ∆1 ( x , θ′ , ω′ ) |θ′ , ω′ , st−m+1 ] − E [ ∆1 ( x̃ , θ′ , ω′ ) |θ′ , ω′ , st−m+1 ] | ≤ sup x |∆1 ( x , θ′ , ω′ ) | ‖P ( x ∈ ·|θ′ , ω′ , st−m+1 ) − P ( x̃ ∈ ·|θ′ , ω′ , st−m+1 ) ‖TV ≤ 8RωCδdTV ( P ( x ∈ ·|θ′ , st−m+1 ) , P ( x̃ ∈ ·|θ′ , st−m+1 ) ) , ( 96 ) where the second last inequality follows the definition of TV norm , and the last inequality follows the fact that |∆1 ( x , θ′ , ω′ ) | ≤ ‖ω′ − ω∗θ′‖2‖g ( x , ω′ ) − g ( θ′ , ω′ ) ‖2 ≤ 4RωCδ . By following ( 22 ) in Lemma 2 , we have dTV ( P ( x ∈ ·|θ′ , st−m+1 ) , P ( x̃ ∈ ·|θ′ , st−m+1 ) ) ≤ 1 2 |A|Lπ dm∑ i=τk E [ ‖θk−i − θk−dm‖2| θ′ , st−m+1 ] . Substituting the last inequality into ( 96 ) , then taking total expectation on both sides yield E [ I3 ] ≤ 4RωCδ|A|Lπ dm∑ i=τk E ‖θk−i − θk−dm‖2 . Next we bound I4 . Define x : = ( s , a , s′ ) where s ∼ µθ′ , a ∼ πθ′ and s′ ∼ P̃ . It is immediate that E [ ∆1 ( x , θ′ , ω′ ) |θ′ , ω′ , st−m+1 ] = 〈ω′ − ω∗θ′ , E [ g ( x , ω′ ) |θ′ , ω′ , st−m+1 ] − g ( θ′ , ω′ ) 〉 = 〈ω′ − ω∗θ′ , g ( θ′ , ω′ ) − g ( θ′ , ω′ ) 〉 = 0 . 
( 97 ) Then we have E [ I4|θ′ , ω′ , st−m+1 ] = E [ ∆1 ( x̃ , θ′ , ω′ ) −∆1 ( x , θ′ , ω′ ) |θ′ , ω′ , st−m+1 ] ≤ |E [ ∆1 ( x̃ , θ′ , ω′ ) |θ′ , ω′ , st−m+1 ] − E [ ∆1 ( x , θ′ , ω′ ) |θ′ , ω′ , st−m+1 ] | ≤ sup x |∆1 ( x , θ′ , ω′ ) | ‖P ( x̃ ∈ ·|θ′ , st−m+1 ) − P ( x ∈ ·|θ′ , st−m+1 ) ‖TV ≤ 8RωCδdTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , P ( x ∈ ·|θ′ , st−m+1 ) ) = 8RωCδdTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) , ( 98 ) where the second inequality follows the definition of TV norm , and the third inequality follows ( 97 ) . The auxiliary Markov chain with policy πθ′ starts from initial state st−m+1 , and s̃t is the ( m− 1 ) th state on the chain . Following Lemma 1 , we have : dTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) = dTV ( P ( ( s̃t , ãt , s̃t+1 ) ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) ≤ κρm−1 . Substituting the last inequality into ( 98 ) and taking total expectation on both sides yield E [ I4 ] ≤ 8RωCδκρm−1 . Taking total expectation on ( 93 ) and collecting bounds of I1 , I2 , I3 , I4 yield E [ ∆1 ( x , θ , ω ) ] ≤ C4 E ‖θk − θk−dm‖2 + C5 dm∑ i=τk E ‖θk−i − θk−dm‖2 + C6 E ‖ωk − ωk−dm‖2 + C7κρm−1 , where C4 : = 2CδLω + 4RωCδ|A|Lπ ( 1 + logρ κ−1 + ( 1 − ρ ) −1 ) , C5 : = 4RωCδ|A|Lπ , C6 : = 4 ( 1 + γ ) Rω + 2Cδ and C7 : = 8RωCδ . C.2 SUPPORTING LEMMAS FOR THEOREM 4 Lemma 5 . For any m ≥ 1 and k ≥ ( K0 + 1 ) m+K0 + 1 , we have E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −D2 E ‖θk−τk − θk−dm‖2 −D3 E ‖θk − θk−dm‖2 −D4 dm∑ i=τk E ‖θk−i − θk−dm‖2 −D5κρm−1 − 2CψLV fa − 2Cψ sp E ‖∇J ( θ ) ‖2 , where D2 : = 2LV LψCδ , D3 : = ( 2CδCψLJ + LV Cψ ( Lω + LV ) ( 1 + γ ) + 2CψLJ app ) , D4 : = 2LV CψCδ|A|Lπ and D5 : = 4LV CψCδ . Proof . For the worker that contributes to the kth update , we construct its Markov chain : st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−1−−−−−−→ at−m+1 · · · st−1 θk−d1−−−−→ at−1 P̃−→ st θk−d0−−−−→ at P̃−→ st+1 , where ( st , at , st+1 ) = ( s ( k ) , a ( k ) , s′ ( k ) ) , and { dj } m j=0 is some increasing sequence with d0 : = τk . By ( 92 ) in Lemma 4 , we have dm ≤ ( K0 + 1 ) m+K0 . Given ( st−m , at−m , st−m+1 ) and θk−dm , we construct an auxiliary Markov chain : st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−−−−→ ãt−m+1 · · · s̃t−1 θk−dm−−−−→ ãt−1 P̃−→ s̃t θk−dm−−−−→ ãt P̃−→ s̃t+1 . First we have〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 = 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ( ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ) 〉 + 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−dm ( s ( k ) , a ( k ) ) 〉 . 
( 99 ) We first bound the fist term in ( 99 ) as〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ( ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ) 〉 ≥ −‖J ( θk ) ‖2|δ̂ ( x ( k ) , ω∗k ) − δ ( x ( k ) , θk ) |‖ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ‖2 ≥ −‖J ( θk ) ‖2 ( |δ̂ ( x ( k ) , ω∗k ) |+ |δ ( x ( k ) , θk ) | ) ‖ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ‖2 ≥ −LV ( |δ̂ ( x ( k ) , ω∗k ) |+ |δ ( x ( k ) , θk ) | ) ‖ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ‖2 ≥ −2LV Cδ‖ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ‖2 ≥ −2LV LψCδ‖θk−τk − θk−dm‖2 , ( 100 ) where the last inequality follows Assumption 3 and second last inequality follows |δ̂ ( x , ω∗θ ) | ≤ |r ( x ) |+ γ‖φ ( s′ ) ‖2‖ω∗θ‖2 + ‖φ ( s ) ‖2‖ω∗θ‖2 ≤ rmax + ( 1 + γ ) Rω ≤ Cδ , |δ ( x , θ ) | ≤ |r ( x ) |+ γ|Vπθ ( s′ ) |+ |Vπθ ( s ) | ≤ rmax + ( 1 + γ ) rmax 1− γ ≤ Cδ . Substituting ( 100 ) into ( 99 ) gives〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −2LV LψCδ‖θk−τk − θk−dm‖2 + 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−dm ( s ( k ) , a ( k ) ) 〉 . ( 101 ) Then we start to bound the second term in ( 101 ) . For brevity , we define ∆2 ( x , θ ) : = 〈 ∇J ( θ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθk−dm ( s , a ) 〉 . In the following proof , we use θ , θ′ , ω∗θ , ω ∗ θ′ , x and x̃ as shorthand notations for θk , θk−dm , ω ∗ k , ω∗k−dm , xt and x̃t respectively . We also define x : = ( s , a , s ′ ) , where s ∼ µθ′ , a ∼ πθ′ and s′ ∼ P̃ . We decompose the second term in ( 101 ) as ∆2 ( x , θ ) = ∆2 ( x , θ ) −∆2 ( x , θ′ ) I1 + ∆2 ( x , θ ′ ) −∆2 ( x̃ , θ′ ) I2 + ∆2 ( x̃ , θ ′ ) −∆2 ( x , θ′ ) I3 + ∆2 ( x , θ ′ ) I4 . We bound the term I1 as I1 = 〈 ∇J ( θ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 − 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ′ ) − δ ( x , θ′ ) ) ψθ′ ( s , a ) 〉 = 〈 ∇J ( θ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 − 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 + 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 − 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ′ ) − δ ( x , θ′ ) ) ψθ′ ( s , a ) 〉 . For the first term in I1 , we have〈 ∇J ( θ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 − 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 = 〈 ∇J ( θ ) −∇J ( θ′ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 ≥ −‖∇J ( θ ) −∇J ( θ′ ) ‖2‖δ̂ ( x , ω∗θ ) − δ ( x , θ ) ‖2‖ψθ′ ( s , a ) ‖2 ≥ −2CδCψ‖∇J ( θ ) −∇J ( θ′ ) ‖2 ≥ −2CδCψLJ‖θ − θ′‖2 , where the last inequality is due to the LJ -Lipschitz of policy gradient shown in Proposition 1 . For the second term in I1 , we have〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 − 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ′ ) − δ ( x , θ′ ) ) ψθ′ ( s , a ) 〉 = 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ ) − δ̂ ( x , ω∗θ′ ) + δ ( x , θ′ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 ≥ −LV Cψ ∣∣∣δ̂ ( x , ω∗θ ) − δ̂ ( x , ω∗θ′ ) + δ ( x , θ′ ) − δ ( x , θ ) ∣∣∣ ≥ −LV Cψ ∣∣γφ ( s′ ) > ( ω∗θ − ω∗θ′ ) + φ ( s ) > ( ω∗θ′ − ω∗θ ) + γVπθ′ ( s′ ) − γVπθ ( s′ ) + Vπθ ( s ) − Vπθ′ ( s ) ∣∣ ≥ −LV Cψ ( γ‖ω∗θ − ω∗θ′‖2 + ‖ω∗θ′ − ω∗θ‖2 + γ|Vπθ′ ( s ′ ) − Vπθ ( s′ ) |+ |Vπθ ( s ) − Vπθ′ ( s ) | ) ≥ −LV Cψ ( γLω‖θ − θ′‖2 + Lω‖θ − θ′‖2 + γLV ‖θ − θ′‖2 + LV ‖θ − θ′‖2 ) = −LV Cψ ( Lω + LV ) ( 1 + γ ) ‖θ − θ′‖2 , where the last inequality is due to the Lω-Lipschitz continuity of ω∗θ shown in Proposition 2 and LV -Lipschitz continuity of Vπθ ( s ) shown in Lemma 3 . 
Collecting the upper bounds of I1 yields I1 ≥ − ( 2CδCψLJ + LV Cψ ( Lω + LV ) ( 1 + γ ) ) ‖θ − θ′‖2 . First we bound I2 as E [ I2|θ′ , st−m+1 ] = E [ ∆2 ( x , θ′ ) −∆2 ( x̃ , θ′ ) |θ′ , st−m+1 ] ≥ − |E [ ∆2 ( x , θ′ ) | θ′ , st−m+1 ] − E [ ∆2 ( x̃ , θ′ ) | θ′ , st−m+1 ] | ≥ − sup x |∆2 ( x , θ′ ) | ‖P ( x ∈ ·|θ′ , st−m+1 ) − P ( x̃ ∈ ·|θ′ , st−m+1 ) ‖TV ≥ −4LV CψCδdTV ( P ( x ∈ ·|θ′ , st−m+1 ) , P ( x̃ ∈ ·|θ′ , st−m+1 ) ) ≥ −2LV CψCδ|A|Lπ dm∑ i=τk E [ ‖θk−i − θk−dm‖2| θ′ , st−m+1 ] , ( 102 ) where the second inequality is due to the definition of TV norm , the last inequality follows ( 22 ) in Lemma 2 , and the second last inequality follows the fact that |∆2 ( x , θ′ ) | ≤ ‖∇J ( θ′ ) ‖2|δ̂ ( x , ω∗θ′ ) − δ ( x , θ′ ) |‖ψθ′ ( s , a ) ‖2 ≤ 2LV CδCψ . ( 103 ) Taking total expectation on both sides of ( 102 ) yields E [ I2 ] ≥ −2LV CψCδ|A|Lπ dm∑ i=τk E ‖θk−i − θk−dm‖2 . Next we bound I3 as E [ I3|θ′ , st−m+1 ] = E [ ∆2 ( x̃ , θ′ ) −∆2 ( x , θ′ ) | θ′ , st−m+1 ] ≥ − |E [ ∆2 ( x̃ , θ′ ) | θ′ , st−m+1 ] − E [ ∆2 ( x , θ′ ) | θ′ , st−m+1 ] | ≥ − sup x |∆2 ( x , θ′ ) | ‖P ( x̃ ∈ ·|θ′ , st−m+1 ) − P ( x ∈ ·|θ′ , st−m+1 ) ‖TV ≥ −4LV CψCδdTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) , ( 104 ) where the second inequality is due to the definition of TV norm , and the last inequality follows ( 103 ) . The auxiliary Markov chain with policy πθ′ starts from initial state st−m+1 , and s̃t is the ( m− 1 ) th state on the chain . Following Lemma 1 , we have : dTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) =dTV ( P ( ( s̃t , ãt , s̃t+1 ) ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) ≤ κρm−1 . Substituting the last inequality into ( 104 ) and taking total expectation on both sides yield E [ I3 ] ≥ −4LV CψCδκρm−1 We bound I4 as E [ I4|θ′ ] = E [ 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ′ ) − δ ( x , θ′ ) ) ψθ′ ( s , a ) 〉∣∣∣ θ′ ] ≥ −Cψ‖∇J ( θ′ ) ‖2 E [ ∣∣∣δ̂ ( x , ω∗θ′ ) − δ ( x , θ′ ) ∣∣∣∣∣∣ θ′ ] = −Cψ‖∇J ( θ′ ) ‖2 E [ ∣∣γ ( φ ( s′ ) > ω∗θ′ − Vπθ′ ( s′ ) ) + Vπθ′ ( s ) − φ ( s ) > ω∗θ′ ∣∣∣∣ θ′ ] ≥ −Cψ‖∇J ( θ′ ) ‖2 ( γ E [ |φ ( s′ ) > ω∗θ′ − Vπθ′ ( s ′ ) | ∣∣ θ′ ] + E [ |Vπθ′ ( s ) − φ ( s ) > ω∗θ′ |∣∣ θ′ ] ) ≥ −Cψ‖∇J ( θ′ ) ‖2 ( γ √ E [ |φ ( s′ ) > ω∗θ′ − Vπθ′ ( s ′ ) |2 ∣∣ θ′ ] +√E [ |Vπθ′ ( s ) − φ ( s ) > ω∗θ′ |2∣∣ θ′ ] ) = −Cψ‖∇J ( θ′ ) ‖2 ( γ √ E s′∼µθ′ |φ ( s′ ) > ω∗θ′ − Vπθ′ ( s ′ ) |2 + √ E s∼µθ′ |Vπθ′ ( s ) − φ ( s ) > ω∗θ′ |2 ) ≥ −2Cψ‖∇J ( θ′ ) ‖2 app , where the second last inequality follows Jensen ’ s inequality . The last inequality further implies E [ I4 ] ≥ −2Cψ E ‖∇J ( θ′ ) −∇J ( θ ) +∇J ( θ ) ‖2 app ≥ −2Cψ app E ‖∇J ( θ′ ) −∇J ( θ ) ‖2 − 2Cψ app E ‖∇J ( θ ) ‖2 ≥ −2Cψ app E ‖∇J ( θ′ ) −∇J ( θ ) ‖2 − 2Cψ fa E ‖∇J ( θ ) ‖2 − 2CψLV sp ≥ −2CψLJ app E ‖θ − θ′‖2 − 2Cψ fa E ‖∇J ( θ ) ‖2 − 2CψLV sp , where the last inequality follows Proposition 1 . Taking total expectation on both sides of ( 101 ) , and collecting lower bounds of I1 , I2 , I3 and I4 yield E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −D2 E ‖θk−τk − θk−dm‖2 −D3 E ‖θk − θk−dm‖2 −D4 dm∑ i=τk E ‖θk−i − θk−dm‖2 −D5κρm−1 − 2CψLV fa − 2Cψ sp E ‖∇J ( θk ) ‖2 , where D2 : = 2LV LψCδ , D3 : = ( 2CδCψLJ + LV Cψ ( Lω + LV ) ( 1 + γ ) + 2CψLJ app ) , D4 : = 2LV CψCδ|A|Lπ and D5 : = 4LV CψCδ . Lemma 6 . 
For any m ≥ 1 and k ≥ ( K0 + 1 ) m+K0 + 1 , we have E 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 ≥ −D6 E ‖θk−τk − θk−dm‖2 −D7 E ‖θk − θk−dm‖2−D8 dm∑ i=τk E ‖θk−i − θk−dm‖2−D9κρm−1 − 8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 , where D6 : = LV CδLψ , D7 : = CpLJ + ( 1 + γ ) L2V Cψ + 2LV LJ + 8CψrmaxLJ ( 1 − γ ) , D8 : = LV ( Cp + LV ) |A|Lπ , D9 : = 2LV ( Cp + LV ) . Proof . For the worker that contributes to the kth update , we construct its Markov chain : st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−1−−−−−−→ at−m+1 · · · st−1 θk−d1−−−−→ at−1 P̃−→ st θk−d0−−−−→ at P̃−→ st+1 , where ( st , at , st+1 ) = ( s ( k ) , a ( k ) , s′ ( k ) ) , and { dj } m j=0 is some increasing sequence with d0 : = τk . By ( 92 ) in Lemma 4 , we have dm ≤ ( K0 + 1 ) m+K0 . Given ( st−m , at−m , st−m+1 ) and θk−dm , we construct an auxiliary Markov chain : st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−−−−→ ãt−m+1 · · · s̃t−1 θk−dm−−−−→ ãt−1 P̃−→ s̃t θk−dm−−−−→ ãt P̃−→ s̃t+1 . First we have 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 = 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ( ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ) 〉 + 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−dm ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 . ( 105 ) We bound the first term in ( 105 ) as〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ( ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ) 〉 ≥ −‖∇J ( θk ) ‖2 ‖δ ( x ( k ) , θk ) ‖2‖ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ‖2 ≥ −LV ‖δ ( x ( k ) , θk ) ‖2‖ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ‖2 ≥ −LV Cδ‖ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ‖2 ≥ −LV CδLψ‖θk−τk − θk−dm‖2 , ( 106 ) where the last inequality follows Assumption 3 , and the second last inequality follows the fact that |δ ( x , θ ) | ≤ |r ( x ) |+ γ|Vπθ ( s′ ) |+ |Vπθ ( s ) | ≤ rmax + ( 1 + γ ) rmax 1− γ ≤ Cδ . Substituting ( 106 ) into ( 105 ) gives〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 ≥ −LV CδLψ‖θk−τk − θk−dm‖2 + 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−dm ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 . ( 107 ) Then we start to bound the second term in ( 107 ) . For brevity , we define ∆3 ( x , θ ) : = 〈 ∇J ( θ ) , δ ( x , θ ) ψθk−dm ( s , a ) −∇J ( θ ) 〉 . Throughout the following proof , we use θ , θ′ , x and x̃ as shorthand notations of θk , θk−dm , xt and x̃t respectively . We decompose ∆3 ( x , θ ) as ∆3 ( x , θ ) = ∆3 ( x , θ ) −∆3 ( x , θ′ ) I1 + ∆3 ( x , θ ′ ) −∆3 ( x̃ , θ′ ) I2 + ∆3 ( x̃ , θ ′ ) I3 . We first bound I1 as |I1| = |∆3 ( x , θ ) −∆3 ( x , θ′ ) | = ∣∣〈∇J ( θ ) , δ ( x , θ ) ψθ′ ( s , a ) 〉 − ‖∇J ( θ ) ‖22 − 〈∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉+ ‖∇J ( θ′ ) ‖22∣∣ ≤ |〈∇J ( θ ) , δ ( x , θ ) ψθ′ ( s , a ) 〉 − 〈∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉|+ ∣∣‖∇J ( θ′ ) ‖22 − ‖∇J ( θ ) ‖22∣∣ ≤ |〈∇J ( θ ) , δ ( x , θ ) ψθ′ ( s , a ) 〉 − 〈∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉|+ ‖∇J ( θ′ ) +∇J ( θ ) ‖2‖∇J ( θ′ ) −∇J ( θ ) ‖2 ≤ |〈∇J ( θ ) , δ ( x , θ ) ψθ′ ( s , a ) 〉 − 〈∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉|+ 2LV LJ‖θ − θ′‖2 , ( 108 ) where the last equality is due to LV -Lipschitz of value function and LJ -Lipschitz of policy gradient . 
We bound the first term in ( 108 ) as |〈∇J ( θ ) , δ ( x , θ ) ψθ′ ( s , a ) 〉 − 〈∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉| ≤ |〈∇J ( θ ) , δ ( x , θ ) ψθ′ ( s , a ) 〉 − 〈∇J ( θ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉| + |〈∇J ( θ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉 − 〈∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉| = |〈∇J ( θ ) , ( δ ( x , θ ) − δ ( x , θ′ ) ) ψθ′ ( s , a ) 〉|+ |〈∇J ( θ ) −∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉| ≤ LV Cψ |δ ( x , θ ) − δ ( x , θ′ ) |+ Cp‖∇J ( θ ) −∇J ( θ′ ) ‖2 = LV Cψ ∣∣γ ( Vπθ ( s′ ) − Vπθ′ ( s′ ) ) + Vπθ′ ( s ) − Vπθ ( s ) ∣∣+ Cp‖∇J ( θ ) −∇J ( θ′ ) ‖2 ≤ LV Cψ ( γ ∣∣Vπθ ( s′ ) − Vπθ′ ( s′ ) ∣∣+ ∣∣Vπθ′ ( s ) − Vπθ ( s ) ∣∣ ) + Cp‖∇J ( θ ) −∇J ( θ′ ) ‖2 ≤ LV Cψ ( γLV ‖θ − θ′‖2 + LV ‖θ′ − θ‖ ) + CpLJ‖θ − θ′‖2 = ( CpLJ + ( 1 + γ ) L 2 V Cψ ) ‖θ − θ′‖2 . Substituting the above inequality into ( 108 ) gives the lower bound of I1 : I1 ≥ − ( CpLJ + ( 1 + γ ) L 2 V Cψ + 2LV LJ ) ‖θ − θ′‖2 . First we bound I2 as E [ I2|θ′ , st−m+1 ] = E [ ∆3 ( x , θ′ ) −∆3 ( x̃ , θ′ ) |θ′ , st−m+1 ] ≥ − |E [ ∆3 ( x , θ′ ) |θ′ , st−m+1 ] − E [ ∆3 ( x̃ , θ′ ) |θ′ , st−m+1 ] | ≥ − sup x |∆3 ( x , θ′ ) | ‖P ( x ∈ ·|θ′ , st−m+1 ) − P ( x̃ ∈ ·|θ′ , st−m+1 ) ‖TV ≥ −2LV ( Cp + LV ) dTV ( P ( x ∈ ·|θ′ , st−m+1 ) , P ( x̃ ∈ ·|θ′ , st−m+1 ) ) ≥ −LV ( Cp + LV ) |A|Lπ dm∑ i=τk E [ ‖θk−i − θk−dm‖2|θ′ , st−m+1 ] , ( 109 ) where the second inequality is due to the definition of TV norm , the last inequality is due to ( 22 ) in Lemma 2 , and thesecond last inequality follows the fact that |∆3 ( x , θ′ ) | ≤ ‖∇J ( θ ) ‖2 ( ‖δ ( x , θ ) ψθk−dm ( s , a ) ‖2 + ‖∇J ( θ ) ‖2 ) ≤ LV ( Cp + LV ) . ( 110 ) Taking total expectation on both sides of ( 109 ) yields E [ I2 ] ≥ −LV ( Cp + LV ) |A|Lπ dm∑ i=τk E ‖θk−i − θk−dm‖2 . Define x : = ( s , a , s′ ) , where s ∼ dθ′ , a ∼ πθ′ and s′ ∼ P̃ . Then we have E [ I3 ] = E [ ∆3 ( x̃ , θ′ ) −∆3 ( x , θ′ ) ] + E [ ∆3 ( x , θ′ ) ] . ( 111 ) We bound the first term in ( 111 ) as E [ ∆3 ( x̃ , θ′ ) −∆3 ( x , θ′ ) |θ′ , st−m+1 ] ≥ − |E [ ∆3 ( x̃ , θ′ ) |θ′ , st−m+1 ] − E [ ∆3 ( x , θ′ ) |θ′ , st−m+1 ] | ≥ − sup x |∆3 ( x , θ′ ) | ‖P ( x̃ ∈ ·|θ′ , st−m+1 ) − P ( x ∈ ·|θ′ , st−m+1 ) ‖TV ≥ −2LV ( Cp + LV ) dTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , P ( x ∈ ·|θ′ , st−m+1 ) ) = −2LV ( Cp + LV ) dTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , dθ′ ⊗ πθ′ ⊗ P̃ ) = −2LV ( Cp + LV ) dTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) ( 112 ) where the second inequality follows the definition of total variation norm , and the third inequality follows ( 110 ) . The last equality is due to the fact shown by [ 6 ] that µθ′ ( · ) = dθ′ ( · ) , where µθ′ is the stationary distribution of an artificial MDP with transition kernel P̃ ( ·|s , a ) and policy πθ′ . The auxiliary Markov chain with policy πθ′ starts from initial state st−m+1 , and s̃t is the ( m− 1 ) th state on the chain . Following Lemma 1 , we have : dTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) = dTV ( P ( ( s̃t , ãt , s̃t+1 ) ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) ≤ κρm−1 . Substituting the last inequality into ( 112 ) and taking total expectation on both sides yield E [ ∆3 ( x̃ , θ′ ) −∆3 ( x , θ′ ) ] ≥ −2LV ( Cp + LV ) κρm−1 . Consider the second term in ( 111 ) . 
Note its form is similar to ( 59 ) , so by following the derivation of ( 63 ) , we directly have E [ ∆3 ( x , θ′ ) ] = E 〈∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) −∇J ( θ′ ) 〉 ≥ −8Cψrmax ( 1− γ ) E ‖∇J ( θ′ ) ‖2 , which further implies E [ ∆3 ( x , θ′ ) ] ≥ −8Cψrmax ( 1− γ ) E ‖∇J ( θ′ ) ‖2 ≥ −8Cψrmax ( 1− γ ) E ‖∇J ( θ′ ) −∇J ( θ ) ‖2 − 8Cψrmax ( 1− γ ) E ‖∇J ( θ ) ‖2 ≥ −8CψrmaxLJ ( 1− γ ) E ‖θ′ − θ‖2−8Cψrmax ( 1− γ ) E ‖∇J ( θ ) ‖2 where the last inequality follows from Proposition 1 . Collecting the lower bounds gives E [ I3 ] ≥ −2LV ( Cp + LV ) κρm−1 − 8Cψrmax ( 1− γ ) ( LJ E ‖θ′ − θ‖2 − E ‖∇J ( θ ) ‖2 ) . Taking total expectation on ∆3 ( x , θ ) and collecting lower bounds of I1 , I2 , I3 yield E [ ∆3 ( x , θ ) ] ≥ − ( CpLJ + ( 1 + γ ) L 2 V Cψ + 2LV LJ + 8CψrmaxLJ ( 1− γ ) ) E ‖θk − θk−dm‖2 − LV ( Cp + LV ) |A|Lπ dm∑ i=τk E ‖θk−i − θk−dm‖2 − 2LV ( Cp + LV ) κρm−1 − 8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 . Taking total expectation on ( 107 ) and substituting the above inequality into it yield E 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 ≥ −D6 E ‖θk−τk − θk−dm‖2 −D7 E ‖θk − θk−dm‖2 −D8 dm∑ i=τk E ‖θk−i − θk−dm‖2 −D9κρm−1 − 8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 , where D6 : = LV CδLψ , D7 : = CpLJ + ( 1 + γ ) L2V Cψ + 2LV LJ + 8CψrmaxLJ ( 1 − γ ) , D8 : = LV ( Cp + LV ) |A|Lπ , D9 : = 2LV ( Cp + LV ) . C.3 EXPLANATION OF THE APPROXIMATION ERROR In this section , we will provide a justification for the circumstances when the approximation error app defined in ( 14 ) is small . Lemma 7 . Suppose Assumption 2 and 4 hold . Then it holds that app ≤ max θ∈Rd √ E s∼µθ |Vπθ ( s ) − V̂ω̄∗θ ( s ) |2 + 4rmax ( λ −1 + λ−2rmax ) ( 1 + logρ κ −1 + 1 1− ρ ) ( 1− γ ) ( 113 ) where ω̄∗θ the critic stationary point of original Markov chain with policy πθ and transition kernel P . In ( 113 ) , the first term captures the quality of critic function parameterization method which also appears in previous works [ 14 , 15 , 17 ] . When using linear critic function approximation , it becomes zero when the value function Vπθ belongs to the linear function space for any θ . The second term corresponds to the error introduced by sampling from the artificial transition kernel P̃ ( ·|s , a ) = ( 1− γ ) P ( ·|s , a ) + γη ( · ) . For a large γ close to 1 , the artificial Markov chain is close to the original one . In this case , the second error term is therefore small . This fact also consists with practice where large γ is commonly used in two time-scale actor critic algorithms [ 3 ] . Before going into the proof , we first define that : Āθ , φ : = E s∼µ̄θ , s′∼Pπθ [ φ ( s ) ( γφ ( s′ ) − φ ( s ) ) > ] , b̄θ , φ : = E s∼µ̄θ , a∼πθ , s′∼P [ r ( s , a , s′ ) φ ( s ) ] , where µ̄θ as the stationary distribution of the original Markov chain with πθ and transition kernel P . Proof . Recall the definition of the approximation error : app = max θ∈Rd √ E s∼µθ |Vπθ ( s ) − V̂ω∗θ ( s ) |2 , where µθ is the stationary distribution of the artificial Markov chain with πθ and transition kernel P̃ , and ω∗θ is the stationary point of critic update under the artificial Markov chain . We decompose app as app = max θ∈Rd √ E s∼µθ |Vπθ ( s ) − V̂ω̄∗θ ( s ) + V̂ω̄∗θ ( s ) − V̂ω∗θ ( s ) |2 ≤ max θ∈Rd √ E s∼µθ |Vπθ ( s ) − V̂ω̄∗θ ( s ) |2 fa + max θ∈Rd √ E s∼µθ |V̂ω̄∗θ ( s ) − V̂ω∗θ ( s ) |2 sp , ( 114 ) where the first term corresponds to the function approximation error fa , and second term corresponds to the sampling error sp . 
With A , b and Ā , b̄ as shorthand notations for Aθ , ψ , bθ , ψ and Āθ , ψ , b̄θ , ψ respectively , we bound the second term in ( 114 ) as |V̂ω̄∗θ ( s ) − V̂ω∗θ ( s ) | = ∣∣φ ( s ) > ω∗θ − φ ( s ) > ω̄∗θ ∣∣ ≤ ∥∥A−1b− Ā−1b̄∥∥ 2 = ∥∥A−1b−A−1b̄+A−1b̄− Ā−1b̄∥∥ 2 ≤ ∥∥A−1 ( b− b̄ ) ∥∥ 2 + ∥∥ ( A−1 − Ā−1 ) b̄∥∥ 2 ≤ λ−1‖b− b̄‖2 + rmax ∥∥A−1 − Ā−1∥∥ 2 = λ−1‖b− b̄‖2 + rmax ∥∥A−1 ( Ā−A ) Ā−1∥∥ 2 ≤ λ−1‖b− b̄‖2 + λ−2rmax ∥∥Ā−A∥∥ 2 . ( 115 ) We bound the first term in last inequality as ‖b− b̄‖2 = ∥∥∥∥∥ Es∼µθ , a∼πθ , s′∼P̃ [ r ( s , a , s′ ) φ ( s ) ] − Es∼µ̄θ , a∼πθ , s′∼P [ r ( s , a , s′ ) φ ( s ) ] ∥∥∥∥∥ ≤ sup ‖r ( s , a , s′ ) φ ( s ) ‖2‖µθ ⊗ πθ ⊗ P̃ − µ̄θ ⊗ πθ ⊗ P‖TV ≤ 2rmaxdTV ( µθ ⊗ πθ ⊗ P̃ , µ̄θ ⊗ πθ ⊗ P ) . ( 116 ) We now bound the divergence term in the last inequality as dTV ( µθ ⊗ πθ ⊗ P̃ , µ̄θ ⊗ πθ ⊗ P ) = ∫ s∈S ∑ a∈A ∫ s′∈S ∣∣∣µθ ( s ) πθ ( a|s ) P̃ ( s′|s , a ) − µ̄θ ( s ) πθ ( a|s ) P ( s′|s , a ) ∣∣∣ = ∫ s∈S ∑ a∈A ∫ s′∈S |µθ ( s ) πθ ( a|s ) P̃ ( s′|s , a ) − µθ ( s ) πθ ( a|s ) P ( s′|s , a ) + µθ ( s ) πθ ( a|s ) P ( s′|s , a ) − µ̄θ ( s ) πθ ( a|s ) P ( s′|s , a ) | ≤ ∫ s∈S ∑ a∈A µθ ( s ) πθ ( a|s ) ∫ s′∈S ∣∣∣P̃ ( s′|s , a ) − P ( s′|s , a ) ∣∣∣+ ∫ s∈S |µθ ( s ) − µ̄θ ( s ) | . ( 117 ) We bound the first term in ( 117 ) as∫ s′∈S ∣∣∣P̃ ( s′|s , a ) − P ( s′|s , a ) ∣∣∣ = ( 1− γ ) ∫ s′∈S |P ( s′|s , a ) − η ( s′ ) | ≤ 2 ( 1− γ ) . ( 118 ) Following [ 39 , Theorem 3.1 ] , the second term in ( 117 ) can be bounded as∫ s∈S |µθ ( s ) − µ̄θ ( s ) | ≤ ( logρ κ −1 + 1 1− ρ ) sup s ∫ s′∈S ∣∣∣∣∣∑ a πθ ( a|s ) ( P̃ ( s′|s , a ) − P ( s′|s , a ) ) ∣∣∣∣∣ ≤ ( logρ κ −1 + 1 1− ρ ) sup s ∑ a πθ ( a|s ) ∫ s′∈S ∣∣∣P̃ ( s′|s , a ) − P ( s′|s , a ) ∣∣∣ ≤ 2 ( logρ κ −1 + 1 1− ρ ) ( 1− γ ) , ( 119 ) where the last inequality follows ( 118 ) . Substituting ( 118 ) and ( 119 ) into ( 117 ) gives dTV ( µθ ⊗ πθ ⊗ P̃ , µ̄θ ⊗ πθ ⊗ P ) ≤ 2 ( 1 + logρ κ −1 + 1 1− ρ ) ( 1− γ ) . Substituting the above inequality into ( 116 ) gives ‖b− b̄‖2 ≤ 4rmax ( 1 + logρ κ −1 + 1 1− ρ ) ( 1− γ ) . ( 120 ) Similarly , we also have ‖A− Ā‖2 ≤ 4rmax ( 1 + logρ κ −1 + 1 1− ρ ) ( 1− γ ) . ( 121 ) Substituting ( 120 ) and ( 121 ) into ( 115 ) , then substituting ( 115 ) into ( 114 ) completes the proof . D EXPERIMENT DETAILS Hardware device . The tests on synthetic environment and CartPole was performed in a 16-core CPU computer . The test on Atari game was run in a 4 GPU computer . Parameterization . For the synthetic environment , we used linear value function approximation and tabular softmax policy [ 36 ] . For CartPole , we used a 3-layer MLP with 128 neurons and sigmoid activation function in each layer . The first two layers are shared for both actor and critic network . For the Atari seaquest game , we used a convolution-LSTM network . For network details , see [ 40 ] . Hyper-parameters . For the synthetic environment tests , we run Algorithm 1 with actor step size αk = 0.05 ( 1+k ) 0.6 and critic step size βk = 0.05 ( 1+k ) 0.4 . In tests of CartPole , we run Algorithm 1 with a minibatch of 20 samples . We update the actor network with a step size of αk = 0.01 ( 1+k ) 0.6 and critic network with a step size of βk = 0.01 ( 1+k ) 0.4 . See Table 1 for hyper-parameters to generate the Atari game results in Figure 4 . | This paper studied the two time scale A3C in discounted MDP based on recent development in the finite sample analysis of A2C. 
The sample complexity result in this paper matches previous results on two-timescale A2C in terms of the dependence on \epsilon, and this paper further shows the benefit of "linear speedup" brought by the structure of A3C. Given the practical usefulness of A3C, the result established in this paper is meaningful. | SP:7a1fd3da1fb6af86b3a25f133d0cfe1fa23b71fa |
Asynchronous Advantage Actor Critic: Non-asymptotic Analysis and Linear Speedup | Asynchronous and parallel implementation of standard reinforcement learning (RL) algorithms is a key enabler of the tremendous success of modern RL. Among many asynchronous RL algorithms, arguably the most popular and effective one is the asynchronous advantage actor-critic (A3C) algorithm. Although A3C has become a workhorse of RL, its theoretical properties are still not well understood, including its non-asymptotic analysis and the performance gain of parallelism (a.k.a. speedup). This paper revisits the A3C algorithm with TD(0) for the critic update, termed A3C-TD(0), with provable convergence guarantees. With linear value function approximation for the TD update, the convergence of A3C-TD(0) is established under both i.i.d. and Markovian sampling. Under i.i.d. sampling, A3C-TD(0) obtains a sample complexity of O(ε^{-2.5}/N) per worker to achieve ε accuracy, where N is the number of workers. Compared to the best-known sample complexity of O(ε^{-2.5}) for two-timescale AC, A3C-TD(0) achieves linear speedup, which justifies the advantage of parallelism and asynchrony in AC algorithms theoretically for the first time. Numerical tests on synthetically generated instances and OpenAI Gym environments are provided to verify our theoretical analysis.

1 INTRODUCTION

Reinforcement learning (RL) has achieved impressive performance in many domains such as robotics [1, 2] and video games [3]. However, these empirical successes often come at the expense of significant computation. To unlock high computation capabilities, state-of-the-art RL approaches rely on sampling data from massive parallel simulators on multiple machines [3, 4, 5]. Empirically, these approaches can stabilize the learning process and reduce training time when they are implemented in an asynchronous manner. One popular RL method that often achieves the best empirical performance is the asynchronous variant of the actor-critic (AC) algorithm, referred to as A3C [3]. A3C builds on the original AC algorithm [6]. At a high level, AC simultaneously performs policy optimization (a.k.a. the actor step) using the policy gradient method [7] and policy evaluation (a.k.a. the critic step) using the temporal difference (TD) learning algorithm [8]. To ensure scalability, both the actor and critic steps can be combined with various function approximation techniques. To ensure stability, AC is often implemented in a two-timescale fashion, where the actor step runs on the slow timescale and the critic step runs on the fast timescale. Similar to other on-policy RL algorithms, AC uses samples generated from the target policy. Thus, data sampling is entangled with the learning procedure, which generates significant overhead. To speed up the sampling process of AC, A3C introduces multiple workers with a shared policy, and each worker has its own simulator to perform data sampling. The shared policy can then be updated using samples collected from multiple workers. Despite the tremendous empirical success achieved by A3C, to the best of our knowledge, its theoretical properties are not well understood. The following theoretical questions remain unclear: Q1) Under what assumptions does A3C converge? Q2) What is its convergence rate? Q3) Can A3C obtain a benefit (or speedup) from parallelism and asynchrony?
For Q3 ) , we are interested in the training time linear speedup with N workers , which is the ratio between the training time using a single worker and that using N workers . Since asynchronous parallelism mitigates the effect of stragglers and keeps all workers busy , the training time speedup can be measured roughly by the sample ( i.e. , computational ) complexity linear speedup [ 9 ] , given by Speedup ( N ) = sample complexity when using one worker average sample complexity per worker when using N workers . ( 1 ) If Speedup ( N ) = Θ ( N ) , the speedup is linear , and the training time roughly reduces linearly as the number of workers increases . This paper aims to answer these questions , towards the goal of providing theoretical justification for the empirical successes of parallel and asynchronous RL . 1.1 RELATED WORKS Analysis of actor critic algorithms . AC method was first proposed by [ 6 , 10 ] , with asymptotic convergence guarantees provided in [ 6 , 10 , 11 ] . It was not until recently that the non-asymptotic analyses of AC have been established . The finite-sample guarantee for the batch AC algorithm has been established in [ 12 , 13 ] with i.i.d . sampling . Later , in [ 14 ] , the finite-sample analysis was established for the double-loop nested AC algorithm under the Markovian setting . An improved analysis for the Markovian setting with minibatch updates has been presented in [ 15 ] for the nested AC method . More recently , [ 16 , 17 ] have provided the first finite-time analyses for the two-timescale AC algorithms under Markov sampling , with both Õ ( −2.5 ) sample complexity , which is the bestknown sample complexity for two-timescale AC . Through the lens of bi-level optimization , [ 18 ] has also provided finite-sample guarantees for this two-timescale Markov sampling setting , with global optimality guarantees when a natural policy gradient step is used in the actor . However , none of the existing works has analyzed the effect of the asynchronous and parallel updates in AC . Empirical parallel and distributed AC . In [ 3 ] , the original A3C method was proposed and became the workhorse in empirical RL . Later , [ 19 ] has provided a GPU-version of A3C which significantly decreases training time . Recently , the A3C algorithm is further optimized in modern computers by [ 20 ] , where a large batch variant of A3C with improved efficiency is also proposed . In [ 21 ] , an importance weighted distributed AC algorithm IMPALA has been developed to solve a collection of problems with one single set of parameters . Recently , a gossip-based distributed yet synchronous AC algorithm has been proposed in [ 5 ] , which has achieved the performance competitive to A3C . Asynchronous stochastic optimization . For solving general optimization problems , asynchronous stochastic methods have received much attention recently . The study of asynchronous stochastic methods can be traced back to 1980s [ 22 ] . With the batch size M , [ 23 ] analyzed asynchronous SGD ( async-SGD ) for convex functions , and derived a convergence rate of O ( K− 12M− 12 ) if delay K0 is bounded by O ( K 1 4M− 3 4 ) . This result implies linear speedup . [ 24 ] extended the analysis of [ 23 ] to smooth convex with nonsmooth regularization and derived a similar rate . Recent studies by [ 25 ] improved upper bound of K0 to O ( K 1 2M− 1 2 ) . 
However , all these works have focused on the single-timescale SGD with a single variable , which can not capture the stochastic recursion of the AC and A3C algorithms . To best of our knowledge , non-asymptotic analysis of asynchronous two-timescale SGD has remained unaddressed , and its speedup analysis is even an uncharted territory . 1.2 THIS WORK In this context , we revisit A3C with TD ( 0 ) for the critic update , termed A3C-TD ( 0 ) . The hope is to provide non-asymptotic guarantee and linear speedup justification for this popular algorithm . Our contributions . Compared to the existing literature on both the AC algorithms and the asyncSGD , our contributions can be summarized as follows . c1 ) We revisit two-timescale A3C-TD ( 0 ) and establish its convergence rates with both i.i.d . and Markovian sampling . To the best of our knowledge , this is the first non-asymptotic convergence result for asynchronous parallel AC algorithms . c2 ) We characterize the sample complexity of A3C-TD ( 0 ) . In i.i.d . setting , A3C-TD ( 0 ) achieves a sample complexity of O ( −2.5/N ) per worker , where N is the number of workers . Compared to the best-known complexity of O ( −2.5 ) for i.i.d . two-timescale AC [ 18 ] , A3C-TD ( 0 ) achieves linear speedup , thanks to the parallelism and asynchrony . In the Markovian setting , if delay is bounded , the sample complexity of A3C-TD ( 0 ) matches the order of the non-parallel AC algorithm [ 17 ] . c3 ) We test A3C-TD ( 0 ) on the synthetically generated environment to verify our theoretical guarantees with both i.i.d . and Markovian sampling . We also test A3C-TD ( 0 ) on the classic control tasks and Atari Games from OpenAI Gym . Code is available in the supplementary material . Technical challenges . Compared to the recent convergence analysis of nonparallel two-timescale AC in [ 16 , 17 , 18 ] , several new challenges arise due to the parallelism and asynchrony . Markovian noise coupled with asynchrony and delay . The analysis of two-timescale AC algorithm is non-trivial because of the Markovian noise coupled with both the actor and critic steps . Different from the nonparallel AC that only involves a single Markov chain , asynchronous parallel AC introduces multiple Markov chains ( one per worker ) that mix at different speed . This is because at a given iteration , workers collect different number of samples and thus their chains mix to different degrees . As we will show later , the worker with the slowest mixing chain will determine the convergence . Linear speedup for SGD with two coupled sequences . Parallel async-SGD has been shown to achieve linear speedup recently [ 9 , 26 ] . Different from async-SGD , asynchronous AC is a two-timescale stochastic semi-gradient algorithm for solving the more challenging bilevel optimization problem ( see [ 18 ] ) . The errors induced by asynchrony and delay are intertwined with both actor and critic updates via a nested structure , which makes the sharp analysis more challenging . Our linear speedup analysis should be also distinguished from that of mini-batch async-SGD [ 27 ] , where the speedup is a result of variance reduction thanks to the larger batch size generated by parallel workers . 
2 PRELIMINARIES 2.1 MARKOV DECISION PROCESS AND POLICY GRADIENT THEOREM RL problems are often modeled as an MDP described byM = { S , A , P , r , γ } , where S is the state space , A is the action space , P ( s′|s , a ) is the probability of transitioning to s′ ∈ S given current state s ∈ S and action a ∈ A , and r ( s , a , s′ ) is the reward associated with the transition ( s , a , s′ ) , and γ ∈ [ 0 , 1 ) is a discount factor . Throughout the paper , we assume the reward r is upper-bounded by a constant rmax . A policy π : S → ∆ ( A ) is defined as a mapping from the state space S to the probability distribution over the action space A . Considering discrete time t in an infinite horizon , a policy π can generate a trajectory of state-action pairs ( s0 , a0 , s1 , a1 , . . . ) with at ∼ π ( ·|st ) and st+1 ∼ P ( ·|st , at ) . Given a policy π , we define the state and state action value functions as Vπ ( s ) : = E [ ∞∑ t=0 γtr ( st , at , st+1 ) | s0 = s ] , Qπ ( s , a ) : = E [ ∞∑ t=0 γtr ( st , at , st+1 ) | s0 = s , a0 = a ] ( 2 ) where E is taken over the trajectory ( s0 , a0 , s1 , a1 , . . . ) generated under policy π . With the above definitions , the advantage function is Aπ ( s , a ) : = Qπ ( s , a ) − Vπ ( s ) . With η denoting the initial state distribution , the discounted state visitation measure induced by policy π is defined as dπ ( s ) : = ( 1 − γ ) ∑∞ t=0 γ tP ( st = s | s0 ∼ η , π ) , and the discounted state action visitation measure is d′π ( s , a ) = ( 1− γ ) ∑∞ t=0 γ tP ( st = s | s0 ∼ η , π ) π ( a|s ) . The goal of RL is to find a policy that maximizes the expected accumulative reward J ( π ) : = Es∼η [ Vπ ( s ) ] . When the state and action spaces are large , finding the optimal policy π becomes computationally intractable . To overcome the inherent difficulty of learning a function , the policy gradient methods search the best performing policy over a class of parameterized policies . We parameterize the policy with parameter θ ∈ Rd , and solve the optimization problem as max θ∈Rd J ( θ ) with J ( θ ) : = E s∼η [ Vπθ ( s ) ] . ( 3 ) To maximize J ( θ ) with respect to θ , one can update θ using the policy gradient direction given by [ 7 ] ∇J ( θ ) = E s , a∼d′ θ [ Aπθ ( s , a ) ψθ ( s , a ) ] , ( 4 ) where ψθ ( s , a ) : = ∇ log πθ ( a|s ) , and d′θ : = ( 1 − γ ) ∑∞ t=0 γ tP ( st = s | s0 , πθ ) πθ ( a|s ) . Since computing E in ( 4 ) is expensive if not impossible , popular policy gradient-based algorithms iteratively update θ using stochastic estimate of ( 4 ) such as REINFORCE [ 28 ] and G ( PO ) MDP [ 29 ] . 2.2 ACTOR-CRITIC ALGORITHM WITH VALUE FUNCTION APPROXIMATION Both REINFORCE and G ( PO ) MDP-based policy gradient algorithms rely on a Monte-Carlo estimate of the value function Vπθ ( s ) and thus ∇J ( θ ) by generating a trajectory per iteration . However , policy gradient methods based on Monte-Carlo estimate typically suffer from high variance and large sampling cost . An alternative way is to recursively refine the estimate of Vπθ ( s ) . For a policy πθ , it is known that Vπθ ( s ) satisfies the Bellman equation [ 30 ] , that is Vπθ ( s ) = E a∼πθ ( ·|s ) , s′∼P ( ·|s , a ) [ r ( s , a , s′ ) + γVπθ ( s ′ ) ] , ∀s ∈ S. ( 5 ) In practice , when the state space S is prohibitively large , one can not afford the computational and memory complexity of computing Vπθ ( s ) and Aπθ ( s , a ) . To overcome this curse-of-dimensionality , a popular method is to approximate the value function using function approximation techniques . 
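For concreteness, the score function ψθ(s, a) = ∇θ log πθ(a|s) appearing in (4) has a simple closed form under the tabular softmax parameterization, which is also the parameterization used in the synthetic experiments of Section 5.1. The following NumPy sketch is illustrative only; the function names and the explicit advantage argument are stand-ins, not part of the released code:

```python
import numpy as np

def softmax_policy(theta, s):
    """Action distribution pi_theta(.|s) for a tabular softmax policy theta in R^{|S| x |A|}."""
    z = theta[s] - theta[s].max()          # stabilized logits for state s
    p = np.exp(z)
    return p / p.sum()

def score(theta, s, a):
    """psi_theta(s, a) = grad_theta log pi_theta(a|s); nonzero only in the row of state s."""
    g = np.zeros_like(theta)
    g[s] = -softmax_policy(theta, s)
    g[s, a] += 1.0
    return g

def pg_estimate(theta, s, a, advantage):
    """One-sample estimate of the policy gradient in (4): advantage * psi_theta(s, a)."""
    return advantage * score(theta, s, a)
```

REINFORCE and G(PO)MDP would plug a Monte-Carlo return estimate into the advantage slot, whereas the actor-critic updates described next replace it by a TD error computed from an approximate value function.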
Given the state feature mapping φ ( · ) : S −→ Rd′ for some d′ > 0 , we approximate the value function linearly as Vπθ ( s ) ≈ V̂ω ( s ) : = φ ( s ) > ω , where ω ∈ Rd ′ is the critic parameter . Given a policy πθ , the task of finding the best ω such that Vπθ ( s ) ≈ V̂ω ( s ) is usually addressed by TD learning [ 8 ] . Defining the kth transition as xk : = ( sk , ak , sk+1 ) and the corresponding TD-error as δ̂ ( xk , ωk ) : = r ( sk , ak , sk+1 ) + γφ ( sk+1 ) > ωk − φ ( sk ) > ωk , the parameter ω is updated via ωk+1 = ΠRω ( ωk + βkg ( xk , ωk ) ) with g ( xk , ωk ) : = δ̂ ( xk , ωk ) ∇ωk V̂ωk ( sk ) ( 6 ) where βk is the critic stepsize , and ΠRω is the projection with Rω being a pre-defined constant . The projection step is often used to control the norm of the gradient . In AC , it prevents the actor and critic updates from going a too large step in the ‘ wrong ’ direction ; see e.g. , [ 6 , 16 , 17 ] . Using the definition of advantage function Aπθ ( s , a ) = Es′∼P [ r ( s , a , s′ ) + γVπθ ( s′ ) ] − Vπθ ( s ) , we can also rewrite ( 4 ) as ∇J ( θ ) = Es , a∼d′θ , s′∼P [ ( r ( s , a , s ′ ) + γVπθ ( s ′ ) − Vπθ ( s ) ) ψθ ( s , a ) ] . Leveraging the value function approximation , we can then approximate the policy gradient as ∇̂J ( θ ) = ( r ( s , a , s′ ) + γV̂ω ( s ′ ) − V̂ω ( s ) ) ψθ ( s , a ) = δ̂ ( x , ω ) ψθ ( s , a ) ( 7 ) which gives rise to the policy update θk+1 = θk + αkv ( xk , θk , ωk ) with v ( xk , θk , ωk ) : = δ̂ ( xk , ωk ) ψθk ( sk , ak ) ( 8 ) where αk is the stepsize for the actor update . To ensure convergence when simultaneously performing critic and actor updates , the stepsizes αk and βk often decay at two different rates , which is referred to the two-timescale AC [ 17 , 18 ] . 3 ASYNCHRONOUS ADVANTAGE ACTOR CRITIC WITH TD ( 0 ) To speed up the training process , we implement AC over N workers in a shared memory setting without coordinating among workers — a setting similar to that in A3C [ 3 ] . Each worker has its own simulator to perform sampling , and then collaboratively updates the shared policy πθ using AC updates . As there is no synchronization after each update , the policy used by workers to generate samples may be outdated , which introduces staleness . Notations on transition ( s , a , s′ ) . Since each worker will maintain a separate Markov chain , we thereafter use subscription t in ( st , at , st+1 ) to indicate the tth transition on a Markov chain . We use k to denote the global counter ( or iteration ) , which increases by one whenever a worker finishes the actor and critic updates in the shared memory . We use subscription ( k ) in ( s ( k ) , a ( k ) , s′ ( k ) ) to indicate the transition used in the kth update . Specifically , we initialize θ0 , ω0 in the shared memory . Each worker will initialize the simulator with initial state s0 . Without coordination , workers will read θ , ω in the shared memory . From each worker ’ s view , it then generates sample ( st , at , st+1 ) by either sampling st from µθ ( · ) , where µθ ( · ) is the stationary distribution of an artificial MDP with transition probability measure P̃ ( ·|st , at ) : = γP ( ·|st , at ) + ( 1 − γ ) η ( · ) , or , sampling st from a Markov chain under policy πθ . In both cases , each worker obtains at ∼ πθ ( ·|st ) and st+1 ∼ P̃ ( ·|st , at ) . Sampling st+1 from P̃ ( ·|st , at ) can be achieved by sampling st+1 from η ( · ) with probability 1− γ and from P ( ·|st , at ) otherwise . Once Algorithm 1 Asynchronous advantage AC with TD ( 0 ) : each worker ’ s view . 
1 : Global initialize : Global counter k = 0 , initial θ0 , ω0 in the shared memory . 2 : Worker initialize : Local counter t = 0 . Obtain initial state s0 . 3 : for t = 0 , 1 , 2 , · · · do 4 : Read θ , ω in the shared memory . 5 : Option 1 ( i.i.d . sampling ) : 6 : Sample st ∼ µθ ( · ) , at ∼ πθ ( ·|s ) , st+1 ∼ P̃ ( ·|st , at ) . 7 : Option 2 ( Markovian sampling ) : 8 : Sample at ∼ πθ ( ·|st ) , st+1 ∼ P̃ ( ·|st , at ) . 9 : Compute δ̂ ( xt , ω ) = r ( st , at , st+1 ) + γV̂ω ( st+1 ) − V̂ω ( st ) . 10 : Compute g ( xt , ω ) = δ̂ ( xt , ω ) ∇ωV̂ω ( st ) . 11 : Compute v ( xt , θ , ω ) = δ̂ ( xt , ω ) ψθ ( st , at ) . 12 : In the shared memory , perform update ( 9 ) . 13 : end for obtaining xt : = ( st , at , st+1 ) , each worker locally computes the policy gradient v ( xt , θ , ω ) and the TD ( 0 ) update g ( xt , ω ) , and then updates the parameters in shared memory asynchronously by ωk+1 = ΠRω ( ωk + βkg ( x ( k ) , ωk−τk ) ) , ( 9a ) θk+1 = θk + αkv ( x ( k ) , θk−τk , ωk−τk ) , ( 9b ) where τk is the delay in the kth actor and critic updates . See the A3C with TD ( 0 ) in Algorithm 1 . Sampling distributions . Since the transition kernel required by the actor and critic updates are different in the discounted MDP , it is difficult to design a two-timescale AC algorithm . To address this issue , we adopt the sampling method introduced in the seminal work [ 6 , 31 ] and the recent work [ 15 , 16 ] , which inevitably introduces bias by sampling from the artificial transition P̃ instead of P . However , as we will mention later , this extra bias is small when the discount factor γ is close to 1 . Parallel sampling . The AC update ( 6 ) and ( 8 ) uses samples generated “ on-the-fly ” from the target policy πθ , which brings overhead . Compared with ( 6 ) and ( 8 ) , the A3C-TD ( 0 ) update ( 9 ) allows parallel sampling from N workers , which is the key to linear speedup . We consider the case where only one worker can update parameters in the shared memory at the same time and the update can not be interrupted . In practice , ( 9 ) can also be performed in a mini-batch fashion . Minor differences from A3C [ 3 ] . The A3C-TD ( 0 ) algorithm resembles the popular A3C method [ 3 ] . With nmax denoting the horizon of steps , for n ∈ { 1 , ... , nmax } , A3C iteratively uses n-step TD errors to compute actor and critic gradients . In A3C-TD ( 0 ) , we use the TD ( 0 ) method which is the 1-step TD method for actor and critic update . When nmax = 1 , A3C method reduces to A3C-TD ( 0 ) . The n-step TD method is a hybrid version of the TD ( 0 ) method and the Monte-Carlo method . The A3C method with Monte-Carlo sampling is essentially the delayed policy gradient method , and thus its convergence follows directly from the delayed SGD . Therefore , we believe that the convergence of the A3C method based on TD ( 0 ) in this paper can be easily extended to the convergence of the A3C method with n-step TD . We here focus on A3C with TD ( 0 ) just for ease of exposition . 4 CONVERGENCE ANALYSIS OF TWO-TIMESCALE A3C-TD ( 0 ) In this section , we analyze the convergence of A3C-TD ( 0 ) in both i.i.d . and Markovian settings . Throughout this section , the notation O ( · ) contains constants that are independent of N and K0 . To analyze the performance of A3C-TD ( 0 ) , we make the following assumptions . Assumption 1 . There exists K0 such that the delay at each iteration is bounded by τk ≤ K0 , ∀k . 
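Before turning to the analysis, the following sketch makes the per-worker computation of Algorithm 1 (steps 4-12) and the shared-memory update (9) concrete for the case of linear critic features. It is a single-process illustration under stated assumptions: the shared parameters are plain arrays, delays and worker contention are not modeled, and the `env`, `policy`, and `phi` interfaces are hypothetical stand-ins rather than the interfaces of the released code.

```python
import numpy as np

def worker_step(shared, env, policy, phi, gamma, alpha_k, beta_k, R_omega):
    """One asynchronous A3C-TD(0) step from a worker's view (Algorithm 1, steps 4-12).

    `shared` holds the actor/critic parameters (theta, omega) read without coordination;
    `phi` is the state feature map, `policy` exposes sample(theta, s) and score(theta, s, a).
    """
    theta, omega = shared["theta"].copy(), shared["omega"].copy()   # step 4: read shared memory

    s = env.state
    a = policy.sample(theta, s)                                     # a ~ pi_theta(.|s)
    # s' ~ P_tilde(.|s, a): restart from eta with probability 1 - gamma, otherwise follow P
    s_next = env.reset_state() if np.random.rand() > gamma else env.step(s, a)
    r = env.reward(s, a, s_next)

    delta = r + gamma * phi(s_next) @ omega - phi(s) @ omega        # step 9: TD(0) error
    g = delta * phi(s)                                              # step 10: critic semi-gradient
    v = delta * policy.score(theta, s, a)                           # step 11: actor direction

    # step 12: asynchronous update (9) in shared memory, with projection onto the R_omega ball
    omega_new = shared["omega"] + beta_k * g
    norm = np.linalg.norm(omega_new)
    shared["omega"] = omega_new if norm <= R_omega else omega_new * (R_omega / norm)
    shared["theta"] = shared["theta"] + alpha_k * v

    env.state = s_next
```

In an actual multi-worker run, the local gradients are computed from stale copies of (theta, omega) while the update is applied to whatever values currently sit in shared memory, which is exactly the staleness τk captured in (9).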
Assumption 1 ensures the viability of analyzing the asynchronous update ; see the same assumption in e.g. , [ 5 , 25 ] . In practice , the delay usually scales as the number of workers , that is K0 = Θ ( N ) . With P̃πθ ( s′|s ) = ∑ a∈A P̃ ( s′|s , a ) πθ ( a|s ) , we define that : Aθ , φ : = E s∼µθ , s′∼P̃πθ [ φ ( s ) ( γφ ( s′ ) − φ ( s ) ) > ] , bθ , φ : = E s∼µθ , a∼πθ , s′∼P̃ [ r ( s , a , s′ ) φ ( s ) ] . ( 10 ) It is known that for a given θ , the stationary point ω∗θ of the TD ( 0 ) update in Algorithm 1 satisfies Aθ , φω ∗ θ + bθ , φ = 0 . ( 11 ) Assumption 2 . For all s ∈ S , the feature vector φ ( s ) is normalized so that ‖φ ( s ) ‖2 ≤ 1 . For all γ ∈ [ 0 , 1 ] and θ ∈ Rd , Aθ , φ is negative definite and its max eigenvalue is upper bounded by −λ . Assumption 2 is common in analyzing TD with linear function approximation ; see e.g. , [ 17 , 32 , 33 ] . With this assumption , Aθ , φ is invertible , so we have ω∗θ = −A −1 θ , φbθ , φ . Define Rω : = rmax/λ , then we have ‖ω∗θ‖2 ≤ Rω . It justifies the projection introduced in Algorithm 1 . In practice , the projection radius Rω can be estimated online by methods proposed in [ 32 , Section 8.2 ] or [ 34 , Lemma 1 ] . Assumption 3 . For any θ , θ′ ∈ Rd , s ∈ S and a ∈ A , there exist constants such that : i ) ‖ψθ ( s , a ) ‖2 ≤ Cψ ; ii ) ‖ψθ ( s , a ) − ψθ′ ( s , a ) ‖2 ≤ Lψ‖θ − θ′‖2 ; iii ) |πθ ( a|s ) − πθ′ ( a|s ) | ≤ Lπ‖θ − θ′‖2 . Assumption 3 is common in analyzing policy gradient-type algorithms which has also been made by e.g. , [ 34 , 35 , 36 ] . This assumption holds for many policy parameterization methods such as tabular softmax policy [ 36 ] , Gaussian policy [ 37 ] and Boltzmann policy [ 31 ] . Assumption 4 . For any θ ∈ Rd , the Markov chain under policy πθ and transition kernel P ( ·|s , a ) or P̃ ( ·|s , a ) is irreducible and aperiodic . Then there exist constants κ > 0 and ρ ∈ ( 0 , 1 ) such that sup s∈S dTV ( P ( st ∈ ·|s0 = s , πθ ) , µθ ) ≤ κρt , ∀t ( 12 ) where µθ is the stationary state distribution under πθ , and st is the state of Markov chain at time t. Assumption 4 assumes the Markov chain mixes at a geometric rate ; see also [ 32 , 33 ] . The stationary distribution µθ of an artificial Markov chain with transition P̃ is the same as the discounted visitation measure dθ of the Markov chain with transition P [ 6 ] . This means that if we sample according to at ∼ πθ ( ·|st ) , st+1 ∼ P̃ ( ·|st , at ) , the marginal distribution of ( st , at ) will converge to the discounted state-action visitation measure d′θ ( s , a ) , which allows us to control the gradient bias . 4.1 LINEAR SPEEDUP RESULT WITH I.I.D . SAMPLING In this section , we consider A3C-TD ( 0 ) under the i.i.d . sampling setting , which is widely used for analyzing RL algorithms ; see e.g. , [ 13 , 18 , 38 ] . We first give the convergence result of critic update as follows . Theorem 1 ( Critic convergence ) . Suppose Assumptions 1–4 hold . Consider Algorithm 1 with i.i.d . sampling and V̂ω ( s ) = φ ( s ) > ω . Select step size αk = c1 ( 1+k ) σ1 , βk = c2 ( 1+k ) σ2 , where 0 < σ2 < σ1 < 1 and c1 , c2 are positive constants . Then it holds that 1 K K∑ k=1 E ∥∥ωk − ω∗θk∥∥22=O ( 1K1−σ2 ) +O ( 1 K2 ( σ1−σ2 ) ) +O ( K20 K2σ2 ) +O ( K0 Kσ1 ) +O ( 1 Kσ2 ) . ( 13 ) Different from async-SGD ( e.g. , [ 9 ] ) , the optimal critic parameter ω∗θ is constantly drifting as θ changes at each iteration . This necessitates setting σ1 > σ2 to make the policy change slower than the critic , which can be observed in the second term in ( 13 ) . 
If σ1 > σ2 , then the policy is static relative to the critic in an asymptotic sense . To introduce the convergence of actor update , we first define the critic approximation error as app : = max θ∈Rd √ E s∼µθ |Vπθ ( s ) − V̂ω∗θ ( s ) | 2 ≤ fa + sp , ( 14 ) where µθ is the stationary distribution under πθ and P̃ . The error app captures the quality of the critic approximation under Algorithm 1 . It can be further decomposed into the function approximation error fa , which is common in analyzing AC with function approximation [ 14 , 15 , 17 ] , and the sampling error sp = O ( 1− γ ) , which is unique in analyzing two-timescale AC for a discounted MDP . The error app is small when the value function approximation is accurate and the discounting factor γ is close to 1 ; see the detailed derivations in Lemma 7 of supplementary material . Now we are ready to give the actor convergence . Theorem 2 ( Actor convergence ) . Under the same assumptions of Theorem 1 , select step size αk = c1 ( 1+k ) σ1 , βk = c2 ( 1+k ) σ2 , where 0 < σ2 < σ1 < 1 and c1 , c2 are positive constants . Then it holds that 1 K K∑ k=1 E ‖∇J ( θk ) ‖22 = O ( 1 K1−σ1 ) +O ( K0 Kσ1 ) +O ( K20 K2σ2 ) +O ( 1 K K∑ k=1 E ‖ωk − ω∗θk‖ 2 2 ) +O ( app ) . ( 15 ) Different from the analysis of async-SGD , in actor update , the stochastic gradient v ( x , θ , ω ) is biased because of inexact value function approximation . The bias introduced by the critic optimality gap and the function approximation error correspond to the last two terms in ( 15 ) . In Theorem 1 and Theorem 2 , optimizing σ1 and σ2 gives the following convergence rate . Corollary 1 ( Linear speedup ) . Given Theorem 1 and Theorem 2 , select σ1 = 35 and σ2 = 2 5 . If we further assume K0 = O ( K 1 5 ) , then it holds that 1 K K∑ k=1 E ‖∇J ( θk ) ‖22 = O ( K− 2 5 ) +O ( app ) ( 16 ) where O ( · ) contains constants that are independent of N and K0 . By setting the first term in ( 16 ) to , we get the total iteration complexity to reach -accuracy is O ( −2.5 ) . Since each iteration only uses one sample ( one transition ) , it also implies a total sample complexity of O ( −2.5 ) . Then the average sample complexity per worker is O ( −2.5/N ) which indicates linear speedup in ( 1 ) . Intuitively , the negative effect of parameter staleness introduced by parallel asynchrony vanishes asymptotically , which implies linear speedup in terms of convergence . 4.2 CONVERGENCE RESULT WITH MARKOVIAN SAMPLING Theorem 3 ( Critic convergence ) . Suppose Assumptions 1–4 hold . Consider Algorithm 1 with Markovian sampling and V̂ω ( s ) = φ ( s ) > ω . Select step size αk = c1 ( 1+k ) σ1 , βk = c2 ( 1+k ) σ2 , where 0 < σ2 < σ1 < 1 and c1 , c2 are positive constants . Then it holds that 1 K K∑ k=1 E ∥∥ωk − ω∗θk∥∥22 = O ( 1K1−σ2 ) +O ( 1 K2 ( σ1−σ2 ) ) +O ( K20 K2σ2 ) +O ( K20 log 2K Kσ1 ) +O ( K0 logK Kσ2 ) . ( 17 ) The following theorem gives the convergence rate of actor update in Algorithm 1 . Theorem 4 ( Actor convergence ) . Under the same assumptions of Theorem 3 , select step size αk = c1 ( 1+k ) σ1 , βk = c2 ( 1+k ) σ2 , where 0 < σ2 < σ1 < 1 and c1 , c2 are positive constants . Then it holds that 1 K K∑ k=1 E ‖∇J ( θk ) ‖22 =O ( 1 K1−σ1 ) +O ( K20 log 2K Kσ1 ) +O ( K20 K2σ2 ) +O ( 1 K K∑ k=1 E ‖ωk − ω∗θk‖ 2 2 ) +O ( app ) . ( 18 ) Assume K0 = O ( K 1 5 ) . Given Theorem 3 , select σ1 = 35 and σ2 = 2 5 , then it holds that 1 K K∑ k=1 E ‖∇J ( θk ) ‖22 = Õ ( K0K − 2 5 ) +O ( app ) , ( 19 ) where Õ ( · ) hides constants and the logarithmic order of K. 
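The step-size choices in Theorems 1-4 and Corollary 1 are plain polynomial decays with the actor step size decaying faster than the critic step size, so that the policy moves on the slower timescale (σ1 = 3/5 and σ2 = 2/5 in Corollary 1). A minimal sketch, where the constants c1 and c2 are tuning choices (the appendix reports 0.05 for the synthetic tests and 0.01 for CartPole):

```python
def actor_critic_stepsizes(k, c1=0.05, c2=0.05, sigma1=0.6, sigma2=0.4):
    """Two-timescale schedules alpha_k = c1/(1+k)^sigma1 and beta_k = c2/(1+k)^sigma2.

    Choosing sigma1 > sigma2 makes the actor decay faster than the critic, matching
    sigma1 = 3/5 and sigma2 = 2/5 used in Corollary 1 and in the Markovian results above.
    """
    alpha_k = c1 / (1.0 + k) ** sigma1   # actor step size
    beta_k = c2 / (1.0 + k) ** sigma2    # critic step size
    return alpha_k, beta_k
```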
With Markovian sampling, the stochastic gradients g(x, ω) and v(x, θ, ω) are biased, and the bias decreases as the Markov chain mixes. The mixing time corresponds to the logarithmic term logK in (17) and (18). Because of asynchrony, at a given iteration, workers have collected different numbers of samples and their chains mix to different degrees. The worker with the slowest mixing chain determines the rate of convergence. The product of K0 and logK in (17) and (18) appears due to this slowest mixing chain. As the last term in (17) dominates the other terms asymptotically, the convergence rate degrades as the number of workers increases. While the theoretical linear speedup is difficult to establish in the Markovian setting, we demonstrate it empirically in Section 5.2.

5 NUMERICAL EXPERIMENTS

We test the speedup performance of A3C-TD(0) on both synthetically generated and OpenAI Gym environments. The settings, parameters, and code are provided in the supplementary material.

5.1 A3C-TD(0) IN SYNTHETIC ENVIRONMENT

To verify the theoretical results, we tested A3C-TD(0) with linear value function approximation in a synthetic environment. We use the tabular softmax policy parameterization [36], which satisfies Assumption 3. The MDP has a state space of size |S| = 100 and a discrete action space of size |A| = 5. Each state feature has dimension 10. The entries of the transition matrix, the rewards, and the state features are randomly sampled from a uniform distribution over (0, 1). We evaluate the convergence of the actor and the critic with the running average of the test reward and the critic optimality gap ‖ωk − ω∗θk‖2, respectively. Figures 1 and 2 show the training time and sample complexity of running A3C-TD(0) with i.i.d. sampling and Markovian sampling, respectively. The speedup plot is measured by the number of samples needed to achieve a target running average reward under different numbers of workers. All results are averaged over 10 Monte-Carlo runs. Figure 1 shows that the sample complexity of A3C-TD(0) stays about the same with different numbers of workers under i.i.d. sampling. It can also be observed from the speedup plot of Figure 1 that A3C-TD(0) achieves roughly linear speedup with i.i.d. sampling, which is consistent with Corollary 1. The speedup of A3C-TD(0) with Markovian sampling, shown in Figure 2, is roughly linear when the number of workers is small.

5.2 A3C-TD(0) IN OPENAI GYM ENVIRONMENTS

We have also tested A3C-TD(0) with neural network parameterization in the classic control (CartPole) environment and the Atari game (Seaquest and Beamrider) environments. In Figures 3-5, each curve is generated by averaging over 5 Monte-Carlo runs with 95% confidence intervals. Figures 3-5 show the speedup of A3C-TD(0) under different numbers of workers, where the average reward is computed by taking the running average of test rewards. The speedup and runtime speedup plots are measured, respectively, by the number of samples and the training time needed to achieve a target running average reward under different numbers of workers. Although not justified theoretically, Figures 3-5 suggest that the sample complexity speedup is roughly linear, while the runtime speedup slightly degrades as the number of workers increases. This is partially due to our hardware limit. A similar observation has been made for async-SGD [9].
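To illustrate the synthetic setup of Section 5.1, the sketch below generates a random MDP of the stated size and computes the TD(0) stationary point ω∗θ = −A⁻¹θ,φ bθ,φ from (10)-(11), which is what the critic optimality gap ‖ωk − ω∗θk‖2 is measured against. It is a reconstruction under assumptions (state-action rewards r(s, a) rather than r(s, a, s′), γ = 0.95, row-normalized features to satisfy Assumption 2, and a fixed random seed), not the released code:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, d = 100, 5, 10
P = rng.uniform(size=(S, A, S)); P /= P.sum(axis=2, keepdims=True)   # transition kernel P(s'|s,a)
R = rng.uniform(size=(S, A))                                          # rewards r(s, a)
Phi = rng.uniform(size=(S, d))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)                     # ||phi(s)||_2 <= 1 (Assumption 2)
eta = np.full(S, 1.0 / S)                                              # initial state distribution
gamma = 0.95

def critic_stationary_point(pi, gamma=gamma):
    """omega*_theta = -A_{theta,phi}^{-1} b_{theta,phi} from (10)-(11) for a policy pi of shape (S, A).

    Uses the artificial kernel P_tilde = gamma * P + (1 - gamma) * eta and its stationary
    distribution mu_theta, obtained here by power iteration.
    """
    P_tilde = gamma * P + (1.0 - gamma) * eta[None, None, :]
    M = np.einsum('sa,sat->st', pi, P_tilde)           # state-to-state kernel under pi and P_tilde
    mu = np.full(S, 1.0 / S)
    for _ in range(2000):                               # power iteration to the stationary distribution
        mu = mu @ M
    E_next_phi = np.einsum('st,td->sd', M, Phi)         # E[phi(s') | s]
    A_mat = (mu[:, None] * Phi).T @ (gamma * E_next_phi - Phi)
    r_bar = (pi * R).sum(axis=1)                        # expected one-step reward per state under pi
    b_vec = (mu * r_bar) @ Phi
    return -np.linalg.solve(A_mat, b_vec)
```

Given the shared iterates (θk, ωk), the reported optimality gap is then simply np.linalg.norm(omega_k - critic_stationary_point(pi_theta_k)).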
6 CONCLUSIONS This paper revisits the A3C algorithm with TD ( 0 ) for the critic update , termed A3C-TD ( 0 ) . With linear value function approximation , the convergence of the A3C-TD ( 0 ) algorithm has been established under both i.i.d . and Markovian sampling settings . Under i.i.d . sampling , A3C-TD ( 0 ) achieves linear speedup compared to the best-known sample complexity of two-timescale AC , theoretically justifying the benefit of parallelism and asynchrony for the first time . Under Markov sampling , such a linear speedup can be observed in most classic benchmark tasks . REFERENCES [ 1 ] T. P. Lillicrap , J. J . Hunt , A. Pritzel , N. Heess , T. Erez , Y. Tassa , D. Silver , and D. Wierstra , “ Continuous control with deep reinforcement learning , ” in Proc . of International Conference on Learning Representations , 2016 . [ 2 ] V. Mnih , K. Kavukcuoglu , D. Silver , A . A. Rusu , J. Veness , M. G. Bellemare , A. Graves , M. Riedmiller , A. K. Fidjeland , G. Ostrovski et al. , “ Human-level control through deep reinforcement learning , ” Nature , vol . 518 , no . 7540 , p. 529 , 2015 . [ 3 ] V. Mnih , A. P. Badia , M. Mirza , A. Graves , T. P. Lillicrap , T. Harley , D. Silver , and K. Kavukcuoglu , “ Asynchronous methods for deep reinforcement learning. ” in Proc . of International Conference on Machine Learning , 2016 . [ 4 ] A. Nair , P. Srinivasan , S. Blackwell , C. Alcicek , R. Fearon , A . De Maria , V. Panneershelvam , M. Suleyman , C. Beattie , S. Petersen et al. , “ Massively parallel methods for deep reinforcement learning , ” arXiv preprint:1507.04296 , 2015 . [ 5 ] M. Assran , J. Romoff , N. Ballas , J. Pineau , and M. Rabbat , “ Gossip-based actor-learner architectures for deep reinforcement learning , ” in Proc . of Advances in Neural Information Processing Systems , 2019 . [ 6 ] V. Konda , Actor-critic algorithms . PhD thesis , Department of Electrical Engineering and Computer Science , Massachusetts Institute of Technology , 2002 . [ 7 ] R. Sutton , D. McAllester , S. Singh , and Y. Mansour , “ Policy gradient methods for reinforcement learning with function approximation. ” in Proc . of Advances in Neural Information Processing Systems , 2000 . [ 8 ] R. Sutton , “ Learning to predict by the methods of temporal differences , ” Machine Learning , vol . 3 , pp . 9–44 , 1988 . [ 9 ] X. Lian , H. Zhang , C. Hsieh , Y. Yijun Huang , and J. Liu , “ A comprehensive linear speedup analysis for asynchronous stochastic parallel optimization from zeroth-order to first-order , ” in Proc . of the Advances in Neural Information Processing Systems , 2016 . [ 10 ] V. Borkar and V. Konda , “ The actor-critic algorithm as multi-time-scale stochastic approximation , ” Sadhana , vol . 22 , no . 4 , pp . 525–543 , 1997 . [ 11 ] S. Bhatnagar , R. Sutton , M. Ghavamzadeh , and M. Lee , “ Natural actor critic algorithms , ” Automatica , vol . 45 , pp . 2471–2482 , 2009 . [ 12 ] Z. Yang , K. Zhang , M. Hong , and T. Başar , “ A finite sample analysis of the actor-critic algorithm , ” in Proc . of IEEE Conference on Decision and Control , 2018 , pp . 2759–2764 . [ 13 ] H. Kumar , A. Koppel , and A. Ribeiro , “ On the sample complexity of actor-critic method for reinforcement learning with function approximation , ” arXiv preprint:1910.08412 , 2019 . [ 14 ] S. Qiu , Z. Yang , J. Ye , and Z. Wang , “ On the finite-time convergence of actor-critic algorithm , ” in Optimization Foundations for Reinforcement Learning Workshop at Advances in Neural Information Processing Systems , 2019 . 
[ 15 ] T. Xu , Z. Wang , and Y. Liang , “ Improving sample complexity bounds for ( natural ) actor-critic algorithms , ” in Proc . of Advances in Neural Information Processing Systems , 2020 . [ 16 ] —— , “ Non-asymptotic convergence analysis of two time-scale ( natural ) actor-critic algorithms , ” arXiv preprint:2005.03557 , 2020 . [ 17 ] Y. Wu , W. Zhang , P. Xu , and Q. Gu , “ A finite time analysis of two time-scale actor critic methods , ” in Proc . of Advances in Neural Information Processing Systems , 2020 . [ 18 ] M. Hong , H.-T. Wai , Z. Wang , and Z. Yang , “ A two-timescale framework for bilevel optimization : Complexity analysis and application to actor-critic , ” arXiv preprint:2007.05170 , 2020 . [ 19 ] M. Babaeizadeh , I. Frosio , S. Tyree , J. Clemons , and J. Kautz , “ Reinforcement learning through asynchronous advantage actor-critic on a gpu , ” in Proc . of International Conference on Learning Representations , 2017 . [ 20 ] A. Stooke and P. Abbeel , “ Accelerated methods for deep reinforcement learning , ” arXiv preprint:1803.02811 , 2019 . [ 21 ] L. Espeholt , H. Soyer , R. Munos , K. Simonyan , V. Mnih , T. Ward , Y. Doron , V. Firoiu , T. Harley , I. Dunning , S. Legg , and K. Kavukcuoglu , “ Impala : Scalable distributed deep-rl with importance weighted actor-learner architectures , ” arXiv preprint:1802.01561 , 2018 . [ 22 ] D. Bertsekas and J. Tsitsiklis , Parallel and distributed computation : numerical methods . Prentice-Hall , 1989 . [ 23 ] A. Agarwal and J. Duchi , “ Distributed delayed stochastic optimization , ” in Proc . of Advances in Neural Information Processing Systems , 2011 . [ 24 ] H. Feyzmahdavian , A. Aytekin , and M. Johansson , “ An asynchronous mini-batch algorithm for regularized stochastic optimization , ” arXiv preprint:1505.04824 , 2015 . [ 25 ] X. Lian , Y. Huang , Y. Li , and J. Liu , “ Asynchronous parallel stochastic gradient for nonconvex optimization. ” in Proc . of Advances in Neural Information Processing Systems , 2015 . [ 26 ] T. Sun , R. Hannah , and W. Yin , “ Asynchronous coordinate descent under more realistic assumptions , ” in Proc . of Advances in Neural Information Processing Systems , 2017 . [ 27 ] X. Lian , C. Zhang , H. Zhang , C.-J . Hsieh , W. Zhang , and J. Liu , “ Can decentralized algorithms outperform centralized algorithms ? a case study for decentralized parallel stochastic gradient descent , ” in Proc . of Advances in Neural Information Processing Systems , 2017 . [ 28 ] R. J. Williams , “ Simple statistical gradient-following algorithms for connectionist reinforcement learning , ” Machine Learning , vol . 8 , no . 3-4 , pp . 229–256 , May 1992 . [ 29 ] J. Baxter and P. L. Bartlett , “ Infinite-horizon policy-gradient estimation , ” J . Artificial Intelligence Res. , vol . 15 , pp . 319–350 , 2001 . [ 30 ] R. S. Sutton and A. G. Barto , Reinforcement learning : An introduction . MIT Press , 2018 . [ 31 ] V. Konda and V. Borkar , “ Actor-critic–type learning algorithms for markov decision processes , ” SIAM Journal on Control and Optimization , vol . 38 , no . 1 , pp . 94–123 , 1999 . [ 32 ] J. Bhandari , D. Russo , and R. Singal , “ A finite time analysis of temporal difference learning with linear function approximation. ” in Proc . of Conference on Learning Theory , 2018 . [ 33 ] T. Xu , Z. Wang , Y. Zhou , and Y. Liang , “ Reanalysis of variance reduced temporal difference learning , ” in Proc . of International Conference on Learning Representations , 2020 . [ 34 ] S. Zou , T. Xu , and Y. 
Liang, “Finite-sample analysis for SARSA with linear function approximation,” in Proc. of Advances in Neural Information Processing Systems, 2019.
[35] K. Zhang, A. Koppel, H. Zhu, and T. Başar, “Global convergence of policy gradient methods to (almost) locally optimal policies,” arXiv preprint:1906.08383, 2019.
[36] A. Agarwal, S. M. Kakade, J. D. Lee, and G. Mahajan, “Optimality and approximation with policy gradient methods in Markov decision processes,” in Proc. of Thirty Third Conference on Learning Theory, 2020.
[37] K. Doya, “Reinforcement learning in continuous time and space,” Neural Computation, vol. 12, no. 1, pp. 219–245, 2000.
[38] R. Sutton, H. Maei, D. Precup, S. Bhatnagar, D. Silver, C. Szepesvári, and E. Wiewiora, “Fast gradient-descent methods for temporal-difference learning with linear function approximation,” in Proc. of International Conference on Machine Learning, 2009.
[39] A. Y. Mitrophanov, “Sensitivity and convergence of uniformly ergodic Markov chains,” Journal of Applied Probability, vol. 42, no. 4, pp. 1003–1014, 2005.
[40] Dgriff, “PyTorch implementation of A3C,” https://github.com/dgriff777/rl_a3c_pytorch, 2018.

Supplementary Material

A PRELIMINARY LEMMAS

A.1 GEOMETRIC MIXING

The operation p ⊗ q denotes the tensor product between two distributions p(x) and q(y), i.e., (p ⊗ q)(x, y) = p(x) · q(y).

Lemma 1. Suppose Assumption 4 holds for a Markov chain generated by the rule at ∼ πθ(·|st), st+1 ∼ P̃(·|st, at). For any θ ∈ Rd, we have

sup_{s0∈S} dTV( P((st, at, st+1) ∈ · | s0, πθ), µθ ⊗ πθ ⊗ P̃ ) ≤ κρ^t, (20)

where µθ(·) is the stationary distribution with policy πθ and transition kernel P̃(·|s, a).

Proof. We start with

sup_{s0∈S} dTV( P((st, at, st+1) ∈ · | s0, πθ), µθ ⊗ πθ ⊗ P̃ )
= sup_{s0∈S} dTV( P(st ∈ · | s0, πθ) ⊗ πθ ⊗ P̃, µθ ⊗ πθ ⊗ P̃ )
= sup_{s0∈S} (1/2) ∫_{s∈S} Σ_{a∈A} ∫_{s′∈S} | P(st = ds | s0, πθ) πθ(a|s) P̃(ds′|s, a) − µθ(ds) πθ(a|s) P̃(ds′|s, a) |
= sup_{s0∈S} (1/2) ∫_{s∈S} | P(st = ds | s0, πθ) − µθ(ds) | Σ_{a∈A} πθ(a|s) ∫_{s′∈S} P̃(ds′|s, a)
= sup_{s0∈S} dTV( P(st ∈ · | s0, πθ), µθ )
≤ κρ^t,

which completes the proof.

For use in the later proofs, given K > 0, we first define mK as:

mK := min{ m ∈ N+ : κρ^{m−1} ≤ min{αK, βK} }, (21)

where κ and ρ are the constants defined in (4). mK is the minimum number of samples needed for the Markov chain to approach the stationary distribution, so that the bias incurred by Markovian sampling is small enough.

A.2 AUXILIARY MARKOV CHAIN

The auxiliary Markov chain is a virtual Markov chain with no policy drifting, a technique developed in [34] to analyze stochastic approximation algorithms in non-stationary settings.

Lemma 2. Under Assumption 1 and Assumption 3, consider the update (9) in Algorithm 1 with Markovian sampling. For a given number of samples m, consider the Markov chain of the worker that contributes to the kth update:

st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−1−−−−−−→ at−m+1 · · · st−1 θk−d1−−−−→ at−1 P̃−→ st θk−d0−−−−→ at P̃−→ st+1,

where (st, at, st+1) = (s(k), a(k), s′(k)), and {dj}_{j=0}^{m} is some increasing sequence with d0 := τk. Given (st−m, at−m, st−m+1) and θk−dm, we construct its auxiliary Markov chain by repeatedly applying πθk−dm:

st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−−−−→ ãt−m+1 · · · s̃t−1 θk−dm−−−−→ ãt−1 P̃−→ s̃t θk−dm−−−−→ ãt P̃−→ s̃t+1.
Define xt : = ( st , at , st+1 ) , then we have : dTV ( P ( xt ∈ ·|θk−dm , st−m+1 ) , P ( x̃t ∈ ·|θk−dm , st−m+1 ) ) ≤ 1 2 |A|Lπ dm∑ i=τk E [ ‖θk−i − θk−dm‖2|θk−dm , st−m+1 ] . ( 22 ) Proof . Throughout the lemma , all expectations and probabilities are conditioned on θk−dm and st−m+1 . We omit this condition for convenience . First we have dTV ( P ( st+1 ∈ · ) , P ( s̃t+1 ∈ · ) ) = 1 2 ∫ s′∈S |P ( st+1 = ds′ ) − P ( s̃t+1 = ds′ ) | = 1 2 ∫ s′∈S ∣∣∣∣∣ ∫ s∈S ∑ a∈A P ( st = ds , at = a , st+1 = ds′ ) − P ( s̃t = ds , ãt = a , s̃t+1 = ds′ ) ∣∣∣∣∣ ≤ 1 2 ∫ s′∈S ∫ s∈S ∑ a∈A |P ( st = ds , at = a , st+1 = ds′ ) − P ( s̃t = ds , ãt = a , s̃t+1 = ds′ ) | = 1 2 ∫ s∈S ∑ a∈A ∫ s′∈S |P ( st = ds , at = a , st+1 = ds′ ) − P ( s̃t = ds , ãt = a , s̃t+1 = ds′ ) | = dTV ( P ( xt ∈ · ) , P ( x̃t ∈ · ) ) , ( 23 ) where the last second equality is due to Tonelli ’ s theorem . Next we have dTV ( P ( xt ∈ · ) , P ( x̃t ∈ · ) ) = 1 2 ∫ s∈S ∑ a∈A ∫ s′∈S |P ( st = ds , at = a , st+1 = ds′ ) − P ( s̃t = ds , ãt = a , s̃t+1 = ds′ ) | = 1 2 ∫ s∈S ∑ a∈A |P ( st = ds , at = a ) − P ( s̃t = ds , ãt = a ) | ∫ s′∈S P̃ ( st+1 = ds′|st = ds , at = a ) = 1 2 ∫ s∈S ∑ a∈A |P ( st = ds , at = a ) − P ( s̃t = ds , ãt = a ) | = dTV ( P ( ( st , at ) ∈ · ) , P ( ( s̃t , ãt ) ∈ · ) ) . ( 24 ) Due to the fact that θk−τk is dependent on st , we need to write P ( st , at ) as P ( st , at ) = ∫ θk−τk∈Rd P ( st , θk−τk , at ) = ∫ θ∈Rd P ( st ) P ( θk−τk = dθ|st ) πθk−τk ( at|st ) = P ( st ) ∫ θ∈Rd P ( θk−τk = dθ|st ) πθk−τk ( at|st ) = P ( st ) E [ πθk−τk ( at|st ) |st ] . Then we have dTV ( P ( ( st , at ) ∈ · ) , P ( ( s̃t , ãt ) ∈ · ) ) = 1 2 ∫ s∈S ∑ a∈A ∣∣∣P ( st = ds ) E [ πθk−τk ( at = a|st = ds ) |st = ds ] − P ( s̃t = ds ) πθk−dm ( ãt = a|s̃t = ds ) ∣∣∣ ≤ 1 2 ∫ s∈S ∑ a∈A ∣∣∣P ( st = ds ) E [ πθk−τk ( at = a|st = ds ) |st = ds ] − P ( st = ds ) πθk−dm ( at = a|st = ds ) ∣∣∣ + 1 2 ∫ s∈S ∑ a∈A ∣∣P ( st = ds ) πθk−dm ( ãt = a|s̃t = ds ) − P ( s̃t = ds ) πθk−dm ( ãt = a|s̃t = ds ) ∣∣ = 1 2 ∫ s∈S P ( st = ds ) ∑ a∈A ∣∣∣E [ πθk−τk ( at = a|st = ds ) |st = ds ] − πθk−dm ( at = a|st = ds ) ∣∣∣ + 1 2 ∫ s∈S |P ( st = ds ) − P ( s̃t = ds ) | . ( 25 ) Using Jensen ’ s inequality , we have dTV ( P ( ( st , at ) ∈ · ) , P ( ( s̃t , ãt ) ∈ · ) ) ≤ 1 2 ∫ s∈S P ( st = ds ) ∑ a∈A E [ ∣∣∣πθk−τk ( at = a|st = ds ) − πθk−dm ( at = a|st = ds ) ∣∣∣∣∣∣ st = ds ] + 1 2 ∫ s∈S |P ( st = ds ) − P ( s̃t = ds ) | ≤ 1 2 ∫ s∈S P ( st = ds ) ∑ a∈A E [ ‖θk−τk − θk−dm‖2| st = ds ] + 1 2 ∫ s∈S |P ( st = ds ) − P ( s̃t = ds ) | = 1 2 |A|Lπ E ‖θk−τk − θk−dm‖2 + dTV ( P ( st ∈ · ) , P ( s̃t ∈ · ) ) ( 26 ) where the last inequality follows Assumption 3 . Now we start to prove ( 22 ) . dTV ( P ( xt ∈ · ) , P ( x̃t ∈ · ) ) ( 24 ) = dTV ( P ( ( st , at ) ∈ · ) , P ( ( s̃t , ãt ) ∈ · ) ) ( 25 ) ≤ dTV ( P ( st ∈ · ) , P ( s̃t ∈ · ) ) + 1 2 |A|Lπ E ‖θk−τk − θk−dm‖2 ( 23 ) ≤ dTV ( P ( xt−1 ∈ · ) , P ( x̃t−1 ∈ · ) ) + 1 2 |A|Lπ E ‖θk−τk − θk−dm‖2 . Now we have dTV ( P ( xt ∈ · ) , P ( x̃t ∈ · ) ) ≤ dTV ( P ( xt−1 ∈ · ) , P ( x̃t−1 ∈ · ) ) + 1 2 |A|Lπ E ‖θk−τk − θk−dm‖2 . ( 27 ) Since dTV ( P ( xt−m ∈ · ) , P ( xt−m ∈ · ) ) = 0 , recursively applying ( 27 ) for { t− 1 , ... , t−m } gives dTV ( P ( xt ∈ · ) , P ( x̃t ∈ · ) ) ≤ 1 2 |A|Lπ m∑ j=0 E ‖θk−dj − θk−dm‖2 ≤ 1 2 |A|Lπ dm∑ i=τk E ‖θk−i − θk−dm‖2 , which completes the proof . A.3 LIPSCHITZ CONTINUITY OF VALUE FUNCTION Lemma 3 . Suppose Assumption 3 holds . 
For any θ1 , θ2 ∈ Rd and s ∈ S , we have ‖∇Vπθ1 ( s ) ‖2 ≤ LV , ( 28a ) |Vπθ1 ( s ) − Vπθ2 ( s ) | ≤ LV ‖θ1 − θ2‖2 , ( 28b ) where the constant is LV : = Cψrmax/ ( 1− γ ) with Cψ defined as in Assumption 3 . Proof . First we have Qπ ( s , a ) = E [ ∞∑ t=0 γtr ( st , at , st+1 ) |s0 = s , a0 = a ] ≤ ∞∑ t=0 γtrmax = rmax 1− γ . By the policy gradient theorem [ 7 ] , we have ‖∇Vπθ1 ( s ) ‖2 = ∥∥E [ Qπθ1 ( s , a ) ψθ1 ( s , a ) ] ∥∥2 ≤ E ∥∥Qπθ1 ( s , a ) ψθ1 ( s , a ) ∥∥2 ≤ E [ |Qπθ1 ( s , a ) |‖ψθ1 ( s , a ) ‖2 ] ≤ rmax 1− γ Cψ , where the first inequality is due to Jensen ’ s inequality , and the last inequality follows Assumption 3 and the fact that Qπ ( s , a ) ≤ rmax1−γ . By the mean value theorem , we immediately have |Vπθ1 ( s ) − Vπθ2 ( s ) | ≤ sup θ1∈Rd ∥∥∇Vπθ1 ( s ) ∥∥2 ‖θ1 − θ2‖2 = LV ‖θ1 − θ2‖2 , which completes the proof . A.4 LIPSCHITZ CONTINUITY OF POLICY GRADIENT We give a proposition regarding the LJ -Lipschitz of the policy gradient under proper assumptions , which has been shown by [ 35 ] . Proposition 1 . Suppose Assumption 3 and 4 hold . For any θ , θ′ ∈ Rd , we have ‖∇J ( θ ) − ∇J ( θ′ ) ‖2 ≤ LJ‖θ − θ′‖2 , where LJ is a positive constant . A.5 LIPSCHITZ CONTINUITY OF OPTIMAL CRITIC PARAMETER We provide a justification for Lipschitz continuity of ω∗θ in the next proposition . Proposition 2 . Suppose Assumption 3 and 4 hold . For any θ1 , θ2 ∈ Rd , we have ‖ω∗θ1 − ω ∗ θ2‖2 ≤ Lω‖θ1 − θ2‖2 , where Lω : = 2rmax|A|Lπ ( λ−1 + λ−2 ( 1 + γ ) ) ( 1 + logρ κ−1 + ( 1− ρ ) −1 ) . Proof . We use A1 , A2 , b1 and b2 as shorthand notations of Aπθ1 , Aπθ2 , bπθ1 and bπθ2 respectively . By Assumption 2 , Aθ , φ is invertible for any θ ∈ Rd , so we can write ω∗θ = −A −1 θ , φbθ , φ . Then we have ‖ω∗1 − ω∗2‖2 = ‖ −A−11 b1 +A −1 2 b2‖2 = ‖ −A−11 b1 −A −1 1 b2 +A −1 1 b2 +A −1 2 b2‖2 = ‖ −A−11 ( b1 − b2 ) − ( A −1 1 −A −1 2 ) b2‖2 ≤ ‖A−11 ( b1 − b2 ) ‖2 + ‖ ( A −1 1 −A −1 2 ) b2‖2 ≤ ‖A−11 ‖2‖b1 − b2‖2 + ‖A −1 1 −A −1 2 ‖2‖b2‖2 = ‖A−11 ‖2‖b1 − b2‖2 + ‖A −1 1 ( A2 −A1 ) A −1 2 ‖2‖b2‖2 ≤ ‖A−11 ‖2‖b1 − b2‖2 + ‖A −1 1 ‖2‖A −1 2 ‖2‖b2‖2‖ ( A2 −A1 ) ‖2 ≤ λ−1 ‖b1 − b2‖2 + λ −2rmax ‖A1 −A2‖2 , ( 29 ) where the last inequality follows Assumption 2 , and the fact that ‖b2‖2 = ‖E [ r ( s , a , s′ ) φ ( s ) ] ‖2 ≤ E ‖r ( s , a , s ′ ) φ ( s ) ‖2 ≤ E [ |r ( s , a , s ′ ) |‖φ ( s ) ‖2 ] ≤ rmax . Denote ( s1 , a1 , s′1 ) and ( s2 , a2 , s′2 ) as samples drawn with θ1 and θ2 respectively , i.e . s1 ∼ µθ1 , a1 ∼ πθ1 , s′1 ∼ P̃ and s2 ∼ µθ2 , a2 ∼ πθ2 , s′2 ∼ P̃ . Then we have ‖b1 − b2‖2 = ∥∥E [ r ( s1 , a1 , s′1 ) φ ( s1 ) ] − E [ r ( s2 , a2 , s′2 ) φ ( s2 ) ] ∥∥ 2 ≤ sup s , a , s′ ‖r ( s , a , s′ ) φ ( s ) ‖2‖P ( ( s1 , a1 , s′1 ) ∈ · ) − P ( ( s2 , a2 , s′2 ) ∈ · ) ‖TV ≤ rmax‖P ( ( s1 , a1 , s′1 ) ∈ · ) − P ( ( s2 , a2 , s′2 ) ∈ · ) ‖TV = 2rmaxdTV ( µθ1 ⊗ πθ1 ⊗ P̃ , µθ2 ⊗ πθ2 ⊗ P̃ ) ≤ 2rmax|A|Lπ ( 1 + logρ κ−1 + ( 1− ρ ) −1 ) ‖θ1 − θ2‖2 , ( 30 ) where the first inequality follows the definition of total variation ( TV ) norm , and the last inequality follows Lemma A.1 . in [ 17 ] . Similarly we have : ‖A1 −A2‖2 ≤ 2 ( 1 + γ ) dTV ( µθ1 ⊗ πθ1 , µθ2 ⊗ πθ2 ) = ( 1 + γ ) |A|Lπ ( 1 + logρ κ−1 + ( 1− ρ ) −1 ) ‖θ1 − θ2‖2 . ( 31 ) Substituting ( 30 ) and ( 31 ) into ( 29 ) completes the proof . B PROOF OF MAIN THEOREMS B.1 PROOF OF THEOREM 1 For brevity , we first define the following notations : x : = ( s , a , s′ ) , δ̂ ( x , ω ) : = r ( s , a , s′ ) + γφ ( s′ ) > ω − φ ( s ) > ω , g ( x , ω ) : = δ̂ ( x , ω ) φ ( s ) , g ( θ , ω ) : = E s∼µθ , a∼πθ , s′∼P̃ [ g ( x , ω ) ] . 
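As an illustrative aside (not part of the analysis), the quantities just defined are exactly what an implementation of the critic step manipulates. The following minimal Python sketch, with placeholder feature dimension, radius, and step size, computes the stochastic semi-gradient g(x, ω) with linear features and applies one projected critic step of the form ω ← ΠRω(ω + β g); all variable names are illustrative only.

import numpy as np

def td0_semi_gradient(phi_s, phi_s_next, reward, omega, gamma):
    # delta_hat(x, omega) = r(s, a, s') + gamma * phi(s')^T omega - phi(s)^T omega
    delta_hat = reward + gamma * phi_s_next @ omega - phi_s @ omega
    # g(x, omega) = delta_hat(x, omega) * phi(s)
    return delta_hat * phi_s

def project_ball(omega, radius):
    # Euclidean projection onto the ball { omega : ||omega||_2 <= radius }
    norm = np.linalg.norm(omega)
    return omega if norm <= radius else omega * (radius / norm)

# One critic step with placeholder values: feature dimension 4, step size beta_k = 0.1.
rng = np.random.default_rng(0)
phi_s, phi_s_next = rng.standard_normal(4), rng.standard_normal(4)
omega = np.zeros(4)                  # current critic parameter omega_k
omega_delayed = omega.copy()         # stands in for the delayed parameter omega_{k - tau_k}
g = td0_semi_gradient(phi_s, phi_s_next, reward=1.0, omega=omega_delayed, gamma=0.99)
omega = project_ball(omega + 0.1 * g, radius=10.0)   # omega_{k+1} = Pi_{R_omega}(omega_k + beta_k g)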
We also define constant Cδ : = rmax + ( 1 + γ ) max { rmax1−γ , Rω } , and we immediately have ‖g ( x , ω ) ‖2 ≤ |r ( x ) + γφ ( s′ ) > ω − φ ( s ) > ω| ≤ rmax + ( 1 + γ ) Rω ≤ Cδ ( 32 ) and likewise , we have ‖g ( x , ω ) ‖2 ≤ Cδ . The critic update in Algorithm 1 can be written compactly as : ωk+1 = ΠRω ( ωk + βkg ( x ( k ) , ωk−τk ) ) , ( 33 ) where τk is the delay of the parameters used in evaluating the kth stochastic gradient , and x ( k ) : = ( s ( k ) , a ( k ) , s ′ ( k ) ) is the sample used to evaluate the stochastic gradient at kth update . Proof . Using ω∗k as shorthand notation of ω ∗ θk , we start with the optimality gap ‖ωk+1 − ω∗k+1‖22 = ‖ΠRω ( ωk + βkg ( x ( k ) , ωk−τk ) ) − ω∗k+1‖22 ≤ ‖ωk + βkg ( x ( k ) , ωk−τk ) − ω∗k+1‖22 = ‖ωk − ω∗k‖ 2 2 + 2βk 〈 ωk − ω∗k , g ( x ( k ) , ωk−τk ) 〉 + 2 〈 ωk − ω∗k , ω∗k − ω∗k+1 〉 + ∥∥ω∗k − ω∗k+1 + βkg ( x ( k ) , ωk−τk ) ∥∥22 = ‖ωk − ω∗k‖ 2 2 + 2βk 〈 ωk − ω∗k , g ( x ( k ) , ωk−τk ) − g ( x ( k ) , ωk ) 〉 + 2βk 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 + 2βk 〈ωk − ω∗k , g ( θk , ωk ) 〉+ 2 〈 ωk − ω∗k , ω∗k − ω∗k+1 〉 + ∥∥ω∗k − ω∗k+1 + βkg ( x ( k ) , ωk−τk ) ∥∥22 ≤ ‖ωk − ω∗k‖ 2 2 + 2βk 〈 ωk − ω∗k , g ( x ( k ) , ωk−τk ) − g ( x ( k ) , ωk ) 〉 + 2βk 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 + 2βk 〈ωk − ω∗k , g ( θk , ωk ) 〉+ 2 〈 ωk − ω∗k , ω∗k − ω∗k+1 〉 + 2 ∥∥ω∗k − ω∗k+1∥∥22 + 2C2δβ2k . ( 34 ) We first bound 〈ωk − ω∗k , g ( θk , ωk ) 〉 in ( 34 ) as 〈ωk − ω∗k , g ( θk , ωk ) 〉 = 〈ωk − ω∗k , g ( θk , ωk ) − g ( θk , ω∗k ) 〉 = 〈 ωk − ω∗k , E [ ( γφ ( s′ ) − φ ( s ) ) > ( ωk − ω∗k ) φ ( s ) ] 〉 = 〈 ωk − ω∗k , E [ φ ( s ) ( γφ ( s′ ) − φ ( s ) ) > ] ( ωk − ω∗k ) 〉 = 〈 ωk − ω∗k , Aπθk ( ωk − ω ∗ k ) 〉 ≤ −λ‖ωk − ω∗k‖22 , ( 35 ) where the first equality is due to g ( θ , ω∗θ ) = Aθ , φω ∗ θ + b = 0 , and the last inequality follows Assumption 2 . Substituting ( 35 ) into ( 34 ) , then taking expectation on both sides of ( 34 ) yield E ‖ωk+1 − ω∗k+1‖22 ≤ ( 1− 2λβk ) E ‖ωk − ω∗k‖ 2 2 + 2βk E 〈 ωk − ω∗k , g ( x ( k ) , ωk−τk ) − g ( x ( k ) , ωk ) 〉 + 2βk E 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 + 2E 〈 ωk − ω∗k , ω∗k − ω∗k+1 〉 + 2E ∥∥ω∗k − ω∗k+1∥∥22 + 2C2δβ2k . ( 36 ) We then bound the term E 〈 ωk − ω∗k , g ( x ( k ) , ωk−τk ) − g ( x ( k ) , ωk ) 〉 in ( 36 ) as E 〈 ωk − ω∗k , g ( x ( k ) , ωk−τk ) − g ( x ( k ) , ωk ) 〉 = E 〈 ωk − ω∗k , ( γφ ( s′ ( k ) ) − φ ( s ( k ) ) ) > ( ωk−τk − ωk ) φ ( s ( k ) ) 〉 ≤ ( 1 + γ ) E [ ‖ωk − ω∗k‖2‖ωk−τk − ωk‖2 ] ≤ ( 1 + γ ) E ‖ωk − ω∗k‖2 ∥∥∥∥∥ k−1∑ i=k−τk ( ωi+1 − ωi ) ∥∥∥∥∥ 2 ≤ ( 1 + γ ) E [ ‖ωk − ω∗k‖2 k−1∑ i=k−τk βi‖g ( xi , ωi−τi ) ‖2 ] ≤ ( 1 + γ ) E [ ‖ωk − ω∗k‖2 k−1∑ i=k−τk βk−K0‖g ( xi , ωi−τi ) ‖2 ] ≤ Cδ ( 1 + γ ) K0βk−K0 E ‖ωk − ω∗k‖2 , ( 37 ) where the second last inequality is due to the monotonicity of step size , and the last inequality follows the definition of Cδ in ( 32 ) . Next we jointly bound the fourth and fifth term in ( 36 ) as 2E 〈 ωk − ω∗k , ω∗k − ω∗k+1 〉 + 2E ∥∥ω∗k − ω∗k+1∥∥22 ≤ 2E [ ‖ωk − ω∗k‖2 ∥∥ω∗k − ω∗k+1∥∥2 ] + 2E∥∥ω∗k − ω∗k+1∥∥22 ≤ 2Lω E [ ‖ωk − ω∗k‖2 ‖θk − θk+1‖2 ] + 2L 2 ω E ‖θk − θk+1‖ 2 2 = 2Lωαk E [ ‖ωk − ω∗k‖2 ∥∥∥δ̂ ( x ( k ) , ωk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) ∥∥∥2 ] + 2L2ωα2k E∥∥∥δ̂ ( x ( k ) , ωk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) ∥∥∥22 ≤ 2LωCpαk E ‖ωk − ω∗k‖2 + 2L 2 ωC 2 pα 2 k , ( 38 ) where constant Cp : = CδCψ . The second inequality is due to the Lω-Lipschitz of ω∗θ shown in Proposition 2 , and the last inequality follows the fact that ‖δ̂ ( x ( k ) , ωk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) ‖2 ≤ CδCψ = Cp . 
( 39 ) Substituting ( 37 ) and ( 38 ) into ( 36 ) yields E ‖ωk+1 − ω∗k+1‖22 ≤ ( 1− 2λβk ) E ‖ωk − ω∗k‖ 2 2 + 2βk ( C1 αk βk + C2K0βk−K0 ) E ‖ωk − ω∗k‖2 + 2βk E 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 + Cqβ 2 k , ( 40 ) where C1 : = LωCp , C2 : = Cδ ( 1 + γ ) and Cq : = 2C2δ + 2L 2 ωC 2 p max ( k ) α2k β2k = 2C2δ + 2L 2 ωC 2 p c21 c22 . For brevity , we use x ∼ θ to denote s ∼ µθ , a ∼ πθ and s′ ∼ P̃ in this proof . Consider the third term in ( 40 ) conditioned on θk , ωk , θk−τk . We bound it as E [ 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 |θk , ωk , θk−τk ] = 〈 ωk − ω∗k , E x ( k ) ∼θk−τk [ g ( x ( k ) , ωk ) |ωk ] − g ( θk , ωk ) 〉 = 〈 ωk − ω∗k , g ( θk−τk , ωk ) − g ( θk , ωk ) 〉 ≤ ‖ωk − ω∗k‖2‖g ( θk−τk , ωk ) − g ( θk , ωk ) ‖2 ≤ 2Rω ∥∥∥∥ Ex∼θk−τk [ g ( x , ωk ) ] − Ex∼θk [ g ( x , ωk ) ] ∥∥∥∥ 2 ≤ 2Rω sup x ‖g ( x , ωk ) ‖2 ∥∥∥µθk−τk ⊗ πθk−τk ⊗ P̃ − µθk ⊗ πθk ⊗ P̃∥∥∥TV ≤ 4RωCδdTV ( µθk−τk ⊗ πθk−τk ⊗ P̃ , µθk ⊗ πθk ⊗ P̃ ) , ( 41 ) where second last inequality follows the definition of TV norm and the last inequality uses the definition of Cδ in ( 32 ) . Define constant C3 : = 2RωCδ|A|Lπ ( 1 + logρ κ−1 + ( 1− ρ ) −1 ) . Then by following the third item in Lemma A.1 shown by [ 17 ] , we can write ( 41 ) as E [ 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 |θk , ωk , θk−τk ] ≤ 4RωCδdTV ( µθk−τk ⊗ πθk−τk ⊗ P̃ , µθk ⊗ πθk ⊗ P̃ ) ≤ C3 ‖θk−τk − θk‖2 ≤ C3 k−1∑ i=k−τk αi‖g ( xi , ωi−τi ) ‖2 ≤ C3CδK0αk−K0 , ( 42 ) where we used the monotonicity of αk and Assumption 1 . Taking total expectation on both sides of ( 42 ) and substituting it into ( 40 ) yield E ‖ωk+1 − ω∗k+1‖22 ≤ ( 1− 2λβk ) E ‖ωk − ω∗k‖ 2 2 + 2βk ( C1 αk βk + C2K0βk−K0 ) E ‖ωk − ω∗k‖2 + 2C3CδK0βkαk−K0 + Cqβ 2 k. ( 43 ) Taking summation on both sides of ( 43 ) and rearranging yield 2λ K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 ≤ K∑ k=K0 1 βk ( E ‖ωk − ω∗k‖ 2 2 − E ∥∥ωk+1 − ω∗k+1∥∥22 ) I1 +Cq K∑ k=K0 βk I2 + 2 K∑ k=K0 2C3CδK0αk−K0 I3 +2 K∑ k=K0 ( C1 αk βk + C2K0βk−K0 ) E ‖ωk − ω∗k‖2 I4 . ( 44 ) We bound I1 as I1 = K∑ k=MK 1 βk ( E ‖ωk − ω∗k‖ 2 2 − E ∥∥ωk+1 − ω∗k+1∥∥22 ) = K∑ k=MK ( 1 βk − 1 βk−1 ) E ‖ωk − ω∗k‖ 2 2 + 1 βMK−1 E ∥∥ωMK − ω∗MK∥∥22 − 1βk E ∥∥ωK+1 − ω∗K+1∥∥22 ≤ K∑ k=MK ( 1 βk − 1 βk−1 ) E ‖ωk − ω∗k‖ 2 2 + 1 βMK−1 E ∥∥ωMK − ω∗MK∥∥22 ≤ 4R2ω ( K∑ k=MK ( 1 βk − 1 βk−1 ) + 1 βMK−1 ) = 4R2ω βk = O ( Kσ2 ) , ( 45 ) where the last inequality is due to the fact that ‖ωk − ω∗θ‖2 ≤ ‖ωk‖2 + ‖ω∗θ‖2 ≤ 2Rω . We bound I2 as K∑ k=MK βk = K∑ k=MK c2 ( 1 + k ) σ2 = O ( K1−σ2 ) ( 46 ) where the inequality follows from the integration rule ∑b k=a k −σ ≤ b 1−σ 1−σ . We bound I3 as I3 = K∑ k=K0 2C3CδK0αk−K0 = 2C3Cδc1K0 K−K0∑ k=0 ( 1 + k ) −σ1 = O ( K0K1−σ1 ) . ( 47 ) For the last term I4 , we have I4 = K∑ k=K0 ( C1 αk βk + C2K0βk−K0 ) E ‖ωk − ω∗k‖2 ≤ √√√√ K∑ k=K0 ( C1 αk βk + C2K0βk−K0 ) 2√√√√ K∑ k=K0 ( E ‖ωk − ω∗k‖2 ) 2 ≤ √√√√ K∑ k=K0 ( C1 αk βk + C2K0βk−K0 ) 2√√√√ K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 , ( 48 ) where the first inequality follows Cauchy–Schwartz inequality , and the second inequality follows Jensen ’ s inequality . In ( 48 ) , we have K∑ k=K0 ( C1 αk βk + C2K0βk−K0 ) 2 ≤ K−K0∑ k=0 ( C1 αk βk + C2K0βk ) 2 = C21 K−K0∑ k=0 α2k β2k + 2C1C2K0 K−K0∑ k=0 αk + C 2 2K 2 0 K−K0∑ k=0 β2k = O ( K2 ( σ2−σ1 ) +1 ) +O ( K0K −σ1+1 ) +O ( K20K 1−2σ2 ) ( 49 ) where the first inequality is due to the fact that αkβk and βk−K0 are monotonically decreasing . Substituting ( 49 ) into ( 48 ) gives I4 ≤ √ O ( K2 ( σ2−σ1 ) +1 ) +O ( K0K−σ1+1 ) +O ( K20K1−2σ2 ) √√√√ K∑ k=MK E ‖ωk − ω∗k‖ 2 2 . 
( 50 ) Substituting ( 45 ) , ( 46 ) , ( 47 ) and ( 50 ) into ( 44 ) , and dividing both sides of ( 44 ) by K −K0 + 1 give 2λ 1 K −K0 + 1 K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 ≤ √ O ( K2 ( σ2−σ1 ) +1 ) +O ( K0K−σ1+1 ) +O ( K20K1−2σ2 ) K −K0 + 1 √√√√ K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 +O ( 1 K1−σ2 ) +O ( 1 Kσ2 ) +O ( K0 Kσ1 ) . ( 51 ) We define the following functions : T1 ( K ) : = 1 K −K0 + 1 K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 , T2 ( K ) : = O ( 1 K1−σ2 ) +O ( 1 Kσ2 ) +O ( K0 Kσ1 ) , T3 ( K ) : = O ( K2 ( σ2−σ1 ) +1 ) +O ( K0K −σ1+1 ) +O ( K20K 1−2σ2 ) K −K0 + 1 . Then ( 51 ) can be written as : T1 ( K ) − 1 2λ √ T1 ( K ) √ T3 ( K ) ≤ 1 2λ T2 ( K ) . Solving this quadratic inequality in terms of T1 ( K ) , we obtain T1 ( K ) ≤ 1 λ T2 ( K ) + 1 2λ2 T3 ( K ) , ( 52 ) which implies 1 K −K0 + 1 K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 = O ( 1 K1−σ2 ) +O ( 1 K2 ( σ1−σ2 ) ) +O ( K20 K2σ2 ) +O ( K0 Kσ1 ) +O ( 1 Kσ2 ) . We further have 1 K K∑ k=1 E ‖ωk − ω∗k‖22 ≤ 1 K ( K0−1∑ k=1 4R2ω + K∑ k=K0 E ‖ωk − ω∗k‖22 ) = K0 − 1 K 4R2ω + K −K0 + 1 K 1 K −K0 + 1 K∑ k=K0 E ‖ωk − ω∗k‖22 = O ( K0 K ) +O ( 1 K −K0 + 1 K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 ) = O ( 1 K −K0 + 1 K∑ k=K0 E ‖ωk − ω∗k‖ 2 2 ) ( 53 ) which completes the proof . B.2 PROOF OF THEOREM 2 We first clarify the notations : x : = ( s , a , s′ ) , δ̂ ( x , ω ) : = r ( s , a , s′ ) + γφ ( s′ ) > ω − φ ( s ) > ω , δ ( x , θ ) : = r ( s , a , s′ ) + γVπθ ( s ′ ) − Vπθ ( s ) . The update in Algorithm 1 can be written compactly as : θk+1 = θk + αk δ̂ ( x ( k ) , ωk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) . ( 54 ) For brevity , we use ω∗k as shorthand notation of ω ∗ θk . Then we are ready to give the proof . Proof . From LJ -Lipschitz of policy gradient shown in Proposition 1 , we have : J ( θk+1 ) ≥ J ( θk ) + 〈∇J ( θk ) , θk+1 − θk〉 − LJ 2 ‖θk+1 − θk‖22 = J ( θk ) + αk 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk−τk ) − δ̂ ( x ( k ) , ω∗k ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 + αk 〈 ∇J ( θk ) , δ̂ ( x ( k ) , ω∗k ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 − LJ 2 α2k‖δ̂ ( x ( k ) , ωk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) ‖ 2 2 ≥ J ( θk ) + αk 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk−τk ) − δ̂ ( x ( k ) , ω∗k ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 + αk 〈 ∇J ( θk ) , δ̂ ( x ( k ) , ω∗k ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 − LJ 2 C2pα 2 k , where the last inequality follows the definition of Cp in ( 39 ) . Taking expectation on both sides of the last inequality yields E [ J ( θk+1 ) ] ≥ E [ J ( θk ) ] + αk E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk−τk ) − δ̂ ( x ( k ) , ω∗k ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I1 + αk E 〈 ∇J ( θk ) , δ̂ ( x ( k ) , ω∗k ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I2 −LJ 2 C2pα 2 k. ( 55 ) We first decompose I1 as I1 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk−τk ) − δ̂ ( x ( k ) , ω∗k ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk−τk ) − δ̂ ( x ( k ) , ωk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I ( 1 ) 1 + E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk ) − δ̂ ( x ( k ) , ω∗k ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I ( 2 ) 1 . We bound I ( 1 ) 1 as I ( 1 ) 1 = E 〈 ∇J ( θk ) , ( γφ ( s′ ( k ) ) − φ ( s ( k ) ) ) > ( ωk−τk − ωk ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −E [ ‖∇J ( θk ) ‖2‖γφ ( s′ ( k ) ) − φ ( s ( k ) ) ‖2‖ωk − ωk−τk‖2‖ψθk−τk ( s ( k ) , a ( k ) ) ‖2 ] ≥ −2Cψ E [ ‖∇J ( θk ) ‖2‖ωk − ωk−τk‖2 ] ≥ −2CψCδK0βk−1 E ‖∇J ( θk ) ‖2 , where the last inequality follows ‖ωk − ωk−τk‖2 = ∥∥∥∥∥ k−1∑ i=k−τk ( ωi+1 − ωi ) ∥∥∥∥∥ 2 ≤ k−1∑ i=k−τk ‖βig ( xi , ωi−τi ) ‖2 ≤ βk−1 k−1∑ i=k−τk ‖g ( xi , ωi−τi ) ‖2 ≤ βk−1K0Cδ , where the second inequality is due to the monotonicity of step size , and the third one follows ( 32 ) . 
Then we bound I ( 2 ) 1 as I ( 2 ) 1 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk ) − δ̂ ( x ( k ) , ω∗k ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 = −E 〈 ∇J ( θk ) , ( γφ ( s′ ( k ) ) − φ ( s ( k ) ) ) > ( ω∗k − ωk ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −E [ ‖∇J ( θk ) ‖2‖γφ ( s′ ( k ) ) − φ ( s ( k ) ) ‖2‖ωk − ω ∗ k‖2‖ψθk−τk ( s ( k ) , a ( k ) ) ‖2 ] ≥ −2Cψ E [ ‖∇J ( θk ) ‖2‖ωk − ω∗k‖2 ] . Collecting the lower bounds of I ( 1 ) 1 and I ( 2 ) 1 gives I1 ≥ −2Cψ E [ ‖∇J ( θk ) ‖2 ( CδK0βk−1 + ‖ωk − ω∗k‖2 ) ] . ( 56 ) Now we consider I2 . We first decompose I2 as I2 = E 〈 ∇J ( θk ) , δ̂ ( x ( k ) , ω∗k ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ̂ ( x ( k ) , ω∗k−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I ( 1 ) 2 + E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k−τk ) − δ ( x ( k ) , θk−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I ( 2 ) 2 + E 〈 ∇J ( θk ) , δ ( x ( k ) , θk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 I ( 3 ) 2 +‖∇J ( θk ) ‖22 . We bound I ( 1 ) 2 as I ( 1 ) 2 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ̂ ( x ( k ) , ω∗k−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 = E 〈 ∇J ( θk ) , ( γφ ( s′ ( k ) ) − φ ( s ( k ) ) ) > ( ω∗k − ω∗k−τk ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −E [ ‖∇J ( θk ) ‖2‖ ( γφ ( s′ ( k ) ) − φ ( s ( k ) ) ) > ‖2 ∥∥ω∗k − ω∗k−τk∥∥2 ‖ψθk−τk ( s ( k ) , a ( k ) ) ‖2 ] ≥ −LV Cψ ( 1 + γ ) E ∥∥ω∗k − ω∗k−τk∥∥2 ≥ −LV LωCψ ( 1 + γ ) E ‖θk − θk−τk‖2 ≥ −LV LωCψCp ( 1 + γ ) K0αk−K0 , where the second last inequality follows from Proposition 2 and the last inequality uses ( 39 ) as ‖θk − θk−τk‖2 ≤ k−1∑ i=k−τk ‖θi+1 − θi‖2 = k−1∑ i=k−τk αi‖δ̂ ( xi , ωi−τi ) ψθi−τi ( si , ai ) ‖2 ≤ k−1∑ i=k−τk αk−τkCp ≤ CpK0αk−K0 . ( 57 ) We bound I ( 2 ) 2 as I ( 2 ) 2 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k−τk ) − δ ( x ( k ) , θk−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −E [ ‖∇J ( θk ) ‖2 ∣∣∣δ̂ ( x ( k ) , ω∗k−τk ) − δ ( x ( k ) , θk−τk ) ∣∣∣ ‖ψθk−τk ( s ( k ) , a ( k ) ) ‖2 ] ≥ −Cψ E [ ‖∇J ( θk ) ‖2 ∣∣∣δ̂ ( x ( k ) , ω∗k−τk ) − δ ( x ( k ) , θk−τk ) ∣∣∣ ] = −Cψ E [ ‖∇J ( θk ) ‖2 ∣∣∣γ ( φ ( s′ ( k ) ) > ω∗k−τk − Vπθk−τk ( s′ ( k ) ) ) + Vπθk−τk ( s ( k ) ) − φ ( s ( k ) ) > ω∗k−τk ∣∣∣ ] ≥ −Cψ E [ ‖∇J ( θk ) ‖2 ( γ ∣∣∣φ ( s′ ( k ) ) > ω∗k−τk − Vπθk−τk ( s′ ( k ) ) ∣∣∣+ ∣∣∣Vπθk−τk ( s ( k ) ) − φ ( s ( k ) ) > ω∗k−τk ∣∣∣ ) ] = −Cψ E [ ‖∇J ( θk ) ‖2 E [ γ ∣∣∣φ ( s′ ( k ) ) > ω∗k−τk − Vπθk−τk ( s′ ( k ) ) ∣∣∣+ ∣∣∣Vπθk−τk ( s ( k ) ) − φ ( s ( k ) ) > ω∗k−τk ∣∣∣∣∣∣ θk , θk−τk ] ] ≥ −2Cψ app E ‖∇J ( θk ) ‖2 ≥ −2CψLV fa − 2Cψ sp E ‖∇J ( θk ) ‖2 ( 58 ) where the second last inequality follows from the fact that E [ γ ∣∣∣φ ( s′ ( k ) ) > ω∗k−τk − Vπθk−τk ( s′ ( k ) ) ∣∣∣+ ∣∣∣Vπθk−τk ( s ( k ) ) − φ ( s ( k ) ) > ω∗k−τk ∣∣∣ ] ≤ γ √ E ∣∣∣φ ( s′ ( k ) ) > ω∗k−τk − Vπθk−τk ( s′ ( k ) ) ∣∣∣2 + √ E ∣∣∣Vπθk−τk ( s ( k ) ) − φ ( s ( k ) ) > ω∗k−τk ∣∣∣2 ≤ 2 app . 
Define artificial transition x̄ ( k ) : = ( s ( k ) , a ( k ) , s̄′ ( k ) ∼ P ) , then I ( 3 ) 2 can be bounded as I ( 3 ) 2 = E 〈 ∇J ( θk ) , δ ( x ( k ) , θk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 = E [ E [ 〈 ∇J ( θk ) , δ ( x ( k ) , θk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉∣∣∣ θk−τk , θk ] ] = E 〈 ∇J ( θk ) , E [ ( δ ( x ( k ) , θk−τk ) − δ ( x̄ ( k ) , θk−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] 〉 + E 〈 ∇J ( θk ) , E [ δ ( x̄ ( k ) , θk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] −∇J ( θk ) 〉 ≥ −E [ ‖∇J ( θk ) ‖2 ∥∥∥E [ ( δ ( x ( k ) , θk−τk ) − δ ( x̄ ( k ) , θk−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] ∥∥∥2 ] − E [ ‖∇J ( θk ) ‖2 ∥∥∥E [ δ ( x̄ ( k ) , θk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] −∇J ( θk ) ∥∥∥2 ] . ( 59 ) The first term in the last inequality can be bounded as E [ ( δ ( x ( k ) , θk−τk ) − δ ( x̄ ( k ) , θk−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] = E [ ( δ ( x ( k ) , θk−τk ) − δ ( x̄ ( k ) , θk−τk ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] = E [ ( r ( x ( k ) ) + γ E [ r ( s′k , a′ , s′′ ) ] − ( r ( x̄ ( k ) ) + γ E [ r ( s̄′k , a′ , s′′ ) ] ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] ≤ 2Cψrmax‖P̃ − P‖TV ≤ 8Cψrmax ( 1− γ ) , ( 60 ) where the last inequality follows ‖P̃ − P‖TV = 2 ∫ s′∈S ∣∣∣P̃ ( s′|s , a ) − P ( s′|s , a ) ∣∣∣ = 2 ( 1− γ ) ∫ s′∈S |P ( s′|s , a ) − η ( s′ ) | ≤ 4 ( 1− γ ) . ( 61 ) The second term in ( 59 ) can be rewritten as E [ δ ( x̄ ( k ) , θk−τk ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣ θk−τk , θk ] = E s ( k ) ∼µθk−τk a ( k ) ∼πθk−τk s̄′ ( k ) ∼P [ ( r ( x̄ ( k ) ) + γVπθk−τk ( s̄′ ( k ) ) − Vπθk−τk ( s ( k ) ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣∣θk−τk , θk ] = E s ( k ) ∼µθk−τk a ( k ) ∼πθk−τk [ ( Qπθk−τk ( s ( k ) , a ( k ) ) − Vπθk−τk ( s ( k ) ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣∣θk−τk , θk ] = E s ( k ) ∼µθk−τk a ( k ) ∼πθk−τk [ Aπθk−τk ( s ( k ) , a ( k ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣∣θk−τk , θk ] = E s ( k ) ∼dθk−τk a ( k ) ∼πθk−τk [ Aπθk−τk ( s ( k ) , a ( k ) ) ψθk−τk ( s ( k ) , a ( k ) ) ∣∣∣∣θk−τk , θk ] = ∇J ( θk−τk ) ( 62 ) where the second last equality follows µθ ( · ) = dθ ( · ) with dθ being a shorthand notation of dπθ [ 6 ] . Substituting ( 60 ) and ( 62 ) into ( 59 ) yields I ( 3 ) 2 ≥ −8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 − E [ ‖∇J ( θk ) ‖2‖∇J ( θk−τk ) −∇J ( θk ) ‖2 ] ≥ −8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 − LV LJ E ‖θk−τk − θk‖2 ≥ −8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 − LV LJCpK0αk−K0 , ( 63 ) where the second last inequality is due to LJ -Lipschitz of policy gradient shown in Proposition 1 , and the last inequality follows ( 57 ) . Collecting lower bounds of I ( 1 ) 2 , I ( 2 ) 2 and I ( 3 ) 2 gives I2 ≥ −D1K0αk−K0 − ( 2Cψ sp + 8Cψrmax ( 1− γ ) ) E ‖∇J ( θk ) ‖2 − 2CψLV fa + ‖∇J ( θk ) ‖ 2 2 , ( 64 ) where the constant is D1 : = LV LωCψCp ( 1 + γ ) + LV LJCp . Substituting ( 56 ) and ( 64 ) into ( 55 ) yields E [ J ( θk+1 ) ] ≥ E [ J ( θk ) ] − 2αkCψ ( sp + 4rmax ( 1− γ ) + CδK0βk−1 + ‖ωk − ω∗k‖2 ) E ‖∇J ( θk ) ‖2 − αkD1K0αk−K0 − 2αkCψLV fa + αk‖∇J ( θk ) ‖22 − LJ 2 C2pα 2 k. ( 65 ) By following Cauchy-Schwarz inequality , the second term in ( 65 ) can be bounded as ( sp + 4rmax ( 1− γ ) + CδK0βk−1 + ‖ωk − ω∗k‖2 ) E ‖∇J ( θk ) ‖2 ≤ √ E ‖∇J ( θk ) ‖22 E [ ( sp + 4rmax ( 1− γ ) + CδK0βk−1 + ‖ωk − ω∗k‖2 ) 2 ] ≤ √ E ‖∇J ( θk ) ‖22 √ E [ 4C2δK 2 0β 2 k−1 + 4‖ωk − ω∗k‖22 + 4 2sp + 64r2max ( 1− γ ) 2 ] = 2 √ E ‖∇J ( θk ) ‖22 √ C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) , ( 66 ) where the last inequality follows the order of sp in Lemma 7 . 
Collecting the upper bound gives E [ J ( θk+1 ) ] ≥ E [ J ( θk ) ] − 4αkCψ √ E ‖∇J ( θk ) ‖22 √ C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) − αkD1K0αk−K0 − 2αkCψLV fa + αk‖∇J ( θk ) ‖22 − LJ 2 C2pα 2 k. ( 67 ) Dividing both sides of ( 67 ) by αk , then rearranging and taking summation on both sides give K∑ k=K0 E ‖∇J ( θk ) ‖22 ≤ K∑ k=K0 1 αk ( E [ J ( θk+1 ) ] − E [ J ( θk ) ] ) I3 + K∑ k=K0 ( D1K0αk−K0 + LJ 2 C2pαk ) I4 + 4Cψ K∑ k=K0 √ E ‖∇J ( θk ) ‖22 √ C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) I5 + 2CψLV ( K −K0 + 1 ) fa . ( 68 ) We bound I3 as I3 = K∑ k=K0 1 αk ( E [ J ( θk+1 ) ] − E [ J ( θk ) ] ) = K∑ k=K0 ( 1 αk−1 − 1 αk ) E [ J ( θk ) ] − 1 αMK−1 E [ J ( θMK ) ] + 1 αK E [ J ( θK+1 ) ] ≤ 1 αK E [ J ( θK+1 ) ] ≤ rmax 1− γ 1 αK = O ( Kσ1 ) , ( 69 ) where the first inequality is due to the αk is monotonic decreasing and positive , and last inequality is due to Vπθ ( s ) ≤ rmax1−γ for any s ∈ S and πθ . We bound I4 as I4 = K∑ k=K0 ( D1K0αk−K0 + LJ 2 C2pαk ) ≤ K−K0∑ k=0 ( D1K0αk + LJ 2 C2pαk ) = O ( K0K1−σ1 ) . We bound I5 as I5 = K∑ k=K0 √ E ‖∇J ( θk ) ‖22 √ C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) ≤ √√√√ K∑ k=K0 E ‖∇J ( θk ) ‖22 √√√√ K∑ k=K0 ( C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) ) = √√√√ K∑ k=K0 E ‖∇J ( θk ) ‖22 √√√√C2δK20 K∑ k=K0 β2k−1 + K∑ k=K0 E ‖ωk − ω∗k‖22 +O ( K 2sp ) , ( 70 ) where the first inequality follows Cauchy-Schwartz inequality . In ( 70 ) , we have K∑ k=K0 β2k−1 ≤ K−K0∑ k=0 β2k = K−K0∑ k=0 c22 ( 1 + k ) −2σ2 = O ( K1−2σ2 ) . Substituting the last equality into ( 70 ) gives I5 ≤ √√√√ K∑ k=MK E ‖∇J ( θk ) ‖22 √√√√O ( K20K1−2σ2 ) + K∑ k=MK E ‖ωk − ω∗k‖22 +O ( K 2sp ) . ( 71 ) Dividing both sides of ( 67 ) by K −K0 + 1 and collecting upper bounds of I3 , I4 and I5 give 1 K −K0 + 1 K∑ k=K0 E ‖∇J ( θk ) ‖22 ≤ 4Cψ K −K0 + 1 √√√√ K∑ k=K0 E ‖∇J ( θk ) ‖22 √√√√O ( K20K1−2σ2 ) + K∑ k=K0 E ‖ωk − ω∗k‖22 +O ( K 2sp ) +O ( 1 K1−σ1 ) +O ( K0 Kσ1 ) +O ( fa ) . ( 72 ) Define the following functions T4 ( K ) : = 1 K −K0 + 1 K∑ k=K0 E ‖∇J ( θk ) ‖22 , T5 ( K ) : = 1 K −K0 + 1 ( O ( K20K1−2σ2 ) + K∑ k=K0 E ‖ωk − ω∗k‖22 +O ( K 2sp ) ) , T6 ( K ) : = O ( 1 K1−σ1 ) +O ( K0 Kσ1 ) +O ( fa ) . Then ( 72 ) can be rewritten as T4 ( K ) ≤ T6 ( K ) + √ 2 ( 1 + γ ) Cψ √ T4 ( K ) √ T5 ( K ) . Solving this quadratic inequality in terms of T4 ( K ) , we obtain T4 ( K ) ≤ 2T6 ( K ) + 4 ( 1 + γ ) 2C2ψT5 ( K ) , ( 73 ) which implies 1 K −K0 + 1 K∑ k=K0 E ‖∇J ( θk ) ‖22 = O ( 1 K1−σ1 ) +O ( K0 Kσ1 ) +O ( K20 K2σ2 ) +O ( 1 K −K0 + 1 K∑ k=K0 E ‖ωk − ω∗k‖22 ) +O ( app ) . We further have 1 K K∑ k=1 E ‖∇J ( θk ) ‖22 ≤ 1 K ( K0−1∑ k=1 L2V + K∑ k=K0 E ‖∇J ( θk ) ‖22 ) = K0 − 1 K L2V + K −K0 + 1 K 1 K −K0 + 1 K∑ k=K0 E ‖∇J ( θk ) ‖22 = O ( K0 K ) +O ( 1 K −K0 + 1 K∑ k=K0 E ‖∇J ( θk ) ‖22 ) = O ( 1 K −K0 + 1 K∑ k=K0 E ‖∇J ( θk ) ‖22 ) ( 74 ) which completes the proof . B.3 PROOF OF THEOREM 3 Given the definition in Section B.1 , we now give the convergence proof of critic update in Algorithm 1 with linear function approximation and Markovian sampling . By following the derivation of ( 40 ) , we have E ‖ωk+1 − ω∗k+1‖22 ≤ ( 1− 2λβk ) E ‖ωk − ω∗k‖ 2 2 + 2βk ( C1 αk βk + C2K0βk−K0 ) E ‖ωk − ω∗k‖2 + 2βk E 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 + Cqβ 2 k , ( 75 ) where C1 : = CpLω , C2 : = Cδ ( 1 + γ ) and Cq : = 2C2δ + 2L 2 ωC 2 p max ( k ) α2k β2k = 2C2δ + 2L 2 ωC 2 p c21 c22 . Now we consider the third item in the last inequality . For some m ∈ N+ , we define M : = ( K0 + 1 ) m+K0 . Following Lemma 4 ( to be presented in Sec . 
C.1 ) , for some dm ≤M and positive constants C4 , C5 , C6 , C7 , we have E 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 ≤ C4 E ‖θk − θk−dm‖2 + C5 dm∑ i=τk E ‖θk−i − θk−dm‖2 + C6 E ‖ωk − ωk−dm‖2 + C7κρm−1 ≤ C4 k−1∑ i=k−dm E ‖θi+1 − θi‖2 + C5 dm−1∑ i=τk k−i−1∑ j=k−dm E ‖θj+1 − θj‖2 + C6 k−1∑ i=k−dm E ‖ωi+1 − ωi‖2 + C7κρm−1 ≤ C4 k−1∑ i=k−dm αiCp + C5 dm−1∑ i=τk k−i−1∑ j=k−dm αjCp + C6 k−1∑ i=k−dm βiCδ + C7κρ m−1 ≤ C4αk−dm k−1∑ i=k−dm Cp + C5αk−dm dm−1∑ i=τk k−i−1∑ j=k−dm Cp + C6βk−dm k−1∑ i=k−dm Cδ + C7κρ m−1 ≤ C4dmCpαk−dm + C5 ( dm − τk ) 2Cpαk−dm + C6dmCδβk−dm + C7κρm−1 ≤ ( C4M + C5M 2 ) Cpαk−M + C6MCδβk−M + C7κρ m−1 , ( 76 ) where the third last inequality is due to the monotonicity of step size , and the last inequality is due to τk ≥ 0 and dm ≤M . Further letting m = mK which is defined in ( 21 ) yields E 〈 ωk − ω∗k , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 = ( C4MK + C5M 2 K ) Cpαk−MK + C6CδMKβk−MK + C7κρ mK−1 ≤ ( C4MK + C5M 2 K ) Cpαk−MK + C6CδMKβk−MK + C7αK , ( 77 ) where MK = ( K0 + 1 ) mK +K0 , and the last inequality follows the definition of mK . Substituting ( 77 ) into ( 75 ) , then rearranging and summing up both sides over k = MK , ... , K yield 2λ K∑ k=MK E ‖ωk − ω∗k‖ 2 2 ≤ K∑ k=MK 1 βk ( E ‖ωk − ω∗k‖ 2 2 − E ∥∥ωk+1 − ω∗k+1∥∥22 ) I1 +Cq K∑ k=MK βk I2 + 2 K∑ k=MK ( ( C4MK + C5M 2 K ) Cpαk−MK + C6CδMKβk−MK + C7αK ) I3 + 2 K∑ k=MK ( C1 αk βk + C2K0βk−K0 ) E ‖ωk − ω∗k‖2 I4 . ( 78 ) where the order of I1 , I2 and I4 have already been given by ( 45 ) , ( 46 ) and ( 50 ) respectively . We bound I3 as I3 = ( C4MK + C5M 2 K ) Cp K∑ k=MK αk + C6CδMK K∑ k=MK βk + C7αK K∑ k=MK 1 ≤ ( C4MK + C5M 2 K ) Cpc1 K1−σ1 1− σ1 + C6CδMKc2 K1−σ2 1− σ2 + C7c1K ( 1 +K ) −σ1 = O ( ( K20 log 2K ) K1−σ1 ) +O ( ( K0 logK ) K 1−σ2 ) , ( 79 ) where the last inequality follows from the integration rule ∑b k=a k −σ ≤ b 1−σ 1−σ , and the last equality is due to O ( MK ) = O ( K0mK ) = O ( K0 logK ) . Collecting the bounds of I1 , I2 , I3 and I4 , and dividing both sides of ( 78 ) by K −MK + 1 yield 2λ 1 K −MK + 1 K∑ k=MK E ‖ωk − ω∗k‖ 2 2 ≤ √ O ( K2 ( σ2−σ1 ) +1 ) +O ( K0K−σ1+1 ) +O ( K20K1−2σ2 ) K −MK + 1 √√√√ K∑ k=MK E ‖ωk − ω∗k‖ 2 2 +O ( 1 K1−σ2 ) +O ( K20 log 2K Kσ1 ) +O ( K0 logK Kσ2 ) . ( 80 ) Similar to the derivation of ( 52 ) , ( 80 ) implies 1 K −MK + 1 K∑ k=MK E ‖ωk − ω∗k‖ 2 2 = O ( 1 K1−σ2 ) +O ( 1 K2 ( σ1−σ2 ) ) +O ( K20 K2σ2 ) +O ( K20 log 2K Kσ1 ) +O ( K0 logK Kσ2 ) . Similar to ( 53 ) , we have 1 K K∑ k=1 E ‖ωk − ω∗k‖22 = O ( K0 logK K ) +O ( 1 K −MK + 1 K∑ k=MK E ‖ωk − ω∗k‖ 2 2 ) = O ( 1 K −MK + 1 K∑ k=MK E ‖ωk − ω∗k‖ 2 2 ) ( 81 ) which completes the proof . B.4 PROOF OF THEOREM 4 Given the definition in section B.2 , we now give the convergence proof of actor update in Algorithm 1 with linear value function approximation and Markovian sampling method . By following the derivation of ( 55 ) , we have E [ J ( θk+1 ) ] ≥ E [ J ( θk ) ] + αk E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ωk−τk ) − δ̂ ( x ( k ) , ω∗k ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I1 + αk E 〈 ∇J ( θk ) , δ̂ ( x ( k ) , ω∗k ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I2 −LJ 2 C2pα 2 k. ( 82 ) The item I1 can be bounded by following ( 56 ) as I1 ≥ −2Cψ E [ ‖∇J ( θk ) ‖2 ( CδK0βk−1 + ‖ωk − ω∗k‖2 ) ] . ( 83 ) Next we consider I2 . We first decompose it as I2 = E 〈 ∇J ( θk ) , δ̂ ( x ( k ) , ω∗k ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 I ( 1 ) 2 + E 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 I ( 2 ) 2 +E ‖∇J ( θk ) ‖22 . 
( 84 ) For some m ∈ N+ , define M : = ( K0 + 1 ) m + K0 . Following Lemma 5 , for some dm ≤ M and positive constants D2 , D3 , D4 , D5 , I ( 1 ) 2 can be bounded as I ( 1 ) 2 = E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −D2 E ‖θk−τk − θk−dm‖2 −D3 E ‖θk − θk−dm‖2 −D4 k−τk∑ i=k−dm E ‖θi − θk−dm‖2 −D5κρm−1 − 2CψLV fa − 2Cψ sp E ‖∇J ( θk ) ‖2 ≥ −D2 ( dm − τk ) Cpαk−dm −D3dmCpαk−dm −D4 ( dm − τk ) 2Cpαk−dm −D5κρm−1 − 2CψLV fa − 2Cψ sp E ‖∇J ( θk ) ‖2 , ( 85 ) where the derivation of the last inequality is similar to that of ( 76 ) . By setting m = mK in ( 85 ) , and following the fact that dmK ≤MK and τk ≥ 0 , we have I ( 1 ) 2 ≥ −D2MKCpαk−MK −D3MKCpαk−MK −D4M2KCpαk−MK −D5κρmK−1 − 2CψLV fa − 2Cψ sp E ‖∇J ( θ ) ‖2 = − ( ( D2 +D3 ) CpMK +D4CpM 2 K ) αk−MK −D5κρmK−1 − 2CψLV fa − 2Cψ sp E ‖∇J ( θk ) ‖2 ≥ − ( ( D2 +D3 ) CpMK +D4CpM 2 K ) αk−MK −D5αK − 2CψLV fa − 2Cψ sp E ‖∇J ( θk ) ‖2 , ( 86 ) where the last inequality is due to the definition of mK . Following Lemma 6 , for some positive constants D6 , D7 , D8 and D9 , we bound I ( 2 ) 2 as I ( 2 ) 2 = E 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 ≥ −D6 E ‖θk−τk − θk−dm‖2 −D7 E ‖θk − θk−dm‖2 −D8 dm∑ i=τk E ‖θk−i − θk−dm‖2 −D9κρm−1 − 8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 . Similar to the derivation of ( 86 ) , we have I ( 2 ) 2 ≥ − ( D6 +D7 +D8MK ) CpMKαk−MK −D9αK − 8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 . ( 87 ) Collecting the lower bounds of I ( 1 ) 2 and I ( 2 ) 2 yields I2 ≥ −2CψLV fa − 2Cψ ( sp + 4rmax ( 1− γ ) ) E ‖∇J ( θk ) ‖2 + E ‖∇J ( θk ) ‖22 −DKαk−MK − ( D5 +D9 ) αK , ( 88 ) where we define DK : = ( D4 +D8 ) CpM2K + ( D2 +D3 +D6 +D7 ) CpMK for brevity . Substituting ( 83 ) and ( 88 ) into ( 82 ) yields E [ J ( θk+1 ) ] ≥ E [ J ( θk ) ] − 2αkCψ E [ ‖∇J ( θk ) ‖2 ( sp + 4rmax ( 1− γ ) + CδK0βk−1 + ‖ωk − ω∗k‖2 ) ] − αk ( DKαk−MK + ( D5 +D9 ) αK ) − 2CψLV faαk + αk E ‖∇J ( θk ) ‖22 − LJ 2 C2pα 2 k. Similar to the derivation of ( 67 ) , the last inequality implies E [ J ( θk+1 ) ] ≥ E [ J ( θk ) ] − 4αkCψ √ E ‖∇J ( θk ) ‖22 √ C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) − αk ( DKαk−MK + ( D5 +D9 ) αK ) − 2CψLV faαk + αk E ‖∇J ( θk ) ‖22 − LJ 2 C2pα 2 k. Rearranging and dividing both sides by αk yield E ‖∇J ( θk ) ‖22 ≤ 1 αk ( E [ J ( θk+1 ) ] − E [ J ( θk ) ] ) +DKαk−MK + ( D5 +D9 ) αK + LJ 2 C2pαk + 4Cψ √ E ‖∇J ( θk ) ‖22 √ C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) + 2CψLV fa . Taking summation gives K∑ k=MK E ‖∇J ( θk ) ‖22 ≤ K∑ k=MK 1 αk ( E [ J ( θk+1 ) ] − E [ J ( θk ) ] ) I3 + K∑ k=MK ( DKαk−MK + LJ 2 C2pαk + ( D5 +D9 ) αK ) I4 + 4Cψ K∑ k=MK √ E ‖∇J ( θk ) ‖22 √ C2δK 2 0β 2 k−1 + E ‖ωk − ω∗k‖22 +O ( 2sp ) I5 + 2CψLV ( K −MK + 1 ) fa . ( 89 ) in which the upper bounds of I3 and I5 have already been given by ( 69 ) and ( 71 ) respectively . We bound I4 as I4 = K∑ k=MK ( DKαk−MK + LJ 2 C2pαk + ( D5 +D9 ) αK ) ≤ K∑ k=MK ( DKαk−MK + LJ 2 C2pαk−MK + ( D5 +D9 ) αK ) = ( DK + LJ 2 C2p ) K∑ k=MK αk−MK + ( D5 +D9 ) ( K −MK + 1 ) αK = ( DK + LJ 2 C2p ) K−MK∑ k=0 αk + ( D5 +D9 ) ( K −MK + 1 ) αK ≤ ( DK + LJ 2 C2p ) c1 1− σ1 K1−σ1 + c1 ( D5 +D9 ) ( K + 1 ) 1−σ1 = O ( ( K20 log 2K ) K1−σ1 ) ( 90 ) where the last inequality uses ∑b k=a k −σ ≤ b 1−σ 1−σ , and the last equality is due to the fact that O ( DK ) = O ( M2K +MK ) = O ( ( K0mK ) 2 +K0mK ) = O ( K20 log 2K ) . 
Substituting the upper bounds of I3 , I4 and I5 into ( 89 ) , and dividing both sides by K−MK + 1 give 1 K −MK + 1 K∑ k=MK E ‖∇J ( θk ) ‖22 ≤ 4Cψ K −MK + 1 √√√√ K∑ k=MK E ‖∇J ( θk ) ‖22 √√√√O ( K20K1−2σ2 ) + K∑ k=MK E ‖ωk − ω∗k‖22 +O ( K 2sp ) +O ( 1 K1−σ1 ) +O ( K20 log 2K Kσ1 ) +O ( fa ) . ( 91 ) Following the similar steps of those in ( 73 ) , ( 91 ) essentially implies 1 K −MK + 1 K∑ k=MK E ‖∇J ( θk ) ‖22 = O ( 1 K1−σ1 ) +O ( K20 log 2K Kσ1 ) +O ( K20 K2σ2 ) +O ( 1 K −MK + 1 K∑ k=MK E ‖ωk − ω∗θk‖ 2 2 ) +O ( app ) . Similar to ( 74 ) , we have 1 K K∑ k=1 E ‖∇J ( θk ) ‖22 = O ( K0 logK K ) +O ( 1 K −MK + 1 K∑ k=MK E ‖∇J ( θk ) ‖22 ) = O ( 1 K −MK + 1 K∑ k=MK E ‖∇J ( θk ) ‖22 ) which completes the proof . C SUPPORTING LEMMAS C.1 SUPPORTING LEMMAS FOR THEOREM 3 Lemma 4 . For any m ≥ 1 and k ≥ ( K0 + 1 ) m+K0 + 1 , we have E 〈 ωk − ω∗θk , g ( x ( k ) , ωk ) − g ( θk , ωk ) 〉 ≤ C4 E ‖θk − θk−dm‖2 + C5 dm∑ i=τk E ‖θk−i − θk−dm‖2 + C6 E ‖ωk − ωk−dm‖2 + C7κρm−1 , where dm ≤ ( K0 + 1 ) m + K0 , and C4 : = 2CδLω + 4RωCδ|A|Lπ ( 1 + logρ κ−1 + ( 1 − ρ ) −1 ) , C5 : = 4RωCδ|A|Lπ and C6 : = 4 ( 1 + γ ) Rω + 2Cδ , C7 : = 8RωCδ . Proof . Consider the collection of random samples { x ( k−K0−1 ) , x ( k−K0 ) , ... , x ( k ) } . Suppose x ( k ) is sampled by worker n , then due to Assumption 1 , { x ( k−K0−1 ) , x ( k−K0 ) , ... , x ( k−1 ) } will contain at least another sample drawn by worker n. Therefore , { x ( k− ( K0+1 ) m ) , x ( k− ( K0+1 ) m+1 ) , ... , x ( k−1 ) } will contain at least m samples from worker n. Consider the Markov chain formed by m+ 1 samples in { x ( k− ( K0+1 ) m ) , x ( k− ( K0+1 ) m+1 ) , ... , x ( k ) } : st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−1−−−−−−→ at−m+1 · · · st−1 θk−d1−−−−→ at−1 P̃−→ st θk−d0−−−−→ at P̃−→ st+1 , where ( st , at , st+1 ) = ( s ( k ) , a ( k ) , s′ ( k ) ) , and { dj } m j=0 is some increasing sequence with d0 : = τk . Suppose θk−dm was used to do the kmth update , then we have xt−m = x ( km ) . Following Assumption 1 , we have τkm = km − ( k − dm ) ≤ K0 . Since x ( km ) is in { x ( k− ( K0+1 ) m ) , ... , x ( k ) } , we have km ≥ k − ( K0 + 1 ) m. Combining these two inequalities , we have dm ≤ ( K0 + 1 ) m+K0 . ( 92 ) Given ( st−m , at−m , st−m+1 ) and θk−dm , we construct an auxiliary Markov chain as that in Lemma 2 : st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−−−−→ ãt−m+1 · · · s̃t−1 θk−dm−−−−→ ãt−1 P̃−→ s̃t θk−dm−−−−→ ãt P̃−→ s̃t+1 . For brevity , we define ∆1 ( x , θ , ω ) : = 〈ω − ω∗θ , g ( x , ω ) − g ( θ , ω ) 〉 . Throughout this proof , we use θ , θ′ , ω , ω′ , x and x̃ as shorthand notations of θk , θk−dm , ωk , ωk−dm , xt and x̃t respectively . First we decompose ∆1 ( x , θ , ω ) as ∆1 ( x , θ , ω ) = ∆1 ( x , θ , ω ) −∆1 ( x , θ′ , ω ) I1 + ∆1 ( x , θ ′ , ω ) −∆1 ( x , θ′ , ω′ ) I2 + ∆1 ( x , θ ′ , ω′ ) −∆1 ( x̃ , θ′ , ω′ ) I3 + ∆1 ( x̃ , θ ′ , ω′ ) I4 . ( 93 ) We bound I1 in ( 93 ) as ∆1 ( x , θ , ω ) −∆1 ( x , θ′ , ω ) = 〈ω − ω∗θ , g ( x , ω ) − g ( θ , ω ) 〉 − 〈ω − ω∗θ′ , g ( x , ω ) − g ( θ′ , ω ) 〉 ≤ |〈ω − ω∗θ , g ( x , ω ) − g ( θ , ω ) 〉 − 〈ω − ω∗θ′ , g ( x , ω ) − g ( θ , ω ) 〉| + |〈ω − ω∗θ′ , g ( x , ω ) − g ( θ , ω ) 〉 − 〈ω − ω∗θ′ , g ( x , ω ) − g ( θ′ , ω ) 〉| . ( 94 ) For the first term in ( 94 ) , we have |〈ω − ω∗θ , g ( x , ω ) − g ( θ , ω ) 〉 − 〈ω − ω∗θ′ , g ( x , ω ) − g ( θ , ω ) 〉| = |〈ω∗θ − ω∗θ′ , g ( x , ω ) − g ( θ , ω ) 〉| ≤ ‖ω∗θ − ω∗θ′‖2‖g ( x , ω ) − g ( θ , ω ) ‖ ≤ 2Cδ‖ω∗θ − ω∗θ′‖2 ≤ 2CδLω‖θ − θ′‖2 , where the last inequality is due to Proposition 2 . 
We use x ∼ θ′ as shorthand notations to represent that s ∼ µθ′ , a ∼ πθ′ , s′ ∼ P̃ . For the second term in ( 94 ) , we have |〈ω − ω∗θ′ , g ( x , ω ) − g ( θ , ω ) 〉 − 〈ω − ω∗θ′ , g ( x , ω ) − g ( θ′ , ω ) 〉| = |〈ω − ω∗θ′ , g ( θ′ , ω ) − g ( θ , ω ) 〉| ≤ ‖ω − ω∗θ′‖2‖g ( θ′ , ω ) − g ( θ , ω ) ‖2 ≤ 2Rω‖g ( θ′ , ω ) − g ( θ , ω ) ‖2 = 2Rω ∥∥∥∥ Ex∼θ′ [ g ( x , ω ) ] − Ex∼θ [ g ( x , ω ) ] ∥∥∥∥ 2 ≤ 2Rω sup x ‖g ( x , ω ) ‖2‖µθ′ ⊗ πθ′ ⊗ P̃ − µθ ⊗ πθ ⊗ P̃‖TV ≤ 2RωCδ‖µθ′ ⊗ πθ′ ⊗ P̃ − µθ ⊗ πθ ⊗ P̃‖TV = 4RωCδdTV ( µθ′ ⊗ πθ′ ⊗ P̃ , µθ ⊗ πθ ⊗ P̃ ) ≤ 4RωCδ|A|Lπ ( 1 + logρ κ−1 + ( 1− ρ ) −1 ) ‖θ − θ′‖2 , where the third inequality follows the definition of TV norm , the second last inequality follows ( 32 ) , and the last inequality follows Lemma A.1 . in [ 17 ] . Collecting the upper bounds of the two terms in ( 94 ) yields I1 ≤ [ 2CδLω + 4RωCδ|A|Lπ ( 1 + logρ κ−1 + ( 1− ρ ) −1 ) ] ‖θ − θ′‖2 . Next we bound E [ I2 ] in ( 93 ) as E [ I2 ] = E [ ∆1 ( x , θ′ , ω ) −∆1 ( x , θ′ , ω′ ) ] = E 〈ω − ω∗θ′ , g ( x , ω ) − g ( θ′ , ω ) 〉 − 〈ω′ − ω∗θ′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉 ≤ E |〈ω − ω∗θ′ , g ( x , ω ) − g ( θ′ , ω ) 〉 − 〈ω − ω∗θ′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉| + E |〈ω − ω∗θ′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉 − 〈ω′ − ω∗θ′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉| . ( 95 ) We bound the first term in ( 95 ) as E |〈ω − ω∗θ′ , g ( x , ω ) − g ( θ′ , ω ) 〉 − 〈ω − ω∗θ′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉| = E |〈ω − ω∗θ′ , g ( x , ω ) − g ( x , ω′ ) + g ( θ′ , ω′ ) − g ( θ′ , ω ) 〉| ≤ 2Rω ( E ‖g ( x , ω ) − g ( x , ω′ ) ‖2 + E ‖g ( θ′ , ω′ ) − g ( θ′ , ω ) ‖2 ) ≤ 2Rω ( E ‖g ( x , ω ) − g ( x , ω′ ) ‖2 + E ∥∥∥∥ Ex∼θ′ [ g ( x , ω′ ) ] − Ex∼θ′ [ g ( x , ω ) ] ∥∥∥∥ 2 ) = 2Rω ( E ‖ ( γφ ( s′ ) − φ ( s ) ) > ( ω − ω′ ) ‖2 + E ∥∥∥∥ Ex∼θ′ [ ( γφ ( s′ ) − φ ( s ) ) > ] ( ω′ − ω ) ∥∥∥∥ 2 ) ≤ 2Rω ( ( 1 + γ ) E ‖ω − ω′‖2 + ( 1 + γ ) E ‖ω − ω′‖2 ) = 4Rω ( 1 + γ ) E ‖ω − ω′‖2 . We bound the second term in ( 95 ) as E |〈ω − ω∗θ′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉 − 〈ω′ − ω∗θ′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉| = E |〈ω − ω′ , g ( x , ω′ ) − g ( θ′ , ω′ ) 〉| ≤ 2Cδ E ‖ω − ω′‖2 . Collecting the upper bounds of the two terms in ( 95 ) yields E [ I2 ] ≤ ( 4 ( 1 + γ ) Rω + 2Cδ ) E ‖ω − ω′‖2 . We first bound I3 as E [ I3|θ′ , ω′ , st−m+1 ] = E [ ∆1 ( x , θ′ , ω′ ) −∆1 ( x̃ , θ′ , ω′ ) |θ′ , ω′ , st−m+1 ] ≤ |E [ ∆1 ( x , θ′ , ω′ ) |θ′ , ω′ , st−m+1 ] − E [ ∆1 ( x̃ , θ′ , ω′ ) |θ′ , ω′ , st−m+1 ] | ≤ sup x |∆1 ( x , θ′ , ω′ ) | ‖P ( x ∈ ·|θ′ , ω′ , st−m+1 ) − P ( x̃ ∈ ·|θ′ , ω′ , st−m+1 ) ‖TV ≤ 8RωCδdTV ( P ( x ∈ ·|θ′ , st−m+1 ) , P ( x̃ ∈ ·|θ′ , st−m+1 ) ) , ( 96 ) where the second last inequality follows the definition of TV norm , and the last inequality follows the fact that |∆1 ( x , θ′ , ω′ ) | ≤ ‖ω′ − ω∗θ′‖2‖g ( x , ω′ ) − g ( θ′ , ω′ ) ‖2 ≤ 4RωCδ . By following ( 22 ) in Lemma 2 , we have dTV ( P ( x ∈ ·|θ′ , st−m+1 ) , P ( x̃ ∈ ·|θ′ , st−m+1 ) ) ≤ 1 2 |A|Lπ dm∑ i=τk E [ ‖θk−i − θk−dm‖2| θ′ , st−m+1 ] . Substituting the last inequality into ( 96 ) , then taking total expectation on both sides yield E [ I3 ] ≤ 4RωCδ|A|Lπ dm∑ i=τk E ‖θk−i − θk−dm‖2 . Next we bound I4 . Define x : = ( s , a , s′ ) where s ∼ µθ′ , a ∼ πθ′ and s′ ∼ P̃ . It is immediate that E [ ∆1 ( x , θ′ , ω′ ) |θ′ , ω′ , st−m+1 ] = 〈ω′ − ω∗θ′ , E [ g ( x , ω′ ) |θ′ , ω′ , st−m+1 ] − g ( θ′ , ω′ ) 〉 = 〈ω′ − ω∗θ′ , g ( θ′ , ω′ ) − g ( θ′ , ω′ ) 〉 = 0 . 
( 97 ) Then we have E [ I4|θ′ , ω′ , st−m+1 ] = E [ ∆1 ( x̃ , θ′ , ω′ ) −∆1 ( x , θ′ , ω′ ) |θ′ , ω′ , st−m+1 ] ≤ |E [ ∆1 ( x̃ , θ′ , ω′ ) |θ′ , ω′ , st−m+1 ] − E [ ∆1 ( x , θ′ , ω′ ) |θ′ , ω′ , st−m+1 ] | ≤ sup x |∆1 ( x , θ′ , ω′ ) | ‖P ( x̃ ∈ ·|θ′ , st−m+1 ) − P ( x ∈ ·|θ′ , st−m+1 ) ‖TV ≤ 8RωCδdTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , P ( x ∈ ·|θ′ , st−m+1 ) ) = 8RωCδdTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) , ( 98 ) where the second inequality follows the definition of TV norm , and the third inequality follows ( 97 ) . The auxiliary Markov chain with policy πθ′ starts from initial state st−m+1 , and s̃t is the ( m− 1 ) th state on the chain . Following Lemma 1 , we have : dTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) = dTV ( P ( ( s̃t , ãt , s̃t+1 ) ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) ≤ κρm−1 . Substituting the last inequality into ( 98 ) and taking total expectation on both sides yield E [ I4 ] ≤ 8RωCδκρm−1 . Taking total expectation on ( 93 ) and collecting bounds of I1 , I2 , I3 , I4 yield E [ ∆1 ( x , θ , ω ) ] ≤ C4 E ‖θk − θk−dm‖2 + C5 dm∑ i=τk E ‖θk−i − θk−dm‖2 + C6 E ‖ωk − ωk−dm‖2 + C7κρm−1 , where C4 : = 2CδLω + 4RωCδ|A|Lπ ( 1 + logρ κ−1 + ( 1 − ρ ) −1 ) , C5 : = 4RωCδ|A|Lπ , C6 : = 4 ( 1 + γ ) Rω + 2Cδ and C7 : = 8RωCδ . C.2 SUPPORTING LEMMAS FOR THEOREM 4 Lemma 5 . For any m ≥ 1 and k ≥ ( K0 + 1 ) m+K0 + 1 , we have E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −D2 E ‖θk−τk − θk−dm‖2 −D3 E ‖θk − θk−dm‖2 −D4 dm∑ i=τk E ‖θk−i − θk−dm‖2 −D5κρm−1 − 2CψLV fa − 2Cψ sp E ‖∇J ( θ ) ‖2 , where D2 : = 2LV LψCδ , D3 : = ( 2CδCψLJ + LV Cψ ( Lω + LV ) ( 1 + γ ) + 2CψLJ app ) , D4 : = 2LV CψCδ|A|Lπ and D5 : = 4LV CψCδ . Proof . For the worker that contributes to the kth update , we construct its Markov chain : st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−1−−−−−−→ at−m+1 · · · st−1 θk−d1−−−−→ at−1 P̃−→ st θk−d0−−−−→ at P̃−→ st+1 , where ( st , at , st+1 ) = ( s ( k ) , a ( k ) , s′ ( k ) ) , and { dj } m j=0 is some increasing sequence with d0 : = τk . By ( 92 ) in Lemma 4 , we have dm ≤ ( K0 + 1 ) m+K0 . Given ( st−m , at−m , st−m+1 ) and θk−dm , we construct an auxiliary Markov chain : st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−−−−→ ãt−m+1 · · · s̃t−1 θk−dm−−−−→ ãt−1 P̃−→ s̃t θk−dm−−−−→ ãt P̃−→ s̃t+1 . First we have〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 = 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ( ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ) 〉 + 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−dm ( s ( k ) , a ( k ) ) 〉 . 
( 99 ) We first bound the fist term in ( 99 ) as〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ( ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ) 〉 ≥ −‖J ( θk ) ‖2|δ̂ ( x ( k ) , ω∗k ) − δ ( x ( k ) , θk ) |‖ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ‖2 ≥ −‖J ( θk ) ‖2 ( |δ̂ ( x ( k ) , ω∗k ) |+ |δ ( x ( k ) , θk ) | ) ‖ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ‖2 ≥ −LV ( |δ̂ ( x ( k ) , ω∗k ) |+ |δ ( x ( k ) , θk ) | ) ‖ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ‖2 ≥ −2LV Cδ‖ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ‖2 ≥ −2LV LψCδ‖θk−τk − θk−dm‖2 , ( 100 ) where the last inequality follows Assumption 3 and second last inequality follows |δ̂ ( x , ω∗θ ) | ≤ |r ( x ) |+ γ‖φ ( s′ ) ‖2‖ω∗θ‖2 + ‖φ ( s ) ‖2‖ω∗θ‖2 ≤ rmax + ( 1 + γ ) Rω ≤ Cδ , |δ ( x , θ ) | ≤ |r ( x ) |+ γ|Vπθ ( s′ ) |+ |Vπθ ( s ) | ≤ rmax + ( 1 + γ ) rmax 1− γ ≤ Cδ . Substituting ( 100 ) into ( 99 ) gives〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −2LV LψCδ‖θk−τk − θk−dm‖2 + 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−dm ( s ( k ) , a ( k ) ) 〉 . ( 101 ) Then we start to bound the second term in ( 101 ) . For brevity , we define ∆2 ( x , θ ) : = 〈 ∇J ( θ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθk−dm ( s , a ) 〉 . In the following proof , we use θ , θ′ , ω∗θ , ω ∗ θ′ , x and x̃ as shorthand notations for θk , θk−dm , ω ∗ k , ω∗k−dm , xt and x̃t respectively . We also define x : = ( s , a , s ′ ) , where s ∼ µθ′ , a ∼ πθ′ and s′ ∼ P̃ . We decompose the second term in ( 101 ) as ∆2 ( x , θ ) = ∆2 ( x , θ ) −∆2 ( x , θ′ ) I1 + ∆2 ( x , θ ′ ) −∆2 ( x̃ , θ′ ) I2 + ∆2 ( x̃ , θ ′ ) −∆2 ( x , θ′ ) I3 + ∆2 ( x , θ ′ ) I4 . We bound the term I1 as I1 = 〈 ∇J ( θ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 − 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ′ ) − δ ( x , θ′ ) ) ψθ′ ( s , a ) 〉 = 〈 ∇J ( θ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 − 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 + 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 − 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ′ ) − δ ( x , θ′ ) ) ψθ′ ( s , a ) 〉 . For the first term in I1 , we have〈 ∇J ( θ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 − 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 = 〈 ∇J ( θ ) −∇J ( θ′ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 ≥ −‖∇J ( θ ) −∇J ( θ′ ) ‖2‖δ̂ ( x , ω∗θ ) − δ ( x , θ ) ‖2‖ψθ′ ( s , a ) ‖2 ≥ −2CδCψ‖∇J ( θ ) −∇J ( θ′ ) ‖2 ≥ −2CδCψLJ‖θ − θ′‖2 , where the last inequality is due to the LJ -Lipschitz of policy gradient shown in Proposition 1 . For the second term in I1 , we have〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 − 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ′ ) − δ ( x , θ′ ) ) ψθ′ ( s , a ) 〉 = 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ ) − δ̂ ( x , ω∗θ′ ) + δ ( x , θ′ ) − δ ( x , θ ) ) ψθ′ ( s , a ) 〉 ≥ −LV Cψ ∣∣∣δ̂ ( x , ω∗θ ) − δ̂ ( x , ω∗θ′ ) + δ ( x , θ′ ) − δ ( x , θ ) ∣∣∣ ≥ −LV Cψ ∣∣γφ ( s′ ) > ( ω∗θ − ω∗θ′ ) + φ ( s ) > ( ω∗θ′ − ω∗θ ) + γVπθ′ ( s′ ) − γVπθ ( s′ ) + Vπθ ( s ) − Vπθ′ ( s ) ∣∣ ≥ −LV Cψ ( γ‖ω∗θ − ω∗θ′‖2 + ‖ω∗θ′ − ω∗θ‖2 + γ|Vπθ′ ( s ′ ) − Vπθ ( s′ ) |+ |Vπθ ( s ) − Vπθ′ ( s ) | ) ≥ −LV Cψ ( γLω‖θ − θ′‖2 + Lω‖θ − θ′‖2 + γLV ‖θ − θ′‖2 + LV ‖θ − θ′‖2 ) = −LV Cψ ( Lω + LV ) ( 1 + γ ) ‖θ − θ′‖2 , where the last inequality is due to the Lω-Lipschitz continuity of ω∗θ shown in Proposition 2 and LV -Lipschitz continuity of Vπθ ( s ) shown in Lemma 3 . 
Collecting the upper bounds of I1 yields I1 ≥ − ( 2CδCψLJ + LV Cψ ( Lω + LV ) ( 1 + γ ) ) ‖θ − θ′‖2 . First we bound I2 as E [ I2|θ′ , st−m+1 ] = E [ ∆2 ( x , θ′ ) −∆2 ( x̃ , θ′ ) |θ′ , st−m+1 ] ≥ − |E [ ∆2 ( x , θ′ ) | θ′ , st−m+1 ] − E [ ∆2 ( x̃ , θ′ ) | θ′ , st−m+1 ] | ≥ − sup x |∆2 ( x , θ′ ) | ‖P ( x ∈ ·|θ′ , st−m+1 ) − P ( x̃ ∈ ·|θ′ , st−m+1 ) ‖TV ≥ −4LV CψCδdTV ( P ( x ∈ ·|θ′ , st−m+1 ) , P ( x̃ ∈ ·|θ′ , st−m+1 ) ) ≥ −2LV CψCδ|A|Lπ dm∑ i=τk E [ ‖θk−i − θk−dm‖2| θ′ , st−m+1 ] , ( 102 ) where the second inequality is due to the definition of TV norm , the last inequality follows ( 22 ) in Lemma 2 , and the second last inequality follows the fact that |∆2 ( x , θ′ ) | ≤ ‖∇J ( θ′ ) ‖2|δ̂ ( x , ω∗θ′ ) − δ ( x , θ′ ) |‖ψθ′ ( s , a ) ‖2 ≤ 2LV CδCψ . ( 103 ) Taking total expectation on both sides of ( 102 ) yields E [ I2 ] ≥ −2LV CψCδ|A|Lπ dm∑ i=τk E ‖θk−i − θk−dm‖2 . Next we bound I3 as E [ I3|θ′ , st−m+1 ] = E [ ∆2 ( x̃ , θ′ ) −∆2 ( x , θ′ ) | θ′ , st−m+1 ] ≥ − |E [ ∆2 ( x̃ , θ′ ) | θ′ , st−m+1 ] − E [ ∆2 ( x , θ′ ) | θ′ , st−m+1 ] | ≥ − sup x |∆2 ( x , θ′ ) | ‖P ( x̃ ∈ ·|θ′ , st−m+1 ) − P ( x ∈ ·|θ′ , st−m+1 ) ‖TV ≥ −4LV CψCδdTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) , ( 104 ) where the second inequality is due to the definition of TV norm , and the last inequality follows ( 103 ) . The auxiliary Markov chain with policy πθ′ starts from initial state st−m+1 , and s̃t is the ( m− 1 ) th state on the chain . Following Lemma 1 , we have : dTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) =dTV ( P ( ( s̃t , ãt , s̃t+1 ) ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) ≤ κρm−1 . Substituting the last inequality into ( 104 ) and taking total expectation on both sides yield E [ I3 ] ≥ −4LV CψCδκρm−1 We bound I4 as E [ I4|θ′ ] = E [ 〈 ∇J ( θ′ ) , ( δ̂ ( x , ω∗θ′ ) − δ ( x , θ′ ) ) ψθ′ ( s , a ) 〉∣∣∣ θ′ ] ≥ −Cψ‖∇J ( θ′ ) ‖2 E [ ∣∣∣δ̂ ( x , ω∗θ′ ) − δ ( x , θ′ ) ∣∣∣∣∣∣ θ′ ] = −Cψ‖∇J ( θ′ ) ‖2 E [ ∣∣γ ( φ ( s′ ) > ω∗θ′ − Vπθ′ ( s′ ) ) + Vπθ′ ( s ) − φ ( s ) > ω∗θ′ ∣∣∣∣ θ′ ] ≥ −Cψ‖∇J ( θ′ ) ‖2 ( γ E [ |φ ( s′ ) > ω∗θ′ − Vπθ′ ( s ′ ) | ∣∣ θ′ ] + E [ |Vπθ′ ( s ) − φ ( s ) > ω∗θ′ |∣∣ θ′ ] ) ≥ −Cψ‖∇J ( θ′ ) ‖2 ( γ √ E [ |φ ( s′ ) > ω∗θ′ − Vπθ′ ( s ′ ) |2 ∣∣ θ′ ] +√E [ |Vπθ′ ( s ) − φ ( s ) > ω∗θ′ |2∣∣ θ′ ] ) = −Cψ‖∇J ( θ′ ) ‖2 ( γ √ E s′∼µθ′ |φ ( s′ ) > ω∗θ′ − Vπθ′ ( s ′ ) |2 + √ E s∼µθ′ |Vπθ′ ( s ) − φ ( s ) > ω∗θ′ |2 ) ≥ −2Cψ‖∇J ( θ′ ) ‖2 app , where the second last inequality follows Jensen ’ s inequality . The last inequality further implies E [ I4 ] ≥ −2Cψ E ‖∇J ( θ′ ) −∇J ( θ ) +∇J ( θ ) ‖2 app ≥ −2Cψ app E ‖∇J ( θ′ ) −∇J ( θ ) ‖2 − 2Cψ app E ‖∇J ( θ ) ‖2 ≥ −2Cψ app E ‖∇J ( θ′ ) −∇J ( θ ) ‖2 − 2Cψ fa E ‖∇J ( θ ) ‖2 − 2CψLV sp ≥ −2CψLJ app E ‖θ − θ′‖2 − 2Cψ fa E ‖∇J ( θ ) ‖2 − 2CψLV sp , where the last inequality follows Proposition 1 . Taking total expectation on both sides of ( 101 ) , and collecting lower bounds of I1 , I2 , I3 and I4 yield E 〈 ∇J ( θk ) , ( δ̂ ( x ( k ) , ω ∗ k ) − δ ( x ( k ) , θk ) ) ψθk−τk ( s ( k ) , a ( k ) ) 〉 ≥ −D2 E ‖θk−τk − θk−dm‖2 −D3 E ‖θk − θk−dm‖2 −D4 dm∑ i=τk E ‖θk−i − θk−dm‖2 −D5κρm−1 − 2CψLV fa − 2Cψ sp E ‖∇J ( θk ) ‖2 , where D2 : = 2LV LψCδ , D3 : = ( 2CδCψLJ + LV Cψ ( Lω + LV ) ( 1 + γ ) + 2CψLJ app ) , D4 : = 2LV CψCδ|A|Lπ and D5 : = 4LV CψCδ . Lemma 6 . 
For any m ≥ 1 and k ≥ ( K0 + 1 ) m+K0 + 1 , we have E 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 ≥ −D6 E ‖θk−τk − θk−dm‖2 −D7 E ‖θk − θk−dm‖2−D8 dm∑ i=τk E ‖θk−i − θk−dm‖2−D9κρm−1 − 8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 , where D6 : = LV CδLψ , D7 : = CpLJ + ( 1 + γ ) L2V Cψ + 2LV LJ + 8CψrmaxLJ ( 1 − γ ) , D8 : = LV ( Cp + LV ) |A|Lπ , D9 : = 2LV ( Cp + LV ) . Proof . For the worker that contributes to the kth update , we construct its Markov chain : st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−1−−−−−−→ at−m+1 · · · st−1 θk−d1−−−−→ at−1 P̃−→ st θk−d0−−−−→ at P̃−→ st+1 , where ( st , at , st+1 ) = ( s ( k ) , a ( k ) , s′ ( k ) ) , and { dj } m j=0 is some increasing sequence with d0 : = τk . By ( 92 ) in Lemma 4 , we have dm ≤ ( K0 + 1 ) m+K0 . Given ( st−m , at−m , st−m+1 ) and θk−dm , we construct an auxiliary Markov chain : st−m θk−dm−−−−→ at−m P̃−→ st−m+1 θk−dm−−−−→ ãt−m+1 · · · s̃t−1 θk−dm−−−−→ ãt−1 P̃−→ s̃t θk−dm−−−−→ ãt P̃−→ s̃t+1 . First we have 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 = 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ( ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ) 〉 + 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−dm ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 . ( 105 ) We bound the first term in ( 105 ) as〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ( ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ) 〉 ≥ −‖∇J ( θk ) ‖2 ‖δ ( x ( k ) , θk ) ‖2‖ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ‖2 ≥ −LV ‖δ ( x ( k ) , θk ) ‖2‖ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ‖2 ≥ −LV Cδ‖ψθk−τk ( s ( k ) , a ( k ) ) − ψθk−dm ( s ( k ) , a ( k ) ) ‖2 ≥ −LV CδLψ‖θk−τk − θk−dm‖2 , ( 106 ) where the last inequality follows Assumption 3 , and the second last inequality follows the fact that |δ ( x , θ ) | ≤ |r ( x ) |+ γ|Vπθ ( s′ ) |+ |Vπθ ( s ) | ≤ rmax + ( 1 + γ ) rmax 1− γ ≤ Cδ . Substituting ( 106 ) into ( 105 ) gives〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 ≥ −LV CδLψ‖θk−τk − θk−dm‖2 + 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−dm ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 . ( 107 ) Then we start to bound the second term in ( 107 ) . For brevity , we define ∆3 ( x , θ ) : = 〈 ∇J ( θ ) , δ ( x , θ ) ψθk−dm ( s , a ) −∇J ( θ ) 〉 . Throughout the following proof , we use θ , θ′ , x and x̃ as shorthand notations of θk , θk−dm , xt and x̃t respectively . We decompose ∆3 ( x , θ ) as ∆3 ( x , θ ) = ∆3 ( x , θ ) −∆3 ( x , θ′ ) I1 + ∆3 ( x , θ ′ ) −∆3 ( x̃ , θ′ ) I2 + ∆3 ( x̃ , θ ′ ) I3 . We first bound I1 as |I1| = |∆3 ( x , θ ) −∆3 ( x , θ′ ) | = ∣∣〈∇J ( θ ) , δ ( x , θ ) ψθ′ ( s , a ) 〉 − ‖∇J ( θ ) ‖22 − 〈∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉+ ‖∇J ( θ′ ) ‖22∣∣ ≤ |〈∇J ( θ ) , δ ( x , θ ) ψθ′ ( s , a ) 〉 − 〈∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉|+ ∣∣‖∇J ( θ′ ) ‖22 − ‖∇J ( θ ) ‖22∣∣ ≤ |〈∇J ( θ ) , δ ( x , θ ) ψθ′ ( s , a ) 〉 − 〈∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉|+ ‖∇J ( θ′ ) +∇J ( θ ) ‖2‖∇J ( θ′ ) −∇J ( θ ) ‖2 ≤ |〈∇J ( θ ) , δ ( x , θ ) ψθ′ ( s , a ) 〉 − 〈∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉|+ 2LV LJ‖θ − θ′‖2 , ( 108 ) where the last equality is due to LV -Lipschitz of value function and LJ -Lipschitz of policy gradient . 
We bound the first term in ( 108 ) as |〈∇J ( θ ) , δ ( x , θ ) ψθ′ ( s , a ) 〉 − 〈∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉| ≤ |〈∇J ( θ ) , δ ( x , θ ) ψθ′ ( s , a ) 〉 − 〈∇J ( θ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉| + |〈∇J ( θ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉 − 〈∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉| = |〈∇J ( θ ) , ( δ ( x , θ ) − δ ( x , θ′ ) ) ψθ′ ( s , a ) 〉|+ |〈∇J ( θ ) −∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) 〉| ≤ LV Cψ |δ ( x , θ ) − δ ( x , θ′ ) |+ Cp‖∇J ( θ ) −∇J ( θ′ ) ‖2 = LV Cψ ∣∣γ ( Vπθ ( s′ ) − Vπθ′ ( s′ ) ) + Vπθ′ ( s ) − Vπθ ( s ) ∣∣+ Cp‖∇J ( θ ) −∇J ( θ′ ) ‖2 ≤ LV Cψ ( γ ∣∣Vπθ ( s′ ) − Vπθ′ ( s′ ) ∣∣+ ∣∣Vπθ′ ( s ) − Vπθ ( s ) ∣∣ ) + Cp‖∇J ( θ ) −∇J ( θ′ ) ‖2 ≤ LV Cψ ( γLV ‖θ − θ′‖2 + LV ‖θ′ − θ‖ ) + CpLJ‖θ − θ′‖2 = ( CpLJ + ( 1 + γ ) L 2 V Cψ ) ‖θ − θ′‖2 . Substituting the above inequality into ( 108 ) gives the lower bound of I1 : I1 ≥ − ( CpLJ + ( 1 + γ ) L 2 V Cψ + 2LV LJ ) ‖θ − θ′‖2 . First we bound I2 as E [ I2|θ′ , st−m+1 ] = E [ ∆3 ( x , θ′ ) −∆3 ( x̃ , θ′ ) |θ′ , st−m+1 ] ≥ − |E [ ∆3 ( x , θ′ ) |θ′ , st−m+1 ] − E [ ∆3 ( x̃ , θ′ ) |θ′ , st−m+1 ] | ≥ − sup x |∆3 ( x , θ′ ) | ‖P ( x ∈ ·|θ′ , st−m+1 ) − P ( x̃ ∈ ·|θ′ , st−m+1 ) ‖TV ≥ −2LV ( Cp + LV ) dTV ( P ( x ∈ ·|θ′ , st−m+1 ) , P ( x̃ ∈ ·|θ′ , st−m+1 ) ) ≥ −LV ( Cp + LV ) |A|Lπ dm∑ i=τk E [ ‖θk−i − θk−dm‖2|θ′ , st−m+1 ] , ( 109 ) where the second inequality is due to the definition of TV norm , the last inequality is due to ( 22 ) in Lemma 2 , and thesecond last inequality follows the fact that |∆3 ( x , θ′ ) | ≤ ‖∇J ( θ ) ‖2 ( ‖δ ( x , θ ) ψθk−dm ( s , a ) ‖2 + ‖∇J ( θ ) ‖2 ) ≤ LV ( Cp + LV ) . ( 110 ) Taking total expectation on both sides of ( 109 ) yields E [ I2 ] ≥ −LV ( Cp + LV ) |A|Lπ dm∑ i=τk E ‖θk−i − θk−dm‖2 . Define x : = ( s , a , s′ ) , where s ∼ dθ′ , a ∼ πθ′ and s′ ∼ P̃ . Then we have E [ I3 ] = E [ ∆3 ( x̃ , θ′ ) −∆3 ( x , θ′ ) ] + E [ ∆3 ( x , θ′ ) ] . ( 111 ) We bound the first term in ( 111 ) as E [ ∆3 ( x̃ , θ′ ) −∆3 ( x , θ′ ) |θ′ , st−m+1 ] ≥ − |E [ ∆3 ( x̃ , θ′ ) |θ′ , st−m+1 ] − E [ ∆3 ( x , θ′ ) |θ′ , st−m+1 ] | ≥ − sup x |∆3 ( x , θ′ ) | ‖P ( x̃ ∈ ·|θ′ , st−m+1 ) − P ( x ∈ ·|θ′ , st−m+1 ) ‖TV ≥ −2LV ( Cp + LV ) dTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , P ( x ∈ ·|θ′ , st−m+1 ) ) = −2LV ( Cp + LV ) dTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , dθ′ ⊗ πθ′ ⊗ P̃ ) = −2LV ( Cp + LV ) dTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) ( 112 ) where the second inequality follows the definition of total variation norm , and the third inequality follows ( 110 ) . The last equality is due to the fact shown by [ 6 ] that µθ′ ( · ) = dθ′ ( · ) , where µθ′ is the stationary distribution of an artificial MDP with transition kernel P̃ ( ·|s , a ) and policy πθ′ . The auxiliary Markov chain with policy πθ′ starts from initial state st−m+1 , and s̃t is the ( m− 1 ) th state on the chain . Following Lemma 1 , we have : dTV ( P ( x̃ ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) = dTV ( P ( ( s̃t , ãt , s̃t+1 ) ∈ ·|θ′ , st−m+1 ) , µθ′ ⊗ πθ′ ⊗ P̃ ) ≤ κρm−1 . Substituting the last inequality into ( 112 ) and taking total expectation on both sides yield E [ ∆3 ( x̃ , θ′ ) −∆3 ( x , θ′ ) ] ≥ −2LV ( Cp + LV ) κρm−1 . Consider the second term in ( 111 ) . 
Note its form is similar to ( 59 ) , so by following the derivation of ( 63 ) , we directly have E [ ∆3 ( x , θ′ ) ] = E 〈∇J ( θ′ ) , δ ( x , θ′ ) ψθ′ ( s , a ) −∇J ( θ′ ) 〉 ≥ −8Cψrmax ( 1− γ ) E ‖∇J ( θ′ ) ‖2 , which further implies E [ ∆3 ( x , θ′ ) ] ≥ −8Cψrmax ( 1− γ ) E ‖∇J ( θ′ ) ‖2 ≥ −8Cψrmax ( 1− γ ) E ‖∇J ( θ′ ) −∇J ( θ ) ‖2 − 8Cψrmax ( 1− γ ) E ‖∇J ( θ ) ‖2 ≥ −8CψrmaxLJ ( 1− γ ) E ‖θ′ − θ‖2−8Cψrmax ( 1− γ ) E ‖∇J ( θ ) ‖2 where the last inequality follows from Proposition 1 . Collecting the lower bounds gives E [ I3 ] ≥ −2LV ( Cp + LV ) κρm−1 − 8Cψrmax ( 1− γ ) ( LJ E ‖θ′ − θ‖2 − E ‖∇J ( θ ) ‖2 ) . Taking total expectation on ∆3 ( x , θ ) and collecting lower bounds of I1 , I2 , I3 yield E [ ∆3 ( x , θ ) ] ≥ − ( CpLJ + ( 1 + γ ) L 2 V Cψ + 2LV LJ + 8CψrmaxLJ ( 1− γ ) ) E ‖θk − θk−dm‖2 − LV ( Cp + LV ) |A|Lπ dm∑ i=τk E ‖θk−i − θk−dm‖2 − 2LV ( Cp + LV ) κρm−1 − 8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 . Taking total expectation on ( 107 ) and substituting the above inequality into it yield E 〈 ∇J ( θk ) , δ ( x ( k ) , θk ) ψθk−τk ( s ( k ) , a ( k ) ) −∇J ( θk ) 〉 ≥ −D6 E ‖θk−τk − θk−dm‖2 −D7 E ‖θk − θk−dm‖2 −D8 dm∑ i=τk E ‖θk−i − θk−dm‖2 −D9κρm−1 − 8Cψrmax ( 1− γ ) E ‖∇J ( θk ) ‖2 , where D6 : = LV CδLψ , D7 : = CpLJ + ( 1 + γ ) L2V Cψ + 2LV LJ + 8CψrmaxLJ ( 1 − γ ) , D8 : = LV ( Cp + LV ) |A|Lπ , D9 : = 2LV ( Cp + LV ) . C.3 EXPLANATION OF THE APPROXIMATION ERROR In this section , we will provide a justification for the circumstances when the approximation error app defined in ( 14 ) is small . Lemma 7 . Suppose Assumption 2 and 4 hold . Then it holds that app ≤ max θ∈Rd √ E s∼µθ |Vπθ ( s ) − V̂ω̄∗θ ( s ) |2 + 4rmax ( λ −1 + λ−2rmax ) ( 1 + logρ κ −1 + 1 1− ρ ) ( 1− γ ) ( 113 ) where ω̄∗θ the critic stationary point of original Markov chain with policy πθ and transition kernel P . In ( 113 ) , the first term captures the quality of critic function parameterization method which also appears in previous works [ 14 , 15 , 17 ] . When using linear critic function approximation , it becomes zero when the value function Vπθ belongs to the linear function space for any θ . The second term corresponds to the error introduced by sampling from the artificial transition kernel P̃ ( ·|s , a ) = ( 1− γ ) P ( ·|s , a ) + γη ( · ) . For a large γ close to 1 , the artificial Markov chain is close to the original one . In this case , the second error term is therefore small . This fact also consists with practice where large γ is commonly used in two time-scale actor critic algorithms [ 3 ] . Before going into the proof , we first define that : Āθ , φ : = E s∼µ̄θ , s′∼Pπθ [ φ ( s ) ( γφ ( s′ ) − φ ( s ) ) > ] , b̄θ , φ : = E s∼µ̄θ , a∼πθ , s′∼P [ r ( s , a , s′ ) φ ( s ) ] , where µ̄θ as the stationary distribution of the original Markov chain with πθ and transition kernel P . Proof . Recall the definition of the approximation error : app = max θ∈Rd √ E s∼µθ |Vπθ ( s ) − V̂ω∗θ ( s ) |2 , where µθ is the stationary distribution of the artificial Markov chain with πθ and transition kernel P̃ , and ω∗θ is the stationary point of critic update under the artificial Markov chain . We decompose app as app = max θ∈Rd √ E s∼µθ |Vπθ ( s ) − V̂ω̄∗θ ( s ) + V̂ω̄∗θ ( s ) − V̂ω∗θ ( s ) |2 ≤ max θ∈Rd √ E s∼µθ |Vπθ ( s ) − V̂ω̄∗θ ( s ) |2 fa + max θ∈Rd √ E s∼µθ |V̂ω̄∗θ ( s ) − V̂ω∗θ ( s ) |2 sp , ( 114 ) where the first term corresponds to the function approximation error fa , and second term corresponds to the sampling error sp . 
With A , b and Ā , b̄ as shorthand notations for Aθ , ψ , bθ , ψ and Āθ , ψ , b̄θ , ψ respectively , we bound the second term in ( 114 ) as |V̂ω̄∗θ ( s ) − V̂ω∗θ ( s ) | = ∣∣φ ( s ) > ω∗θ − φ ( s ) > ω̄∗θ ∣∣ ≤ ∥∥A−1b− Ā−1b̄∥∥ 2 = ∥∥A−1b−A−1b̄+A−1b̄− Ā−1b̄∥∥ 2 ≤ ∥∥A−1 ( b− b̄ ) ∥∥ 2 + ∥∥ ( A−1 − Ā−1 ) b̄∥∥ 2 ≤ λ−1‖b− b̄‖2 + rmax ∥∥A−1 − Ā−1∥∥ 2 = λ−1‖b− b̄‖2 + rmax ∥∥A−1 ( Ā−A ) Ā−1∥∥ 2 ≤ λ−1‖b− b̄‖2 + λ−2rmax ∥∥Ā−A∥∥ 2 . ( 115 ) We bound the first term in last inequality as ‖b− b̄‖2 = ∥∥∥∥∥ Es∼µθ , a∼πθ , s′∼P̃ [ r ( s , a , s′ ) φ ( s ) ] − Es∼µ̄θ , a∼πθ , s′∼P [ r ( s , a , s′ ) φ ( s ) ] ∥∥∥∥∥ ≤ sup ‖r ( s , a , s′ ) φ ( s ) ‖2‖µθ ⊗ πθ ⊗ P̃ − µ̄θ ⊗ πθ ⊗ P‖TV ≤ 2rmaxdTV ( µθ ⊗ πθ ⊗ P̃ , µ̄θ ⊗ πθ ⊗ P ) . ( 116 ) We now bound the divergence term in the last inequality as dTV ( µθ ⊗ πθ ⊗ P̃ , µ̄θ ⊗ πθ ⊗ P ) = ∫ s∈S ∑ a∈A ∫ s′∈S ∣∣∣µθ ( s ) πθ ( a|s ) P̃ ( s′|s , a ) − µ̄θ ( s ) πθ ( a|s ) P ( s′|s , a ) ∣∣∣ = ∫ s∈S ∑ a∈A ∫ s′∈S |µθ ( s ) πθ ( a|s ) P̃ ( s′|s , a ) − µθ ( s ) πθ ( a|s ) P ( s′|s , a ) + µθ ( s ) πθ ( a|s ) P ( s′|s , a ) − µ̄θ ( s ) πθ ( a|s ) P ( s′|s , a ) | ≤ ∫ s∈S ∑ a∈A µθ ( s ) πθ ( a|s ) ∫ s′∈S ∣∣∣P̃ ( s′|s , a ) − P ( s′|s , a ) ∣∣∣+ ∫ s∈S |µθ ( s ) − µ̄θ ( s ) | . ( 117 ) We bound the first term in ( 117 ) as∫ s′∈S ∣∣∣P̃ ( s′|s , a ) − P ( s′|s , a ) ∣∣∣ = ( 1− γ ) ∫ s′∈S |P ( s′|s , a ) − η ( s′ ) | ≤ 2 ( 1− γ ) . ( 118 ) Following [ 39 , Theorem 3.1 ] , the second term in ( 117 ) can be bounded as∫ s∈S |µθ ( s ) − µ̄θ ( s ) | ≤ ( logρ κ −1 + 1 1− ρ ) sup s ∫ s′∈S ∣∣∣∣∣∑ a πθ ( a|s ) ( P̃ ( s′|s , a ) − P ( s′|s , a ) ) ∣∣∣∣∣ ≤ ( logρ κ −1 + 1 1− ρ ) sup s ∑ a πθ ( a|s ) ∫ s′∈S ∣∣∣P̃ ( s′|s , a ) − P ( s′|s , a ) ∣∣∣ ≤ 2 ( logρ κ −1 + 1 1− ρ ) ( 1− γ ) , ( 119 ) where the last inequality follows ( 118 ) . Substituting ( 118 ) and ( 119 ) into ( 117 ) gives dTV ( µθ ⊗ πθ ⊗ P̃ , µ̄θ ⊗ πθ ⊗ P ) ≤ 2 ( 1 + logρ κ −1 + 1 1− ρ ) ( 1− γ ) . Substituting the above inequality into ( 116 ) gives ‖b− b̄‖2 ≤ 4rmax ( 1 + logρ κ −1 + 1 1− ρ ) ( 1− γ ) . ( 120 ) Similarly , we also have ‖A− Ā‖2 ≤ 4rmax ( 1 + logρ κ −1 + 1 1− ρ ) ( 1− γ ) . ( 121 ) Substituting ( 120 ) and ( 121 ) into ( 115 ) , then substituting ( 115 ) into ( 114 ) completes the proof . D EXPERIMENT DETAILS Hardware device . The tests on synthetic environment and CartPole was performed in a 16-core CPU computer . The test on Atari game was run in a 4 GPU computer . Parameterization . For the synthetic environment , we used linear value function approximation and tabular softmax policy [ 36 ] . For CartPole , we used a 3-layer MLP with 128 neurons and sigmoid activation function in each layer . The first two layers are shared for both actor and critic network . For the Atari seaquest game , we used a convolution-LSTM network . For network details , see [ 40 ] . Hyper-parameters . For the synthetic environment tests , we run Algorithm 1 with actor step size αk = 0.05 ( 1+k ) 0.6 and critic step size βk = 0.05 ( 1+k ) 0.4 . In tests of CartPole , we run Algorithm 1 with a minibatch of 20 samples . We update the actor network with a step size of αk = 0.01 ( 1+k ) 0.6 and critic network with a step size of βk = 0.01 ( 1+k ) 0.4 . See Table 1 for hyper-parameters to generate the Atari game results in Figure 4 . | This paper revisits the A3C algorithm with TD(0) for the critic update to provide better theoretical analysis of A3C. A3C-TD(0) achieves linear speedup and it also matches our intuition. 
To show the empirical results, the authors provide convergence results of A3C-TD(0) with Markovian sampling in synthetic environments and speedup of A3C-TD(0) in CartPole and Seaquest. | SP:7a1fd3da1fb6af86b3a25f133d0cfe1fa23b71fa |
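A schematic reading of the hyper-parameter description above: the decayed two-time-scale schedules appear to be αk = 0.05/(1+k)^0.6 for the actor and βk = 0.05/(1+k)^0.4 for the critic. The sketch below shows one TD(0) actor-critic update with those schedules; it is not the authors' Algorithm 1, and the linear feature map `phi`, the score function `grad_log_pi`, and the interpretation of the step-size exponents are assumptions made for illustration.

```python
def step_sizes(k, a0=0.05, b0=0.05):
    # Decayed schedules as read from the experiment details (our interpretation):
    # actor alpha_k = 0.05 / (1 + k)^0.6, critic beta_k = 0.05 / (1 + k)^0.4.
    return a0 / (1.0 + k) ** 0.6, b0 / (1.0 + k) ** 0.4

def td0_actor_critic_step(theta, omega, phi, grad_log_pi, transition, gamma, k):
    """One schematic two-time-scale update: linear TD(0) critic, policy-gradient actor.
    This illustrates the described setup, not the paper's Algorithm 1 verbatim."""
    s, a, r, s_next = transition
    alpha_k, beta_k = step_sizes(k)
    delta = r + gamma * phi(s_next) @ omega - phi(s) @ omega    # TD(0) error with a linear critic
    omega = omega + beta_k * delta * phi(s)                     # critic update (slower-decaying step size)
    theta = theta + alpha_k * delta * grad_log_pi(theta, s, a)  # actor update (faster-decaying step size)
    return theta, omega
```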
Dataset Condensation with Gradient Matching | 1 INTRODUCTION . Large-scale datasets , comprising millions of samples , are becoming the norm to obtain state-ofthe-art machine learning models in multiple fields including computer vision , natural language processing and speech recognition . At such scales , even storing and preprocessing the data becomes burdensome , and training machine learning models on them demands for specialized equipment and infrastructure . An effective way to deal with large data is data selection – identifying the most representative training samples – that aims at improving data efficiency of machine learning techniques . While classical data selection methods , also known as coreset construction ( Agarwal et al. , 2004 ; Har-Peled & Mazumdar , 2004 ; Feldman et al. , 2013 ) , focus on clustering problems , recent work can be found in continual learning ( Rebuffi et al. , 2017 ; Toneva et al. , 2019 ; Castro et al. , 2018 ; Aljundi et al. , 2019 ) and active learning ( Sener & Savarese , 2018 ) where there is typically a fixed budget in storing and labeling training samples respectively . These methods commonly first define a criterion for representativeness ( e.g . in terms of compactness ( Rebuffi et al. , 2017 ; Castro et al. , 2018 ) , diversity ( Sener & Savarese , 2018 ; Aljundi et al. , 2019 ) , forgetfulness ( Toneva et al. , 2019 ) ) , then select the representative samples based on the criterion , finally use the selected small set to train their model for a downstream task . Unfortunately , these methods have two shortcomings : they typically rely on i ) heuristics ( e.g . picking cluster centers ) that does not guarantee any optimal solution for the downstream task ( e.g . image classification ) , ii ) presence of representative samples , which is neither guaranteed . A recent method , Dataset Distillation ( DD ) ( Wang et al. , 2018 ) goes beyond these limitations by learning a small set of informative images from large training data . In particular , the authors model the network parameters as a function of the synthetic training data and learn them by minimizing the training loss over the original training data w.r.t . synthetic data . Unlike in the coreset methods , the synthesized data are directly optimized for the downstream task and thus the success of the method does not rely on the presence of representative samples . Inspired from DD ( Wang et al. , 2018 ) , we focus on learning to synthesize informative samples that are optimized to train neural networks for downstream tasks and not limited to individual samples in original dataset . Like DD , our goal is to obtain the highest generalization performance with a model trained on a small set of synthetic images , ideally comparable performance to that of a model trained on the original images ( see Figure 1 ( a ) ) . In particular , we investigate the following 1The implementation is available at https : //github.com/VICO-UoE/DatasetCondensation . questions . Is it possible to i ) compress a large image classification dataset into a small synthetic set , ii ) train an image classification model on the synthetic set that can be further used to classify real images , iii ) learn a single set of synthetic images that can be used to train different neural network architectures ? 
To this end , we propose a Dataset Condensation method to learn a small set of “ condensed ” synthetic samples such that a deep neural network trained on them obtains not only similar performance but also a close solution to a network trained on the large training data in the network parameter space . We formulate this goal as a minimization problem between two sets of gradients of the network parameters that are computed for a training loss over a large fixed training set and a learnable condensed set ( see Figure 1 ( b ) ) . We show that our method enables effective learning of synthetic images and neural networks trained on them , outperforms ( Wang et al. , 2018 ) and coreset methods with a wide margin in multiple computer vision benchmarks . In addition , learning a compact set of synthetic samples also benefits other learning problems when there is a fixed budget on training images . We show that our method outperforms popular data selection methods by providing more informative training samples in continual learning . Finally , we explore a promising use case of our method in neural architecture search , and show that – once our condensed images are learned – they can be used to train numerous network architectures extremely efficiently . Our method is related to knowledge distillation ( KD ) techniques ( Hinton et al. , 2015 ; Buciluǎ et al. , 2006 ; Ba & Caruana , 2014 ; Romero et al. , 2014 ) that transfer the knowledge in an ensemble of models to a single one . Unlike KD , we distill knowledge of a large training set into a small synthetic set . Our method is also related to Generative Adversarial Networks ( Goodfellow et al. , 2014a ; Mirza & Osindero , 2014 ; Radford et al. , 2015 ) and Variational AutoEncoders ( Kingma & Welling , 2013 ) that synthesize high-fidelity samples by capturing the data distribution . In contrast , our goal is to generate informative samples for training deep neural networks rather than to produce “ real-looking ” samples . Finally our method is related to the methods that produce image patches by projecting the feature activations back to the input pixel space ( Zeiler & Fergus , 2014 ) , reconstruct the input image by matching the feature activations ( Mahendran & Vedaldi , 2015 ) , recover private training images for given training gradients ( Zhu et al. , 2019 ; Zhao et al. , 2020 ) , synthesize features from semantic embeddings for zero-shot learning ( Sariyildiz & Cinbis , 2019 ) . Our goal is however to synthesize a set of condensed training images not to recover the original or missing training images . In the remainder of this paper , we first review the problem of dataset condensation and introduce our method in section 2 , present and analyze our results in several image recognition benchmarks in section 3.1 , showcase applications in continual learning and network architecture search in section 3.2 , and conclude the paper with remarks for future directions in section 4 . 2 METHOD . 2.1 DATASET CONDENSATION . Suppose we are given a large dataset consisting of |T | pairs of a training image and its class label T = { ( xi , yi ) } ||T |i=1 where x ∈ X ⊂ Rd , y ∈ { 0 , . . . , C − 1 } , X is a d-dimensional input space and C is the number of classes . We wish to learn a differentiable function φ ( i.e . deep neural network ) with parameters θ that correctly predicts labels of previously unseen images , i.e . y = φθ ( x ) . 
One can learn the parameters of this function by minimizing an empirical loss term over the training set : θT = argmin θ LT ( θ ) ( 1 ) where LT ( θ ) = 1|T | ∑ ( x , y ) ∈T ` ( φθ ( x ) , y ) , ` ( · , · ) is a task specific loss ( i.e . cross-entropy ) and θT is the minimizer of LT . The generalization performance of the obtained model φθT can be written as Ex∼PD [ ` ( φθT ( x ) , y ) ] where PD is the data distribution . Our goal is to generate a small set of condensed synthetic samples with their labels , S = { ( si , yi ) } ||S|i=1 where s ∈ Rd and y ∈ Y , |S| |T | . Similar to eq . ( 1 ) , once the condensed set is learned , one can train φ on them as follows θS = argmin θ LS ( θ ) ( 2 ) where LS ( θ ) = 1|S| ∑ ( s , y ) ∈S ` ( φθ ( s ) , y ) and θ S is the minimizer of LS . As the synthetic set S is significantly smaller ( 2-3 orders of magnitude ) , we expect the optimization in eq . ( 2 ) to be significantly faster than that in eq . ( 1 ) . We also wish the generalization performance of φθS to be close to φθT , i.e . Ex∼PD [ ` ( φθT ( x ) , y ) ] ' Ex∼PD [ ` ( φθS ( x ) , y ) ] over the real data distribution PD . Discussion . The goal of obtaining comparable generalization performance by training on the condensed data can be formulated in different ways . One approach , which is proposed in ( Wang et al. , 2018 ) and extended in ( Sucholutsky & Schonlau , 2019 ; Bohdal et al. , 2020 ; Such et al. , 2020 ) , is to pose the parameters θS as a function of the synthetic data S : S∗ = argmin S LT ( θS ( S ) ) subject to θS ( S ) = argmin θ LS ( θ ) . ( 3 ) The method aims to find the optimum set of synthetic images S∗ such that the model φθS trained on them minimizes the training loss over the original data . Optimizing eq . ( 3 ) involves a nested loop optimization and solving the inner loop for θS ( S ) at each iteration to recover the gradients for S which requires a computationally expensive procedure – unrolling the recursive computation graph for S over multiple optimization steps for θ ( see ( Samuel & Tappen , 2009 ; Domke , 2012 ) ) . Hence , it does not scale to large models and/or accurate inner-loop optimizers with many steps . Next we propose an alternative formulation for dataset condensation . 2.2 DATASET CONDENSATION WITH PARAMETER MATCHING . Here we aim to learn S such that the model φθS trained on them achieves not only comparable generalization performance to φθT but also converges to a similar solution in the parameter space ( i.e . θS ≈ θT ) . Let φθ be a locally smooth function2 , similar weights ( θS ≈ θT ) imply similar mappings in a local neighborhood and thus generalization performance , i.e . Ex∼PD [ ` ( φθT ( x ) , y ) ] ' Ex∼PD [ ` ( φθS ( x ) , y ) ] . Now we can formulate this goal as min S D ( θS , θT ) subject to θS ( S ) = argmin θ LS ( θ ) ( 4 ) where θT = argminθ LT ( θ ) and D ( · , · ) is a distance function . In a deep neural network , θT typically depends on its initial values θ0 . However , the optimization in eq . ( 4 ) aims to obtain an optimum set of synthetic images only for one model φθT with the initialization θ0 , while our actual goal is to generate samples that can work with a distribution of random initializations Pθ0 . Thus we modify eq . ( 4 ) as follows : min S Eθ0∼Pθ0 [ D ( θ S ( θ0 ) , θ T ( θ0 ) ) ] subject to θS ( S ) = argmin θ LS ( θ ( θ0 ) ) ( 5 ) where θT = argminθ LT ( θ ( θ0 ) ) . For brevity , we use only θS and θT to indicate θS ( θ0 ) and θT ( θ0 ) respectively in the next sections . 
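To make the nested formulation of eq. (3) concrete, the sketch below unrolls a few inner gradient steps on L_S while keeping the computation graph, so that the outer loss L_T can be back-propagated into the synthetic images. This is only an illustrative sketch of the prior approach recapped above (Wang et al., 2018), not released code: it assumes PyTorch 2.x for `torch.func.functional_call`, the synthetic images are a learnable tensor held by `syn_optimizer`, and the step counts, learning rates, and cross-entropy loss are placeholder choices.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def unrolled_inner(model, params, synthetic, syn_labels, inner_steps, inner_lr):
    """Approximate theta^S(S) by a few differentiable gradient steps on L_S(theta)."""
    for _ in range(inner_steps):
        loss_s = F.cross_entropy(functional_call(model, params, (synthetic,)), syn_labels)
        grads = torch.autograd.grad(loss_s, tuple(params.values()), create_graph=True)
        params = {name: p - inner_lr * g for (name, p), g in zip(params.items(), grads)}
    return params

def nested_outer_step(model, synthetic, syn_labels, x_real, y_real, syn_optimizer,
                      inner_steps=5, inner_lr=0.01):
    """One outer update of eq. (3): evaluate L_T at theta^S(S) and push its gradient into S."""
    theta_0 = {n: p.detach().clone().requires_grad_(True) for n, p in model.named_parameters()}
    theta_S = unrolled_inner(model, theta_0, synthetic, syn_labels, inner_steps, inner_lr)
    loss_t = F.cross_entropy(functional_call(model, theta_S, (x_real,)), y_real)
    syn_optimizer.zero_grad()
    loss_t.backward()   # gradients flow through the unrolled inner steps into the synthetic set
    syn_optimizer.step()
    return loss_t.item()
```

The cost of this formulation is visible in `unrolled_inner`: every extra inner step lengthens the graph that must be retained and differentiated through, which is exactly the scalability issue discussed next.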
The standard approach to solving eq . ( 5 ) employs implicit differentiation ( see ( Domke , 2012 ) for details ) , which involves solving an inner loop optimization for θS . As the inner loop optimization θS ( S ) = argminθ LS ( θ ) can be computationally expensive in case of large-scale models , one can adopt the back-optimization approach in ( Domke , 2012 ) , which re-defines θS as the output of an incomplete optimization : θS ( S ) = opt-algθ ( LS ( θ ) , ς ) ( 6 ) where opt-alg is a specific optimization procedure with a fixed number of steps ( ς ) . ( Footnote 2 : Local smoothness is frequently used to obtain explicit first-order local approximations in deep networks , e.g . see ( Rifai et al. , 2012 ; Goodfellow et al. , 2014b ; Koh & Liang , 2017 ) . ) In practice , θT for different initializations can be trained first in an offline stage and then used as the target parameter vector in eq . ( 5 ) . However , there are two potential issues with learning to regress θT as the target vector . First , the distance between θT and intermediate values of θS can be too large in the parameter space , with multiple local-minima traps along the path , so that the target can be too challenging to reach . Second , opt-alg involves only a limited number of optimization steps as a trade-off between speed and accuracy , which may not be sufficient for reaching the optimal solution . These problems are similar to those of ( Wang et al. , 2018 ) , as they both involve parameterizing θS with S and θ0 . | The paper proposes a novel dataset condensation technique that generates synthetic samples by matching model gradients with those obtained on the original input dataset. This technique is investigated empirically on several smaller datasets like MNIST, SVHN and CIFAR10. Two applications to continual learning and neural architecture search (NAS) are also explored and show some promising results. | SP:6d80f796adf8ca9c35f6fb2eee898eab1d71ad8e
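For comparison, the parameter-matching variant of eq. (5) with the incomplete optimizer opt-alg of eq. (6) can be sketched the same way; only the outer objective changes, from L_T to a distance D between θ^S and an offline-trained target θ^T. The squared L2 distance used below, and the fixed step count standing in for ς, are illustrative assumptions, again assuming PyTorch 2.x for `torch.func.functional_call`.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def opt_alg(model, params, synthetic, syn_labels, num_steps, lr):
    """eq. (6): a truncated optimizer that runs a fixed number of steps (the ς in the text),
    keeping the graph so that gradients can still reach the synthetic set S."""
    for _ in range(num_steps):
        loss_s = F.cross_entropy(functional_call(model, params, (synthetic,)), syn_labels)
        grads = torch.autograd.grad(loss_s, tuple(params.values()), create_graph=True)
        params = {n: p - lr * g for (n, p), g in zip(params.items(), grads)}
    return params

def parameter_matching_step(model, theta_T, synthetic, syn_labels, syn_optimizer,
                            num_steps=10, lr=0.01):
    """One outer step of eq. (5): update S so that theta^S(S) moves toward the target theta^T."""
    theta_0 = {n: p.detach().clone().requires_grad_(True) for n, p in model.named_parameters()}
    theta_S = opt_alg(model, theta_0, synthetic, syn_labels, num_steps, lr)
    dist = sum(((theta_S[n] - theta_T[n].detach()) ** 2).sum() for n in theta_S)  # D(theta^S, theta^T)
    syn_optimizer.zero_grad()
    dist.backward()
    syn_optimizer.step()
    return dist.item()
```

Both issues raised in the text are visible here: with only `num_steps` truncated updates, θ^S may stop far from θ^T, and the distance is still back-propagated through an unrolled inner loop.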
Dataset Condensation with Gradient Matching | 1 INTRODUCTION . Large-scale datasets , comprising millions of samples , are becoming the norm to obtain state-ofthe-art machine learning models in multiple fields including computer vision , natural language processing and speech recognition . At such scales , even storing and preprocessing the data becomes burdensome , and training machine learning models on them demands for specialized equipment and infrastructure . An effective way to deal with large data is data selection – identifying the most representative training samples – that aims at improving data efficiency of machine learning techniques . While classical data selection methods , also known as coreset construction ( Agarwal et al. , 2004 ; Har-Peled & Mazumdar , 2004 ; Feldman et al. , 2013 ) , focus on clustering problems , recent work can be found in continual learning ( Rebuffi et al. , 2017 ; Toneva et al. , 2019 ; Castro et al. , 2018 ; Aljundi et al. , 2019 ) and active learning ( Sener & Savarese , 2018 ) where there is typically a fixed budget in storing and labeling training samples respectively . These methods commonly first define a criterion for representativeness ( e.g . in terms of compactness ( Rebuffi et al. , 2017 ; Castro et al. , 2018 ) , diversity ( Sener & Savarese , 2018 ; Aljundi et al. , 2019 ) , forgetfulness ( Toneva et al. , 2019 ) ) , then select the representative samples based on the criterion , finally use the selected small set to train their model for a downstream task . Unfortunately , these methods have two shortcomings : they typically rely on i ) heuristics ( e.g . picking cluster centers ) that does not guarantee any optimal solution for the downstream task ( e.g . image classification ) , ii ) presence of representative samples , which is neither guaranteed . A recent method , Dataset Distillation ( DD ) ( Wang et al. , 2018 ) goes beyond these limitations by learning a small set of informative images from large training data . In particular , the authors model the network parameters as a function of the synthetic training data and learn them by minimizing the training loss over the original training data w.r.t . synthetic data . Unlike in the coreset methods , the synthesized data are directly optimized for the downstream task and thus the success of the method does not rely on the presence of representative samples . Inspired from DD ( Wang et al. , 2018 ) , we focus on learning to synthesize informative samples that are optimized to train neural networks for downstream tasks and not limited to individual samples in original dataset . Like DD , our goal is to obtain the highest generalization performance with a model trained on a small set of synthetic images , ideally comparable performance to that of a model trained on the original images ( see Figure 1 ( a ) ) . In particular , we investigate the following 1The implementation is available at https : //github.com/VICO-UoE/DatasetCondensation . questions . Is it possible to i ) compress a large image classification dataset into a small synthetic set , ii ) train an image classification model on the synthetic set that can be further used to classify real images , iii ) learn a single set of synthetic images that can be used to train different neural network architectures ? 
To this end , we propose a Dataset Condensation method to learn a small set of “ condensed ” synthetic samples such that a deep neural network trained on them obtains not only similar performance but also a close solution to a network trained on the large training data in the network parameter space . We formulate this goal as a minimization problem between two sets of gradients of the network parameters that are computed for a training loss over a large fixed training set and a learnable condensed set ( see Figure 1 ( b ) ) . We show that our method enables effective learning of synthetic images and neural networks trained on them , outperforms ( Wang et al. , 2018 ) and coreset methods with a wide margin in multiple computer vision benchmarks . In addition , learning a compact set of synthetic samples also benefits other learning problems when there is a fixed budget on training images . We show that our method outperforms popular data selection methods by providing more informative training samples in continual learning . Finally , we explore a promising use case of our method in neural architecture search , and show that – once our condensed images are learned – they can be used to train numerous network architectures extremely efficiently . Our method is related to knowledge distillation ( KD ) techniques ( Hinton et al. , 2015 ; Buciluǎ et al. , 2006 ; Ba & Caruana , 2014 ; Romero et al. , 2014 ) that transfer the knowledge in an ensemble of models to a single one . Unlike KD , we distill knowledge of a large training set into a small synthetic set . Our method is also related to Generative Adversarial Networks ( Goodfellow et al. , 2014a ; Mirza & Osindero , 2014 ; Radford et al. , 2015 ) and Variational AutoEncoders ( Kingma & Welling , 2013 ) that synthesize high-fidelity samples by capturing the data distribution . In contrast , our goal is to generate informative samples for training deep neural networks rather than to produce “ real-looking ” samples . Finally our method is related to the methods that produce image patches by projecting the feature activations back to the input pixel space ( Zeiler & Fergus , 2014 ) , reconstruct the input image by matching the feature activations ( Mahendran & Vedaldi , 2015 ) , recover private training images for given training gradients ( Zhu et al. , 2019 ; Zhao et al. , 2020 ) , synthesize features from semantic embeddings for zero-shot learning ( Sariyildiz & Cinbis , 2019 ) . Our goal is however to synthesize a set of condensed training images not to recover the original or missing training images . In the remainder of this paper , we first review the problem of dataset condensation and introduce our method in section 2 , present and analyze our results in several image recognition benchmarks in section 3.1 , showcase applications in continual learning and network architecture search in section 3.2 , and conclude the paper with remarks for future directions in section 4 . 2 METHOD . 2.1 DATASET CONDENSATION . Suppose we are given a large dataset consisting of |T | pairs of a training image and its class label T = { ( xi , yi ) } ||T |i=1 where x ∈ X ⊂ Rd , y ∈ { 0 , . . . , C − 1 } , X is a d-dimensional input space and C is the number of classes . We wish to learn a differentiable function φ ( i.e . deep neural network ) with parameters θ that correctly predicts labels of previously unseen images , i.e . y = φθ ( x ) . 
One can learn the parameters of this function by minimizing an empirical loss term over the training set : θT = argmin θ LT ( θ ) ( 1 ) where LT ( θ ) = 1|T | ∑ ( x , y ) ∈T ` ( φθ ( x ) , y ) , ` ( · , · ) is a task specific loss ( i.e . cross-entropy ) and θT is the minimizer of LT . The generalization performance of the obtained model φθT can be written as Ex∼PD [ ` ( φθT ( x ) , y ) ] where PD is the data distribution . Our goal is to generate a small set of condensed synthetic samples with their labels , S = { ( si , yi ) } ||S|i=1 where s ∈ Rd and y ∈ Y , |S| |T | . Similar to eq . ( 1 ) , once the condensed set is learned , one can train φ on them as follows θS = argmin θ LS ( θ ) ( 2 ) where LS ( θ ) = 1|S| ∑ ( s , y ) ∈S ` ( φθ ( s ) , y ) and θ S is the minimizer of LS . As the synthetic set S is significantly smaller ( 2-3 orders of magnitude ) , we expect the optimization in eq . ( 2 ) to be significantly faster than that in eq . ( 1 ) . We also wish the generalization performance of φθS to be close to φθT , i.e . Ex∼PD [ ` ( φθT ( x ) , y ) ] ' Ex∼PD [ ` ( φθS ( x ) , y ) ] over the real data distribution PD . Discussion . The goal of obtaining comparable generalization performance by training on the condensed data can be formulated in different ways . One approach , which is proposed in ( Wang et al. , 2018 ) and extended in ( Sucholutsky & Schonlau , 2019 ; Bohdal et al. , 2020 ; Such et al. , 2020 ) , is to pose the parameters θS as a function of the synthetic data S : S∗ = argmin S LT ( θS ( S ) ) subject to θS ( S ) = argmin θ LS ( θ ) . ( 3 ) The method aims to find the optimum set of synthetic images S∗ such that the model φθS trained on them minimizes the training loss over the original data . Optimizing eq . ( 3 ) involves a nested loop optimization and solving the inner loop for θS ( S ) at each iteration to recover the gradients for S which requires a computationally expensive procedure – unrolling the recursive computation graph for S over multiple optimization steps for θ ( see ( Samuel & Tappen , 2009 ; Domke , 2012 ) ) . Hence , it does not scale to large models and/or accurate inner-loop optimizers with many steps . Next we propose an alternative formulation for dataset condensation . 2.2 DATASET CONDENSATION WITH PARAMETER MATCHING . Here we aim to learn S such that the model φθS trained on them achieves not only comparable generalization performance to φθT but also converges to a similar solution in the parameter space ( i.e . θS ≈ θT ) . Let φθ be a locally smooth function2 , similar weights ( θS ≈ θT ) imply similar mappings in a local neighborhood and thus generalization performance , i.e . Ex∼PD [ ` ( φθT ( x ) , y ) ] ' Ex∼PD [ ` ( φθS ( x ) , y ) ] . Now we can formulate this goal as min S D ( θS , θT ) subject to θS ( S ) = argmin θ LS ( θ ) ( 4 ) where θT = argminθ LT ( θ ) and D ( · , · ) is a distance function . In a deep neural network , θT typically depends on its initial values θ0 . However , the optimization in eq . ( 4 ) aims to obtain an optimum set of synthetic images only for one model φθT with the initialization θ0 , while our actual goal is to generate samples that can work with a distribution of random initializations Pθ0 . Thus we modify eq . ( 4 ) as follows : min S Eθ0∼Pθ0 [ D ( θ S ( θ0 ) , θ T ( θ0 ) ) ] subject to θS ( S ) = argmin θ LS ( θ ( θ0 ) ) ( 5 ) where θT = argminθ LT ( θ ( θ0 ) ) . For brevity , we use only θS and θT to indicate θS ( θ0 ) and θT ( θ0 ) respectively in the next sections . 
The standard approach to solving eq . ( 5 ) employs implicit differentiation ( see ( Domke , 2012 ) for details ) , which involves solving an inner loop optimization for θS . As the inner loop optimization θS ( S ) = argminθ LS ( θ ) can be computationally expensive in case of large-scale models , one can adopt the back-optimization approach in ( Domke , 2012 ) , which re-defines θS as the output of an incomplete optimization : θS ( S ) = opt-algθ ( LS ( θ ) , ς ) ( 6 ) where opt-alg is a specific optimization procedure with a fixed number of steps ( ς ) . ( Footnote 2 : Local smoothness is frequently used to obtain explicit first-order local approximations in deep networks , e.g . see ( Rifai et al. , 2012 ; Goodfellow et al. , 2014b ; Koh & Liang , 2017 ) . ) In practice , θT for different initializations can be trained first in an offline stage and then used as the target parameter vector in eq . ( 5 ) . However , there are two potential issues with learning to regress θT as the target vector . First , the distance between θT and intermediate values of θS can be too large in the parameter space , with multiple local-minima traps along the path , so that the target can be too challenging to reach . Second , opt-alg involves only a limited number of optimization steps as a trade-off between speed and accuracy , which may not be sufficient for reaching the optimal solution . These problems are similar to those of ( Wang et al. , 2018 ) , as they both involve parameterizing θS with S and θ0 . | This paper tackles the challenging dataset condensation problem. The goal is to learn to synthesize a small dataset, so that a neural network trained on the small synthetic dataset can achieve performance similar to that of a network trained on the full dataset. The proposed method tackles the problem by gradient matching. It achieves state-of-the-art performance, and shows promising results on two other downstream tasks, continual learning and neural architecture search. | SP:6d80f796adf8ca9c35f6fb2eee898eab1d71ad8e
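The introduction describes the paper's actual remedy as matching the gradients of the training loss computed on the large set T and on the learnable condensed set S, which removes the unrolled inner loop above. The exact matching distance and update schedule are not given in this excerpt, so the cosine-style layer-wise distance and the single-step form below are assumptions; the sketch is meant only to show the shape of a gradient-matching update.

```python
import torch
import torch.nn.functional as F

def gradient_matching_step(model, synthetic, syn_labels, x_real, y_real, syn_optimizer):
    """One condensation step sketched from the description in the introduction:
    compute gradients of the loss w.r.t. the network parameters on real and synthetic
    batches, and update the synthetic images to bring the two sets of gradients closer."""
    params = [p for p in model.parameters() if p.requires_grad]

    loss_real = F.cross_entropy(model(x_real), y_real)
    g_real = [g.detach() for g in torch.autograd.grad(loss_real, params)]   # fixed target gradients

    loss_syn = F.cross_entropy(model(synthetic), syn_labels)
    g_syn = torch.autograd.grad(loss_syn, params, create_graph=True)        # keep graph w.r.t. S

    # Distance between the two sets of gradients (cosine-style, per parameter tensor;
    # the exact distance used by the paper is not shown in this excerpt).
    dist = sum(1 - F.cosine_similarity(gs.flatten(), gr.flatten(), dim=0)
               for gs, gr in zip(g_syn, g_real))

    syn_optimizer.zero_grad()
    dist.backward()
    syn_optimizer.step()
    return dist.item()
```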
Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks | 1 INTRODUCTION . Deep neural networks have driven a shift from feature engineering to feature learning . The great progress largely comes from well-designed networks with increasing capacity of models ( He et al. , 2016a ; Xie et al. , 2017 ; Huang et al. , 2017 ; Tan & Le , 2019 ) . To achieve the superior performance , a useful practice is to add more layers ( Szegedy et al. , 2015 ) or expand the size of existing convolutions ( kernel width , number of channels ) ( Huang et al. , 2019 ; Tan & Le , 2019 ; Mahajan et al. , 2018 ) . Meantime , the computational cost significantly increases , hindering the deployment of these models in realistic scenarios . Instead of adding much more computational burden , we prefer adding sampledependent modules to networks , increasing the model capacity by accommodating the data variance . Several existing work attempt to augment the sample-dependent modules into network . For example , Squeeze-and-Excitation network ( SENet ) ( Hu et al. , 2018 ) learns to scale the activations in the channel dimension conditionally on the input . Conditionally Parameterized Convolution ( CondConv ) ( Yang et al. , 2019 ) uses over-parameterization weights and generates individual convolutional kernels for each sample . GaterNet ( Chen et al. , 2018 ) adopts a gate network to extract features and generate sparse binary masks for selecting filters in the backbone network based upon inputs . All these methods focus on the adjustment of the micro structure of neural networks , using a data-dependent module to influence the feature representation at the same level . Recall the deep neural network to mammalian brain mechanism in biology ( Rauschecker , 1984 ) , the neurons are linked by synapses and responsible for sensing different information , the synapses are activated to varying degrees when the neurons perceive external information . Such a phenomenon inspires us to design a data-dependent network structure so that different samples will activate different network paths . In this paper , we learn to optimize the connectivity of neural networks based upon inputs . Instead of using stacked-style or hand-designed manners , we allow more flexible selection for forwarding paths . Specifically , we reformulate the network into a directed acyclic graph , where nodes represent the convolution block while edges indicate connections . Different from randomly wired neural networks ( Xie et al. , 2019 ) that generate random graphs as connectivity using predefined generators , we rewire the graph as a complete graph so that all nodes establish connections with each other . Such a setting allows more possible connections and makes the task of finding the most suitable connectivity for each sample equivalent to finding the optimal sub-graph in the complete graph . In the graph , each node aggregates features from the preceding nodes , performs feature transformation ( e.g . convolution , normalization , and non-linear operations ) , and distributes the transformed features to the succeeding nodes . The output of the last node in the topological order is employed as the representation through the graph . To adjust the contribution of different nodes to the feature representation , we further assign weights to the edges in the graph . The weights are generated dynamically for each input via an extra module ( denoted as router ) along with each node . 
During inference , only crucial connections are maintained , which creates different paths for different instances . As the connectivity for each sample is generated through non-linear functions determined by routers , our method enables the networks to have more representation power than a static network . We call our method the Dynamic Graph Network ( DG-Net ) . It does not increase the depth or width of the network , and only introduces a negligible extra cost to compute the edge weights and aggregate the features . To facilitate the training , we represent the network connection of each sample as an adjacency matrix and design a buffer mechanism to cache the matrices of a sample batch during training . With the buffer mechanism , we can conveniently aggregate the feature maps in the forward pass and compute the gradient in the backward pass by looking up the adjacency matrices . The main contributions of our work are as follows : • We are the first to introduce dynamic connectivity based upon inputs to exploit the model capacity of neural networks . Without bells and whistles , simply replacing static connectivity with a dynamic one in many networks achieves solid improvement with only a slight increase in parameters ( ∼ 1 % ) and computational cost ( ∼ 2 % ) ( see table 1 ) . • DG-Net is easy and memory-efficient to train . The parameters of networks and routers can be optimized in a differentiable manner . We also design a buffer mechanism to conveniently access the network connectivity , in order to aggregate the feature maps in the forward pass and compute the gradient in the backward pass . • We show that DG-Net not only improves the performance of human-designed networks ( e.g . MobileNetV2 , ResNet , ResNeXt ) but also boosts the performance of automatically searched architectures ( e.g . RegNet ) . It demonstrates good generalization ability on ImageNet classification ( see table 1 ) and COCO object detection ( see table 2 ) tasks . 2 RELATED WORK . Non-modular Network Wiring . Different from modularly designed networks , which consist of topologically identical blocks , some existing work explores more flexible wiring patterns . MaskConnect ( Ahmed & Torresani , 2018 ) removes predefined architectures and learns the connections between modules in the network with k connections . Randomly wired neural networks ( Xie et al. , 2019 ) use classical graph generators to yield random wiring instances and achieve competitive performance with manually designed networks . DNW ( Wortsman et al. , 2019 ) treats each channel as a node and searches for a fine-grained sparse connectivity among layers . TopoNet ( Yuan et al. , 2020 ) learns to optimize the connectivity of neural networks in a complete graph that adapts to the specific task . Prior work demonstrates the potential of more flexible wirings ; our work on DG-Net pushes the boundaries of this paradigm by enabling each example to be processed with different connectivity . Dynamic Networks . Dynamic networks , which adjust the network architecture to the corresponding input , have recently been studied in the computer vision domain . SkipNet ( Wang et al. , 2018b ) , BlockDrop ( Wu et al. , 2018 ) and HydraNet ( Mullapudi et al. , 2018 ) use reinforcement learning to learn the subset of blocks needed to process a given input . Some approaches prune channels ( Lin et al. , 2017a ; You et al. , 2019 ) for efficient inference .
However , most prior methods are challenging to train , because they need to obtain discrete routing decisions from individual examples . Different from these approaches , DG-Net learns continuous weights for connectivity to enable ramous propagation of features , so can be easily optimized in a differentiable way . Conditional Attention . Some recent work proposes to adapt the distribution of features or weights through attention conditionally on the input . SENet ( Hu et al. , 2018 ) boosts the representational power of a network by adaptively recalibrating channel-wise feature responses by assigning attention over channels . CondConv ( Yang et al. , 2019 ) and dynamic convolution ( Chen et al. , 2020 ) are restricted to modulating different experts/kernels , resulting in attention over convolutional weights . Attention-based models are also widely used in language modeling ( Luong et al. , 2015 ; Bahdanau et al. , 2015 ; Vaswani et al. , 2017 ) , which scale previous sequential inputs based on learned attention weights . In the vision domain , previous methods most compute attention over micro structure , ignoring the influence of the features produced by different layers on the final representation . Unlike these approaches , DG-Net focuses on learning the connectivity based upon inputs , which can be seen as attention over features with different semantic hierarchy . Neural Architecture Search . Recently , Neural Architecture Search ( NAS ) has been widely used for automatic network architecture design . With evolutionary algorithm ( Real et al. , 2019 ) , reinforcement learning ( Pham et al. , 2018 ) or gradient descent ( Liu et al. , 2019 ) , one can obtain task-dependent architectures . Different from these NAS-based approaches , which search for a single architecture , the proposed DG-Net generates forward paths on the fly according to the input without searching . We also notice a recent method InstaNAS ( Cheng et al. , 2020 ) that generates domain-specific architectures for different samples . It trained a controller to select child architecture from the defined meta-graph , achieving latency reduction during inference . Different from them , DG-Net focuses on learning connectivity in a complete graph using a differentiable way and achieves higher performance . 3 METHODOLOGY . 3.1 NETWORK REPRESENTATION WITH DAGS . The architecture of a neural network can be naturally represented by a directed acyclic graphs ( DAG ) , consisting of an ordered sequence of nodes . Specifically , we map both combinations ( e.g. , addition ) and transformation ( e.g. , convolution , normalization , and activation ) into a node . At the same time , connections between layers are represented as edges , which determine the path of the features in the network . For simplicity , we denote a DAG with N ordered nodes as G = ( N , E ) , where N is the set of nodes and E is the set of edges . We show E = { e ( i , j ) |1 ≤ i < j ≤ N } , where e ( i , j ) indicates a directed edge from the i-th node to the j-th node . Most traditional convolutional neural networks can be represented with DAGs . For example , VGGNet ( Simonyan & Zisserman , 2015 ) is stacked directly by a series of convolutional layers , where a current layer is only connected to the previous layer . The connectivity in each stage can be represented as Evgg = { e ( i , j ) |j = i+ 1|1≤i < N } . To ease problems of gradient vanishing and exploding , ResNets ( He et al. 
, 2016a ) build additional shortcuts and enable cross-layer connections , whose natural view ( see footnote 1 ) can be denoted by Eres = { e ( i , j ) | j ∈ { i + 1 , i + 2 } , 1 ≤ i < N } . It is worth noting that some NAS methods ( Real et al. , 2019 ; Liu et al. , 2019 ) also follow this wiring pattern , in which each block connects to its two immediately preceding blocks . Differently , DenseNets ( Huang et al. , 2017 ) aggregate features from all previous layers in the manner of Edense = { e ( i , j ) | i ∈ [ 1 , j − 1 ] , 1 < j ≤ N } . Given these patterns of connectivity , the forward procedure of the network can be performed according to the topological order . For the j-th node , the output feature x ( j ) is computed by : x ( j ) = f ( j ) ( ∑_{ i < j } 1_E ( e ( i , j ) ) · x ( i ) ) , s.t . 1_E ( e ( i , j ) ) ∈ { 0 , 1 } , ( 1 ) where f ( j ) ( · ) is the corresponding mapping function for transformations , and 1_E ( e ( i , j ) ) stands for the indicator function , which equals one when e ( i , j ) exists in E . In each graph , the first node in topological order is the input one that only performs the distribution of features . The last node is the output one that only generates the final output by gathering preceding inputs . For a network with K stages , K DAGs are initialized and connected sequentially . Each graph is linked to its preceding or succeeding stage by its output or input node . Let F ( k ) ( · ) be the mapping function of the k-th stage , which is established by G ( k ) with nodes N ( k ) and connectivity E ( k ) . ( Footnote 1 : In ( Veit et al. , 2016 ) , its unrolled type can be viewed as Edense = { e ( i , j ) | i ∈ [ 1 , j − 1 ] , 1 < j ≤ N } . ) Given an input x , the mapping function from the sample to the feature representation can be written as : T ( x ) = F ( K ) ( · · · F ( 2 ) ( F ( 1 ) ( x ) ) ) . ( 2 ) | This work proposes a novel method, called Dynamic Graph Network (DG-Net), for optimizing the architecture of a neural network. Building on the previous work introduced by (Xie et al., 2019), the authors propose to consider the network as a complete directed acyclic graph (DAG). Then, the edge weights of the DAG are generated dynamically for each input of the network. At each node of the network, the authors introduce an extra module, called router, to estimate the edge weights as a function of the input features. | SP:925ffc4463ef78ca77f5ae77a63b86d7fa87a1cd
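A minimal sketch of the static forward rule in eq. (1)–(2): a stage keeps its nodes in topological order, and a fixed binary adjacency decides which predecessor features each node sums before transforming them. The node modules, the identity-like input node, and the assumption that all nodes emit same-shaped features are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn as nn

class StaticDAGStage(nn.Module):
    """eq. (1): x(j) = f(j)( sum_{i<j} 1_E(e(i,j)) * x(i) ), with a fixed {0,1} adjacency."""
    def __init__(self, nodes, adjacency):
        super().__init__()
        self.nodes = nn.ModuleList(nodes)        # f(1), ..., f(N) in topological order
        self.register_buffer("adj", adjacency)   # adj[i, j] = 1 iff edge e(i, j) exists

    def forward(self, x):
        feats = [self.nodes[0](x)]               # input node: typically an identity that distributes x
        for j in range(1, len(self.nodes)):
            agg = sum(self.adj[i, j] * feats[i] for i in range(j))
            feats.append(self.nodes[j](agg))
        return feats[-1]                         # output node of the stage, fed to the next stage (eq. (2))

# Example: the ResNet-style pattern E_res, where node j receives features from nodes j-1 and j-2.
N = 5
adj = torch.zeros(N, N)
for i in range(N - 1):
    for j in (i + 1, i + 2):
        if j < N:
            adj[i, j] = 1.0
```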
Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks | 1 INTRODUCTION . Deep neural networks have driven a shift from feature engineering to feature learning . The great progress largely comes from well-designed networks with increasing capacity of models ( He et al. , 2016a ; Xie et al. , 2017 ; Huang et al. , 2017 ; Tan & Le , 2019 ) . To achieve the superior performance , a useful practice is to add more layers ( Szegedy et al. , 2015 ) or expand the size of existing convolutions ( kernel width , number of channels ) ( Huang et al. , 2019 ; Tan & Le , 2019 ; Mahajan et al. , 2018 ) . Meantime , the computational cost significantly increases , hindering the deployment of these models in realistic scenarios . Instead of adding much more computational burden , we prefer adding sampledependent modules to networks , increasing the model capacity by accommodating the data variance . Several existing work attempt to augment the sample-dependent modules into network . For example , Squeeze-and-Excitation network ( SENet ) ( Hu et al. , 2018 ) learns to scale the activations in the channel dimension conditionally on the input . Conditionally Parameterized Convolution ( CondConv ) ( Yang et al. , 2019 ) uses over-parameterization weights and generates individual convolutional kernels for each sample . GaterNet ( Chen et al. , 2018 ) adopts a gate network to extract features and generate sparse binary masks for selecting filters in the backbone network based upon inputs . All these methods focus on the adjustment of the micro structure of neural networks , using a data-dependent module to influence the feature representation at the same level . Recall the deep neural network to mammalian brain mechanism in biology ( Rauschecker , 1984 ) , the neurons are linked by synapses and responsible for sensing different information , the synapses are activated to varying degrees when the neurons perceive external information . Such a phenomenon inspires us to design a data-dependent network structure so that different samples will activate different network paths . In this paper , we learn to optimize the connectivity of neural networks based upon inputs . Instead of using stacked-style or hand-designed manners , we allow more flexible selection for forwarding paths . Specifically , we reformulate the network into a directed acyclic graph , where nodes represent the convolution block while edges indicate connections . Different from randomly wired neural networks ( Xie et al. , 2019 ) that generate random graphs as connectivity using predefined generators , we rewire the graph as a complete graph so that all nodes establish connections with each other . Such a setting allows more possible connections and makes the task of finding the most suitable connectivity for each sample equivalent to finding the optimal sub-graph in the complete graph . In the graph , each node aggregates features from the preceding nodes , performs feature transformation ( e.g . convolution , normalization , and non-linear operations ) , and distributes the transformed features to the succeeding nodes . The output of the last node in the topological order is employed as the representation through the graph . To adjust the contribution of different nodes to the feature representation , we further assign weights to the edges in the graph . The weights are generated dynamically for each input via an extra module ( denoted as router ) along with each node . 
During inference , only crucial connections are maintained , which creates different paths for different instances . As the connectivity for each sample is generated through non-linear functions determined by routers , our method enables the networks to have more representation power than a static network . We call our method the Dynamic Graph Network ( DG-Net ) . It does not increase the depth or width of the network , and only introduces a negligible extra cost to compute the edge weights and aggregate the features . To facilitate the training , we represent the network connection of each sample as an adjacency matrix and design a buffer mechanism to cache the matrices of a sample batch during training . With the buffer mechanism , we can conveniently aggregate the feature maps in the forward pass and compute the gradient in the backward pass by looking up the adjacency matrices . The main contributions of our work are as follows : • We are the first to introduce dynamic connectivity based upon inputs to exploit the model capacity of neural networks . Without bells and whistles , simply replacing static connectivity with a dynamic one in many networks achieves solid improvement with only a slight increase in parameters ( ∼ 1 % ) and computational cost ( ∼ 2 % ) ( see table 1 ) . • DG-Net is easy and memory-efficient to train . The parameters of networks and routers can be optimized in a differentiable manner . We also design a buffer mechanism to conveniently access the network connectivity , in order to aggregate the feature maps in the forward pass and compute the gradient in the backward pass . • We show that DG-Net not only improves the performance of human-designed networks ( e.g . MobileNetV2 , ResNet , ResNeXt ) but also boosts the performance of automatically searched architectures ( e.g . RegNet ) . It demonstrates good generalization ability on ImageNet classification ( see table 1 ) and COCO object detection ( see table 2 ) tasks . 2 RELATED WORK . Non-modular Network Wiring . Different from modularly designed networks , which consist of topologically identical blocks , some existing work explores more flexible wiring patterns . MaskConnect ( Ahmed & Torresani , 2018 ) removes predefined architectures and learns the connections between modules in the network with k connections . Randomly wired neural networks ( Xie et al. , 2019 ) use classical graph generators to yield random wiring instances and achieve competitive performance with manually designed networks . DNW ( Wortsman et al. , 2019 ) treats each channel as a node and searches for a fine-grained sparse connectivity among layers . TopoNet ( Yuan et al. , 2020 ) learns to optimize the connectivity of neural networks in a complete graph that adapts to the specific task . Prior work demonstrates the potential of more flexible wirings ; our work on DG-Net pushes the boundaries of this paradigm by enabling each example to be processed with different connectivity . Dynamic Networks . Dynamic networks , which adjust the network architecture to the corresponding input , have recently been studied in the computer vision domain . SkipNet ( Wang et al. , 2018b ) , BlockDrop ( Wu et al. , 2018 ) and HydraNet ( Mullapudi et al. , 2018 ) use reinforcement learning to learn the subset of blocks needed to process a given input . Some approaches prune channels ( Lin et al. , 2017a ; You et al. , 2019 ) for efficient inference .
However , most prior methods are challenging to train , because they need to obtain discrete routing decisions from individual examples . Different from these approaches , DG-Net learns continuous weights for connectivity to enable ramous propagation of features , so can be easily optimized in a differentiable way . Conditional Attention . Some recent work proposes to adapt the distribution of features or weights through attention conditionally on the input . SENet ( Hu et al. , 2018 ) boosts the representational power of a network by adaptively recalibrating channel-wise feature responses by assigning attention over channels . CondConv ( Yang et al. , 2019 ) and dynamic convolution ( Chen et al. , 2020 ) are restricted to modulating different experts/kernels , resulting in attention over convolutional weights . Attention-based models are also widely used in language modeling ( Luong et al. , 2015 ; Bahdanau et al. , 2015 ; Vaswani et al. , 2017 ) , which scale previous sequential inputs based on learned attention weights . In the vision domain , previous methods most compute attention over micro structure , ignoring the influence of the features produced by different layers on the final representation . Unlike these approaches , DG-Net focuses on learning the connectivity based upon inputs , which can be seen as attention over features with different semantic hierarchy . Neural Architecture Search . Recently , Neural Architecture Search ( NAS ) has been widely used for automatic network architecture design . With evolutionary algorithm ( Real et al. , 2019 ) , reinforcement learning ( Pham et al. , 2018 ) or gradient descent ( Liu et al. , 2019 ) , one can obtain task-dependent architectures . Different from these NAS-based approaches , which search for a single architecture , the proposed DG-Net generates forward paths on the fly according to the input without searching . We also notice a recent method InstaNAS ( Cheng et al. , 2020 ) that generates domain-specific architectures for different samples . It trained a controller to select child architecture from the defined meta-graph , achieving latency reduction during inference . Different from them , DG-Net focuses on learning connectivity in a complete graph using a differentiable way and achieves higher performance . 3 METHODOLOGY . 3.1 NETWORK REPRESENTATION WITH DAGS . The architecture of a neural network can be naturally represented by a directed acyclic graphs ( DAG ) , consisting of an ordered sequence of nodes . Specifically , we map both combinations ( e.g. , addition ) and transformation ( e.g. , convolution , normalization , and activation ) into a node . At the same time , connections between layers are represented as edges , which determine the path of the features in the network . For simplicity , we denote a DAG with N ordered nodes as G = ( N , E ) , where N is the set of nodes and E is the set of edges . We show E = { e ( i , j ) |1 ≤ i < j ≤ N } , where e ( i , j ) indicates a directed edge from the i-th node to the j-th node . Most traditional convolutional neural networks can be represented with DAGs . For example , VGGNet ( Simonyan & Zisserman , 2015 ) is stacked directly by a series of convolutional layers , where a current layer is only connected to the previous layer . The connectivity in each stage can be represented as Evgg = { e ( i , j ) |j = i+ 1|1≤i < N } . To ease problems of gradient vanishing and exploding , ResNets ( He et al. 
To ease the problems of gradient vanishing and exploding, ResNets (He et al., 2016a) build additional shortcuts and enable cross-layer connections whose natural view¹ can be denoted by E_res = {e(i, j) | j ∈ {i + 1, i + 2}, 1 ≤ i < N}. It is worth noting that some NAS methods (Real et al., 2019; Liu et al., 2019) also follow this wiring pattern, in which each block connects to the two immediately preceding blocks. Differently, DenseNets (Huang et al., 2017) aggregate features from all previous layers in the manner of E_dense = {e(i, j) | i ∈ [1, j − 1], 1 < j ≤ N}.

Given these patterns of connectivity, the forward procedure of the network can be performed according to the topological order. For the j-th node, the output feature x^(j) is computed by:

x^(j) = f^(j)( Σ_{i<j} 1_E(e(i, j)) · x^(i) ),  s.t. 1_E(e(i, j)) ∈ {0, 1},   (1)

where f^(j)(·) is the corresponding mapping function for the transformations, and 1_E(e(i, j)) is the indicator function, which equals one when e(i, j) exists in E. In each graph, the first node in topological order is the input node, which only distributes features. The last node is the output node, which only generates the final output by gathering the preceding inputs.

For a network with K stages, K DAGs are initialized and connected sequentially. Each graph is linked to its preceding or succeeding stage by its output or input node. Let F^(k)(·) be the mapping function of the k-th stage, which is established by G^(k) with nodes N^(k) and connectivity E^(k). Given an input x, the mapping function from the sample to the feature representation can be written as:

T(x) = F^(K)(· · · F^(2)(F^(1)(x))).   (2)

¹ In (Veit et al., 2016), its unrolled form can be viewed as E_dense = {e(i, j) | i ∈ [1, j − 1], 1 < j ≤ N}.

| This paper presents a novel approach (DG-Net) to “generate” a dynamic structure for the neural network, by learning to predict and select the edges between computational nodes in an end-to-end manner. The method is based on a gating mechanism, applied on top of a fully connected graph (similar to the connectivity in a DenseNet), designed to control the quantity of information received from each previous layer. The experiments show consistent improvement in image classification (ImageNet) and object detection (COCO). | SP:925ffc4463ef78ca77f5ae77a63b86d7fa87a1cd |
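The following is a minimal NumPy sketch of the DAG forward pass of Eq. (1), relaxed in the DG-Net style so that the binary indicator 1_E(e(i, j)) is replaced by continuous, input-dependent edge weights produced by a per-node router. The router parameterization (a linear map followed by a sigmoid), the choice to condition the router on the stage input, and the toy per-node transformation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def node_transform(W, x):
    """Toy per-node transformation f^(j): a linear map followed by a ReLU."""
    return np.maximum(W @ x, 0.0)

def router_weights(R, x, num_inputs):
    """Router for node j: map a feature vector to continuous edge weights in (0, 1),
    one weight per preceding node, a DG-Net-style relaxation of the binary
    indicator 1_E(e(i, j)) in Eq. (1)."""
    logits = R[:num_inputs] @ x
    return 1.0 / (1.0 + np.exp(-logits))       # sigmoid

def stage_forward(x_in, Ws, Rs):
    """Forward pass through one stage viewed as a complete DAG.
    Node 0 is the input node; each later node aggregates all preceding node
    outputs with its input-dependent edge weights, then applies f^(j)."""
    feats = [x_in]                              # output of the input node
    for j in range(1, len(Ws) + 1):
        w = router_weights(Rs[j - 1], x_in, num_inputs=j)
        agg = sum(w[i] * feats[i] for i in range(j))
        feats.append(node_transform(Ws[j - 1], agg))
    return feats[-1]                            # output of the last node

d, num_nodes = 8, 4                             # toy feature size and nodes per stage
Ws = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(num_nodes)]
Rs = [rng.normal(size=(num_nodes, d)) for _ in range(num_nodes)]

x = rng.normal(size=d)
print(stage_forward(x, Ws, Rs).shape)           # (8,); the path depends on x via the routers
```

A gradient-based training loop would optimize both the node transformations and the router parameters jointly, which is what the continuous relaxation is meant to enable.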
To Understand Representation of Layer-aware Sequence Encoders as Multi-order-graph | In this paper, we propose a unified explanation of the representations of layer-aware neural sequence encoders, which regards the representation as a revisited multigraph called a multi-order-graph (MoG), so that model encoding can be viewed as a process of capturing all subgraphs in the MoG. The relationship reflected by the multi-order-graph, called n-order dependency, can express what the existing simple-directed-graph explanation cannot. Our proposed MoG explanation allows us to precisely observe every step of the generation of the representation and to put diverse relationships, such as syntax, into a unified framework. Based on the proposed MoG explanation, we further propose a graph-based self-attention network, the Graph-Transformer, which enhances the ability to capture subgraph information over current models. The Graph-Transformer accommodates different subgraphs into different groups, which allows the model to focus on salient subgraphs. Results of experiments on neural machine translation tasks show that the MoG-inspired model yields effective performance improvements.

1 INTRODUCTION

Vaswani et al. (2017) propose the self-attention (SAN)-based neural network (called the Transformer) for neural machine translation (NMT). As the Transformer is the state-of-the-art NMT model, several variants of it have been proposed for further performance improvement (Shaw et al., 2018; He et al., 2018) and for other natural language processing tasks such as language modeling (Devlin et al., 2019), parsing (Kitaev & Klein, 2018; Zhou & Zhao, 2019), etc. Similar to recurrent neural network (RNN)-based models (Kalchbrenner & Blunsom, 2013; Bahdanau et al., 2015; Sutskever et al., 2014), SAN-based models try to make the representation of one word contain information about the rest of the sentence in every layer. Empirically, one layer alone cannot produce satisfactory results; in the meantime, stacking layers may greatly increase the complexity of the model (Hao et al., 2019; Yang et al., 2019; Guo et al., 2019). Better understanding the representations may help solve this problem and further improve the performance of SAN-based models.

It is common to model the representation as a simple directed graph, which views words as nodes and relationships between words as edges. However, such an understanding of representations may still be insufficient to model the various and complicated relationships among words, such as syntax and semantics, let alone to present a unified explanation of the representations given by SAN- or RNN-based models (Eriguchi et al., 2016; Aharoni & Goldberg, 2017; Wang et al., 2018b). In addition, a simple directed graph mostly models the relationships among words but is incapable of modeling relationships among phrases or clauses.

To overcome the shortcomings of modeling the representation as a simple directed graph, and in the hope of further improving SAN-based models, we propose a novel explanation in which the representation generated by a SAN-based model can be viewed as a multigraph called a multi-order-graph (MoG). In a MoG, a set of nodes and the edges between these nodes form a subgraph. Meanwhile, an edge not only connects words but also connects the subgraphs to which the words belong. Thus we call the relationship reflected by a MoG an n-order dependency, where n is the number of words involved in this relationship.
With such an explanation, we can precisely observe every step of the generation of the representation, unify various complicated relationships such as syntax into n-order dependencies, and eventually understand the model encoding.

Inspired by our proposed explanation, we further propose a graph-based SAN-empowered Graph-Transformer, which enhances the ability of the current SAN-based sequence encoder to capture subgraph information. First, we define the full representation as the result of fusing all concerned subgraph representations. We then split the representation of one layer into two parts, a previous representation and an incremental representation. The previous representation reflects the full representation from the previous layer, and the incremental representation reflects the new information generated in the current layer. Based on this, the encoding process is modified to adapt to this division of the representation. We split the original self-attention into three independent parts to generate the incremental representation. Our method accommodates subgraphs of different orders into different parts of the incremental representation and reduces information redundancy. To fuse the full representation, we consider three fusing strategies with different weighting schemes, so as to let the model focus on the salient parts of the representation.

2 MULTI-ORDER-GRAPH EXPLANATION

In graph theory, a directed multigraph (or pseudograph) is a graph that may have multiple parallel edges, i.e., edges that share the same end nodes; two vertices may be connected by more than one directed edge. A multigraph is in fact sufficient to reflect the representation generated by the model after encoding, but its definition of an edge cannot reflect the relationship between subgraphs or the process of generation. In this paper, we propose a multigraph called the multi-order-graph (MoG) for the representation of the input, which redefines edges so as to reflect the relationships between nodes more accurately.

2.1 ENCODING OF MODELS

Generally speaking, the encoding of a sentence is a process that transfers a sequence of words into a sequence of vectors. During encoding, the model is treated as a fixed function, independent of the data, whose parameters do not change. The representation generated by the model therefore reflects only information about the input sentence.

2.2 MULTI-ORDER-GRAPH

We define a MoG as G = (V, E, SN, TN) over a given sentence S = {s_1, ..., s_n}, in which the nodes V = {v_1, ..., v_n} reflect the words of S, the edges E = {e_1, ..., e_m} reflect the relationships between the words of S, SN = {sn_1, ..., sn_m | sn_j ∈ V, 1 ≤ j ≤ m} is the set of source nodes of the edges in E, and TN = {tn_1, ..., tn_m | tn_j ∈ V, 1 ≤ j ≤ m} is the set of target nodes of the edges in E. A node v_i ∈ V in G can access the other nodes in one step. The information captured from S is split into two parts: (1) word information, contained in V, which reflects the individual words, and (2) relationship information, contained in E, which reflects the relationships between word pairs.

Note that E in G is the main difference between a MoG and a standard multigraph. As mentioned above, a MoG revises the definition of edges so as to reflect relationships between the subgraphs of G. In Section 2.4 we discuss the definition of an edge e_j ∈ E, of subgraphs, and of the relationship between edges and subgraphs in detail.

2.3 NODE AND WORD

As in a simple directed graph, the nodes in a MoG reflect the words of the input sentence, which means the number of nodes in the MoG is equal to the number of words of the input. Words are represented by the nodes of the MoG.
Without relationships between words, a MoG is just a set of graphs that each have only one node and no edges. In that case every word is independent of the others, and the model cannot enrich the word information.

2.4 EDGE AND SUBGRAPH

In this section, we define edges, subgraphs, and the relationship between edges and subgraphs in a MoG. A subgraph of G is a graph whose vertex set is a subset of V and whose edge set is a subset of E. We define Sub^G = {sub^G_1, ..., sub^G_p} as the set of all subgraphs of G. A subgraph can be written as sub^G_j = (V^G_j, E^G_j, SN^G_j, TN^G_j). The order of a subgraph sub^G_j is |V^G_j|, i.e., the number of nodes it contains, which is also the number of words involved in this subgraph. The simplest subgraph has one node and no edges, and its order is 1.

An edge in a MoG connects two nodes, just as in a simple directed graph. However, edges in a MoG reflect not only the relationship between words but also the relationship between subgraphs. Given one node pair, several edges may be generated because the nodes may belong to different subgraph pairs. p(v_i, v_i ∈ V^G_k | v_j, v_j ∈ V^G_h) is the conditional probability representing one relationship between v_i and v_j. This indicates that an edge e_j is determined by four variables: (1) the source node sn_j of edge e_j, (2) the target node tn_j of edge e_j, (3) the subgraph sub^G_k in which sn_j ∈ V^G_k, and (4) the subgraph sub^G_h in which tn_j ∈ V^G_h. When e_j is generated, it connects sub^G_k and sub^G_h and generates a new subgraph, which we call the related subgraph of e_j and denote by sub^G_{R(j)}, where R(j) is a function returning the identifier of the related subgraph of e_j. To reflect the importance of e_j and the complexity of sub^G_{R(j)}, we define the order of e_j, denoted o_j, as the order of sub^G_{R(j)}. We can therefore represent edge e_j by the 6-tuple e_j = (sn_j, tn_j, sub^G_k, sub^G_h, sub^G_{R(j)}, o_j). If we only focus on the source and target nodes, we can write (sn_j → tn_j, o_j) for e_j.

Figure 1 shows the generation of four kinds of subgraphs. To keep the generation process easy to follow, we focus only on subgraphs without loops or overlaps, which are the simplest kind of subgraph. Clearly, subgraphs and edges cannot be generated in arbitrary order: the order in which edges are generated is the order in which the related subgraphs of these edges are generated, which is also the order in which all subgraphs are generated. This means that subgraph (edge) generation is an iterative process, in which one subgraph (edge) relies on previously generated subgraphs (edges). We define an operation to express the process of generating subgraphs:

sub_k = (sub_i) → v_m ∪ (sub_j) → v_n.

This operation means that one new edge (v_m, v_n, sub_i, sub_j, sub_k, |V_i| + |V_j|) and one new subgraph sub_k are generated, where |V_i| and |V_j| are the orders of sub_i and sub_j, v_m ∈ sub_i is the source node of the new edge, v_n ∈ sub_j is the target node of the new edge, and sub_k is generated by connecting sub_i and sub_j. Note that the commutative, distributive, and associative properties do not apply to this formula. For example, the process of generating the subgraph in Figure 1(a) can be expressed as (((sub_a) → v_a ∪ (sub_b) → v_b) → v_b ∪ (sub_c) → v_c) → v_a ∪ ((sub_d) → v_d ∪ (sub_e) → v_e) → v_d, where sub_a, sub_b, sub_c, sub_d and sub_e are subgraphs with only one node. This also means that the process can be expressed as a binary tree.
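Below is a small, self-contained Python sketch of the subgraph-generation operation just described: edges are stored as 6-tuples whose last entry is the order (the sum of the orders of the two subgraphs being joined), and the running example reproduces the Figure 1(a) expression. The class name, and the choice to store node sets rather than subgraph objects inside the 6-tuple, are illustrative simplifications, not part of the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SubGraph:
    nodes: frozenset          # indices of the words covered by this subgraph
    edges: tuple = ()         # 6-tuples recording how the subgraph was built

    @property
    def order(self) -> int:   # order = number of nodes = words involved
        return len(self.nodes)

def connect(sub_i, v_m, sub_j, v_n):
    """Implements sub_k = (sub_i) -> v_m  U  (sub_j) -> v_n.

    A new directed edge from v_m (in sub_i) to v_n (in sub_j) is generated;
    its order is |V_i| + |V_j|, the order of the related subgraph sub_k.
    For simplicity the 6-tuple stores node sets instead of subgraph objects."""
    assert v_m in sub_i.nodes and v_n in sub_j.nodes
    nodes_k = sub_i.nodes | sub_j.nodes
    edge = (v_m, v_n, sub_i.nodes, sub_j.nodes, nodes_k,
            sub_i.order + sub_j.order)                     # the 6-tuple e_j
    sub_k = SubGraph(nodes_k, sub_i.edges + sub_j.edges + (edge,))
    return sub_k, edge

# One 1-order subgraph per word of a toy five-word sentence (a, b, c, d, e).
a, b, c, d, e = (SubGraph(frozenset({i})) for i in range(5))

# (((a)->a U (b)->b) -> b U (c)->c) -> a U ((d)->d U (e)->e) -> d
ab, _ = connect(a, 0, b, 1)            # 2-order dependency a -> b
abc, _ = connect(ab, 1, c, 2)          # 3-order dependency b -> c
de, _ = connect(d, 3, e, 4)            # 2-order dependency d -> e
full, e_last = connect(abc, 0, de, 3)  # 5-order dependency a -> d

print(full.order)                          # 5
print([edge[-1] for edge in full.edges])   # orders of the n-order dependencies: [2, 3, 2, 5]
```

The nesting of the calls mirrors the binary-tree reading of the generation process: each call joins two previously built subgraphs and records one new n-order dependency.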
In particular, given a sentence, if we add words to a subgraph according to the word order of the sentence, this subgraph can reflect the order of the sentence. As mentioned above, the relationship that e_j reflects is not only a relationship between words but also a relationship between subgraphs. In this paper, we call this relationship, which combines the relationship between words and the relationship between subgraphs, an n-order dependency, where n is equal to o_j. In fact, the n-order dependency relationship can conveniently model many relationships among words, typically various kinds of syntax.

| The paper proposes a new Transformer variant for neural machine translation. Compared with the standard Transformer framework, this work explains the representation generation process of the encoder via a multi-ordered-graph MoG and develops a novel Graph-Transformer method based on MoG, which is capable of capturing diverse relationships within the sequence. Empirical results over benchmark datasets validate the effectiveness of the proposed method. | SP:548aec7a3eab3e843017e91576c97c1c85c359f4 |
| The paper proposes a new multigraph architecture called Multi-Order-Graph to explain the representation generation process in neural sequence encoders (Self-Attention or SAN based models). The main contribution of this MoG is the introduction of n-order dependency which can model not only relationships between words but also high order relationships such as syntax and semantics between subgraphs. Taking inspiration from MoG explanation, a self-attention powered Graph Transformer is proposed which beats the Transformer baselines on NMT tasks (English-German and German-English). | SP:548aec7a3eab3e843017e91576c97c1c85c359f4 |
Average-case Acceleration for Bilinear Games and Normal Matrices | 1 INTRODUCTION

The traditional analysis of optimization algorithms is a worst-case analysis (Nemirovski, 1995; Nesterov, 2004). This type of analysis provides a complexity bound for any input from a function class, no matter how unlikely. However, since hard-to-solve inputs might rarely occur in practice, worst-case complexity bounds might not be representative of the observed running time. A more representative analysis is given by the average-case complexity, which averages the algorithm's complexity over all possible inputs. This analysis is standard for analyzing, e.g., sorting (Knuth, 1997) and cryptography algorithms (Katz & Lindell, 2014).

Recently, a line of work (Berthier et al., 2020; Pedregosa & Scieur, 2020; Lacotte & Pilanci, 2020; Paquette et al., 2020) focused on optimal methods for the optimization of quadratics specified by a symmetric matrix. While worst-case analysis uses bounds on the matrix eigenvalues to yield upper and lower bounds on convergence, average-case analysis relies on the expected distribution of eigenvalues and provides algorithms with sharp optimal convergence rates. While the algorithms developed in this context have been shown to be efficient for minimization problems, they have not been extended to smooth games. A different line of work considers algorithms for smooth games but studies worst-case optimal methods (Azizian et al., 2020). In this work, we combine average-case analysis with smooth games and develop novel average-case optimal algorithms for finding the root of a linear system determined by a (potentially non-symmetric) normal matrix. We make the following main contributions:

1. Inspired by the problem of finding equilibria in smooth games, we develop average-case optimal algorithms for finding the root of a non-symmetric affine operator, both under a normality assumption (Thm. 4.1) and under the extra assumption that the eigenvalues of the operator are supported in a disk (Thm. 4.2). The proposed method shows a polynomial speedup compared to the worst-case optimal method, verified by numerical simulations.

2. We make a novel connection between average-case optimal methods for optimization and average-case optimal methods for bilinear games. In particular, we show that solving the Hamiltonian using an average-case optimal method is optimal for bilinear games (Theorem 3.1). This result complements (Azizian et al., 2020), who proved that the Polyak heavy-ball algorithm on the Hamiltonian is asymptotically worst-case optimal for bilinear games.

2 AVERAGE-CASE ANALYSIS FOR NORMAL MATRICES

In this paper we consider the following class of problems.

Definition 1. Let A ∈ R^{d×d} be a real matrix and x⋆ ∈ R^d a vector. The non-symmetric (affine) operator (NSO) problem is defined as:

Find x such that F(x) := A(x − x⋆) = 0.   (NSO)

This problem generalizes the minimization of a convex quadratic function f, since we can cast the latter in this framework by setting the operator F = ∇f. The set of solutions is an affine subspace that we denote X⋆. We will find it convenient to consider the distance to this set, defined as

dist(x, X⋆) := min_{v ∈ X⋆} ‖x − v‖²,  with X⋆ = {x ∈ R^d | A(x − x⋆) = 0}.   (1)

In this paper we develop average-case optimal methods. For this, we consider A and x⋆ to be random, together with a random initialization x0.
This induces a probability distribution over NSO problems, and we seek methods that have optimal expected suboptimality w.r.t. this distribution. Denoting by E_{(A, x⋆, x0)} the expectation over these random problems, average-case optimal methods verify the following property at each iteration t:

min_{x_t} E_{(A, x⋆, x0)} dist(x_t, X⋆)  s.t.  x_i ∈ x0 + span({F(x_j)}_{j=0}^{i−1}),  ∀i ∈ [1 : t].   (2)

The last condition on x_t stems from restricting the class of algorithms to first-order methods. The class of first-order methods encompasses many known schemes such as gradient descent with momentum or full-matrix AdaGrad. However, methods such as Adam (Kingma & Ba, 2015) or diagonal AdaGrad (Duchi et al., 2011) are not in this class, as the diagonal re-scaling creates iterates x_t outside the span of the previous gradients. Although we focus on the distance to the solution, the results can be extended to other convergence criteria such as ‖F(x_t)‖². Finally, note that the expectations in this paper are over the problem instance and not over any randomness of the algorithm.

2.1 ORTHOGONAL RESIDUAL POLYNOMIALS AND FIRST-ORDER METHODS

The analysis of first-order methods simplifies through the use of polynomials. This section provides the tools required to leverage this connection.

Definition 2. A residual polynomial is a polynomial P that satisfies P(0) = 1.

Proposition 2.1. (Hestenes et al., 1952) If the sequence (x_t)_{t∈Z+} is generated by a first-order method, then there exist residual polynomials P_t, each of degree at most t, verifying

x_t − x⋆ = P_t(A)(x0 − x⋆).   (3)

As we will see, optimal average-case methods are strongly related to orthogonal polynomials. We first define the inner product between polynomials, where we use z* for the complex conjugate of z ∈ C.

Definition 3. For P, Q ∈ R[X], we define the inner product 〈·, ·〉_µ for a measure µ over C as

〈P, Q〉_µ := ∫_C P(λ) Q(λ)* dµ(λ).   (4)

Definition 4. A sequence of polynomials {P_i} is orthogonal (resp. orthonormal) w.r.t. 〈·, ·〉_µ if 〈P_i, P_i〉_µ > 0 (resp. = 1) and 〈P_i, P_j〉_µ = 0 if i ≠ j.

2.2 EXPECTED SPECTRAL DISTRIBUTION

Following (Pedregosa & Scieur, 2020), we make the following assumption on the problem family.

Assumption 1. x0 − x⋆ is independent of A, and E_{(x0, x⋆)}[(x0 − x⋆)(x0 − x⋆)^⊤] = (R²/d) I_d.

We also require the following definitions to characterize the difficulty of a problem class. Let {λ_1, ..., λ_d} be the eigenvalues of a matrix A ∈ R^{d×d}. We define the empirical spectral distribution of A as the probability measure

µ̂_A(λ) := (1/d) Σ_{i=1}^{d} δ_{λ_i}(λ),   (5)

where δ_{λ_i} is the Dirac delta, a distribution equal to zero everywhere except at λ_i and whose integral over the entire real line is equal to one. Note that with this definition, ∫_D dµ̂_A(λ) corresponds to the proportion of eigenvalues in D. When A is a matrix-valued random variable, µ̂_A is a measure-valued random variable. As such, we can define its expected spectral distribution

µ_A := E_A[µ̂_A],   (6)

which by the Riesz representation theorem is the measure that verifies ∫ f dµ_A = E_A[∫ f dµ̂_A] for all measurable f. Surprisingly, the expected spectral distribution is the only characteristic required to design optimal algorithms in the average case.

2.3 EXPECTED ERROR OF FIRST-ORDER METHODS
In this section we provide an expression for the expected convergence in terms of the residual polynomial and the expected spectral distribution introduced in the previous section. To go further in the analysis, we have to assume that A is a normal matrix.

Assumption 2. The (real) random matrix A is normal, that is, it verifies A A^⊤ = A^⊤ A.

Normality is equivalent to A having the spectral decomposition A = U Λ U*, where U is unitary, i.e., U*U = UU* = I. We now have everything needed to write the expected error of a first-order algorithm applied to (NSO).

Theorem 2.1. Consider the application of a first-order method associated with the sequence of polynomials {P_t} (Proposition 2.1) to the problem (NSO). Let µ be the expected spectral distribution of A. Under Assumptions 1 and 2, we have

E[dist(x_t, X⋆)] = R² ∫_{C\{0}} |P_t|² dµ.   (7)

Before designing optimal algorithms for specific distributions, we compare our setting with the average-case acceleration for minimization problems of Pedregosa & Scieur (2020), who proposed optimal optimization algorithms in the average case.

2.4 DIFFICULTIES OF FIRST-ORDER METHODS ON GAMES AND RELATED WORK

This section compares our contribution with the existing framework of average-case optimal methods for quadratic minimization problems.

Definition 5. Let H ∈ R^{d×d} be a random symmetric positive-definite matrix and x⋆ ∈ R^d a random vector. These elements determine the following random quadratic minimization problem:

min_{x ∈ R^d} { f(x) := (1/2)(x − x⋆)^⊤ H (x − x⋆) }.   (OPT)

As in our paper, Pedregosa & Scieur (2020) find deterministic optimal first-order algorithms in expectation w.r.t. the matrix H, the solution x⋆, and the initialization x0. Since they work with problem (OPT), their problem is equivalent to (NSO) with the matrix A = H. However, they make the stronger assumption that the matrix is symmetric, which implies normality. The normality assumption is restrictive in the context of game theory, as normal matrices do not always arise naturally in such applications. However, this set is expressive enough to cover interesting cases, such as bilinear games, and our experiments show that our findings are also consistent with non-normal matrices.

Using orthogonal residual polynomials and spectral distributions, they derive an explicit formula for the expected error. Their result is similar to Theorem 2.1, but the major difference is the domain of the integral: the positive real line in convex optimization versus a region of the complex plane in our case. This region plays a crucial role in the convergence rate of first-order algorithms, as shown in the work of Azizian et al. (2020) and Bollapragada et al. (2018). In the case of optimization methods, they show that average-case optimal schemes follow a simple three-term recurrence arising from the three-term recurrence for residual orthogonal polynomials for the measure λµ(λ). Indeed, by Theorem 2.1 the optimal method corresponds to the residual polynomials minimizing 〈P, P〉_µ, and the following result holds:

Theorem 2.2. (Fischer, 1996, §2.4) When µ is supported on the real line, the residual polynomial of degree t minimizing 〈P, P〉_µ is given by the degree-t residual orthogonal polynomial w.r.t. λµ(λ).
However, the analogous result does not hold for general measures on C, and hence our arguments instead make use of the following Theorem 2.3, which links the residual polynomial of degree at most t that minimizes 〈P, P〉_µ to the sequence of orthonormal polynomials for µ.

Theorem 2.3. [Theorem 1.4 of Assche (1997)] Let µ be a positive Borel measure in the complex plane. The minimum of the integral ∫_C |P(λ)|² dµ(λ) over residual polynomials P of degree at most t is uniquely attained by the polynomial

P⋆(λ) = ( Σ_{k=0}^{t} φ_k(λ) φ_k(0)* ) / ( Σ_{k=0}^{t} |φ_k(0)|² ),  with optimal value  ∫_C |P⋆(λ)|² dµ(λ) = 1 / ( Σ_{k=0}^{t} |φ_k(0)|² ),   (8)

where (φ_k)_k is the orthonormal sequence of polynomials with respect to the inner product 〈·, ·〉_µ. In the next sections we consider cases where the optimal scheme is identifiable.

| In this submission, first-order methods for solving smooth games are studied in the average case. In particular, first-order methods are derived and studied that are average-case optimal for certain optimization problems. In particular average-optimal first-order methods for solving zero-sum minimax games are presented. Also for finding the root of non-symmetric affine operators average-case optimal operators are derived if either the relevant matrix is normal or the eigenvalues are supported in a disk. Some experiments with the derived methods are conducted but the focus lies clearly on the theoretical results. | SP:ff34a84b45570d684598dda4a9cd63be2a459e51 |
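To make Theorem 2.3 concrete, here is a small NumPy sketch that builds the orthonormal polynomials φ_0, ..., φ_t for a discrete measure µ (equal mass on sampled points of the complex plane) by modified Gram–Schmidt on the monomials, forms the optimal residual polynomial P⋆ of Eq. (8), and numerically checks the stated optimal value. The discretization of µ and the Gram–Schmidt construction are standard numerical choices made for illustration; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# A discrete stand-in for mu: equal mass on m sampled "eigenvalues" lying in a
# rectangle of the complex plane away from the origin.
m = 400
lam = (0.5 + 2.0 * rng.random(m)) + 1j * (2.0 * rng.random(m) - 1.0)
w = np.full(m, 1.0 / m)                        # weights of the discrete measure

def inner(p, q):
    """<P, Q>_mu = sum_i w_i P(lam_i) conj(Q(lam_i)) for the discrete measure."""
    return np.sum(w * p * np.conj(q))

t = 5
V = np.vander(lam, t + 1, increasing=True)     # monomials 1, lam, ..., lam^t on supp(mu)
V0 = np.zeros(t + 1, dtype=complex)
V0[0] = 1.0                                    # the same monomials evaluated at 0

# Modified Gram-Schmidt: phi[:, k] holds the values of the orthonormal
# polynomial phi_k on the support of mu, and phi0[k] holds phi_k(0).
phi = np.zeros((m, t + 1), dtype=complex)
phi0 = np.zeros(t + 1, dtype=complex)
for k in range(t + 1):
    p, p0 = V[:, k].astype(complex), V0[k]
    for j in range(k):
        c = inner(p, phi[:, j])
        p, p0 = p - c * phi[:, j], p0 - c * phi0[j]
    nrm = np.sqrt(inner(p, p).real)
    phi[:, k], phi0[k] = p / nrm, p0 / nrm

# Optimal residual polynomial of Eq. (8), evaluated on the support of mu.
denom = np.sum(np.abs(phi0) ** 2)
P_star = (phi @ np.conj(phi0)) / denom         # P*(lam_i); note P*(0) = 1 by construction

lhs = np.sum(w * np.abs(P_star) ** 2)          # integral of |P*|^2 d(mu)
print(lhs, 1.0 / denom)                        # the two sides of Eq. (8), which should agree
```

In the paper's setting this quantity, scaled by R², is exactly the best expected error achievable after t iterations by any first-order method on problems whose expected spectral distribution is µ (Theorem 2.1 combined with Theorem 2.3).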