{ "ID": "0v4VkCSkHNm", "Title": "Priors, Hierarchy, and Information Asymmetry for Skill Transfer in Reinforcement Learning", "Keywords": "Skills, Transfer Learning, Reinforcement Learning", "URL": "https://openreview.net/forum?id=0v4VkCSkHNm", "paper_draft_url": "/references/pdf?id=MyhHvrNBXI", "Conferece": "ICLR_2023", "track": "Reinforcement Learning (eg, decision and control, planning, hierarchical RL, robotics)", "acceptance": "Accept: poster", "review_scores": "[['3', '6', '3'], ['3', '6', '3'], ['2', '5', '4'], ['3', '8', '4']]", "input": { "source": "CRF", "title": "Priors, Hierarchy, and Information Asymmetry for Skill Transfer in Reinforcement Learning", "authors": [], "emails": [], "sections": [ { "heading": "1 INTRODUCTION", "text": "While Reinforcement Learning (RL) algorithms recently achieved impressive feats across a range of domains (Silver et al., 2017; Mnih et al., 2015; Lillicrap et al., 2015), they remain sample inefficient (Abdolmaleki et al., 2018; Haarnoja et al., 2018b) and are therefore of limited use for real-world robotics applications. Intelligent agents during their lifetime discover and reuse skills at multiple levels of behavioural and temporal abstraction to efficiently tackle new situations. For example, in manipulation domains, beneficial abstractions could include low-level instantaneous motor primitives as well as higher-level object manipulation strategies. Endowing lifelong learning RL agents (Parisi et al., 2019) with a similar ability could be vital towards attaining comparable sample efficiency.\nTo this end, two paradigms have recently been introduced. KL-regularized RL (Teh et al., 2017; Galashov et al., 2019) presents an intuitive approach for automating skill reuse in multi-task learning. By regularizing policy behaviour against a learnt task-agnostic prior, common behaviours across tasks are distilled into the prior, which encourages their reuse. Concurrently, hierarchical RL also enables skill discovery (Wulfmeier et al., 2019; Merel et al., 2020; Hausman et al., 2018; Haarnoja et al., 2018a; Wulfmeier et al., 2020) by considering a two-level hierarchy in which the high-level policy is task-conditioned, whilst the low-level remains task-agnostic. The lower level of the hierarchy therefore also discovers skills that are transferable across tasks. Both hierarchy and priors offer their own skill abstraction. However, when combined, hierarchical KL-regularized RL can discover multiple abstractions. Whilst prior methods attempted this (Tirumala et al., 2019; 2020; Liu et al., 2022; Goyal et al., 2019), the transfer benefits from learning both abstractions varied drastically, with approaches like Tirumala et al. (2019) unable to yield performance gains.\nIn fact, successful transfer for all approaches critically depends on the correct choice of information asymmetry (IA). IA more generally refers to an asymmetric masking of information across architectural modules. This masking forces independence to, and ideally generalisation across, the masked dimensions (Galashov et al., 2019). For example, for self-driving cars, by conditioning the prior only on proprioceptive information it discovers skills independent to, and shared across, global\ncoordinate frames. Therefore, IA crucially biases learnt behaviours and how they transfer across environments. Previous works predefined their IAs, which were primarily chosen on intuition and independent of domain. 
In addition, previously explored asymmetries were narrow (Table 1), which if sub-optimal, limit transfer benefits. We demonstrate that this indeed is the case for many methods on our domains (Galashov et al., 2019; Bagatella et al., 2022; Tirumala et al., 2019; 2020; Pertsch et al., 2021; Wulfmeier et al., 2019). A more systematic, theoretically and data driven, domain dependent, approach for choosing IA is thus required to maximally benefit from skills for transfer learning.\nIn this paper, we employ hierarchical KL-regularized RL to effectively transfer skills across sequential tasks. We begin by theoretically and empirically showing the crucial expressivity-transferability tradeoff, controlled by choice of IA, of skills across sequential tasks. We demonstrate this by ablating over a wide range of asymmetries (including previously unexplored ones) between the hierarchical policy and prior. We show the inefficiencies of previous methods that choose highly sub-optimal IAs for our various domains, drastically limiting transfer performance. Given this insight, we introduce APES, \u2018Attentive Priors for Expressive and Transferable Skills\u2019 as a method that forgoes user intuition and automates the choice of IA in a data driven, domain dependent, manner. APES builds on our expressivity-transferability theorems to learn the choice of asymmetry between policy and prior. Specifically, APES conditions the prior on the entire history, allowing for expressive skills to be discovered, and learns a low-entropic attention-map over the input, paying attention only where necessary, to minimise covariate shift and improve transferability across domains. Experiments over a wide range of domains (including a complex robot block stacking one), of varying levels of sparsity and extrapolation, demonstrate APES\u2019 consistent superior performance over existing methods, whilst automating IA choice and by-passing arduous IA sweeps. Further ablations show the importance of combining hierarchy and priors for discovering expressive multi-modal behaviours." }, { "heading": "2 SKILL TRANSFER IN REINFORCEMENT LEARNING", "text": "We consider multi-task reinforcement learning in Partially Observable Markov Decision Processes (POMDPs), defined by Mk = (S,X ,A, rk, p, p0k, \u03b3), with tasks k sampled from p(K). S, A, X denote observation, action, history spaces. p(x\u2032|x,a) : X \u00d7 X \u00d7 A \u2192 R\u22650 is the dynamics model. We denote the history of observations s \u2208 S, actions a \u2208 A up to timestep t as xt = (s0,a0, s1,a1, . . . , st). Reward function rk : X \u00d7A\u00d7K \u2192 R is history-, action- and task-dependent." }, { "heading": "2.1 KL-REGULARIZED REINFORCEMENT LEARNING", "text": "The typical multi-task KL-regularized RL objective (Todorov, 2007; Kappen et al., 2012; Rawlik et al., 2012; Schulman et al., 2017) takes the form:\nO(\u03c0, \u03c00) = E\u03c4\u223cp\u03c0(\u03c4), k\u223cp(K) [ \u221e\u2211 t=0 \u03b3t ( rk(xt,at)\u2212 \u03b10DKL (\u03c0(a|xt, k) \u2225 \u03c00(a|xt)) )] (1)\nwhere \u03b3 is the discount factor and \u03b10 weighs the individual objective terms. \u03c0 and \u03c00 denote the task-conditioned policy and task-agnostic prior respectively. The expectation is taken over tasks and trajectories \u03c4 from policy \u03c0 and initial state distribution p0k(s0), (i.e. p\u03c0(\u03c4)). Summation over t occurs across all episodic timesteps. 
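To make the objective concrete, the following is a minimal sketch of a Monte-Carlo estimate of Equation (1) for a single trajectory, assuming, purely for illustration, diagonal Gaussian action distributions for policy and prior; function and variable names are ours and not from any released code.

import torch
from torch.distributions import Normal, kl_divergence

def kl_regularized_return(rewards, pi_mu, pi_std, prior_mu, prior_std,
                          alpha0=1.0, gamma=0.99):
    """Monte-Carlo estimate of the objective in Eq. (1) for one trajectory.
    rewards: [T]; the mu/std tensors: [T, action_dim]."""
    pi = Normal(pi_mu, pi_std)            # task-conditioned policy pi(a | x_t, k)
    prior = Normal(prior_mu, prior_std)   # task-agnostic prior pi_0(a | x_t)
    kl = kl_divergence(pi, prior).sum(-1)                 # per-step KL over action dims
    discounts = gamma ** torch.arange(len(rewards), dtype=rewards.dtype)
    return (discounts * (rewards - alpha0 * kl)).sum()    # sum_t gamma^t (r_t - alpha0 * KL_t)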
When optimised with respect to \u03c0, this objective can be viewed as a trade-off between maximising rewards whilst remaining close to trajectories produced by \u03c00. When \u03c00 is learnt, it can learn shared behaviours across tasks and bias multi-task exploration (Teh et al., 2017). We consider the sequential learning paradigm, where skills are learnt from past tasks, psource(K), and leveraged while attempting the transfer set of tasks, ptrans(K)." }, { "heading": "2.2 HIERARCHICAL KL-REGULARIZED REINFORCEMENT LEARNING", "text": "While KL-regularized RL has achieved success across various settings (Abdolmaleki et al., 2018; Teh et al., 2017; Pertsch et al., 2020; Haarnoja et al., 2018a), recently Tirumala et al. (2019) proposed a hierarchical extension where policy \u03c0 and prior \u03c00 are augmented with latent variables, \u03c0(a, z|x, k) = \u03c0H(z|x, k)\u03c0L(a|z,x) and \u03c00(a, z|x) = \u03c0H0 (z|x)\u03c0L0 (a|z,x), where subscripts H and L denote the higher and lower hierarchical levels. This structure encourages the shared low-level policy (\u03c0L = \u03c0L0 ) to discover task-agnostic behavioural primitives, whilst the high-level discovers higher-level task relevant skills. By not conditioning the high-level prior on task-id, Tirumala et al. (2019) encourage the reuse of common high-level abstractions across tasks. They also propose the following upper bound for approximating the KL-divergence between hierarchical policy and prior:\nDKL (\u03c0(a|x) \u2225 \u03c00(a|x)) \u2264 DKL ( \u03c0H(z|x) \u2225\u2225 \u03c0H0 (z|x))+ E\u03c0H [DKL (\u03c0L(a|x, z) \u2225\u2225 \u03c0L0 (a|x, z))] (2) We omit task conditioning and declaring explicitly shared modules to emphasise this bound is agnostic to these choices.\n2.3 INFORMATION ASYMMETRY\nInformation Asymmetry (IA) is a key component in both of the aforementioned approaches, promoting the discovery of behaviours that generalise. IA can be understood as the masking of information accessible by certain modules. Not conditioning on specific environment aspects forces independence and generalisation across them (Galashov et al., 2019). In the context of (hierarchical) KL-\nregularized RL, the explored asymmetries between the (high-level) policy, \u03c0(H), and prior, \u03c0(H)0 , have been narrow (Tirumala et al., 2019; 2020; Liu et al., 2022; Pertsch et al., 2020; 2021; Rao et al., 2021; Ajay et al., 2020; Goyal et al., 2019). Concurrent with our research, Bagatella et al. (2022) published work exploring a wider range of asymmetries, closer to those we explore. We summarise explored asymmetries in Table 1 (with at\u2212n:t representing action history up to n steps in the past).\nChoice of information conditioning heavily influences which skills can be uncovered and how well they transfer. For example, Pertsch et al. (2020) discover observation-dependent behaviours, such as navigating corridors in maze environments, yet are unable to learn history-dependent skills, such as never traversing the same corridor twice. In contrast, Liu et al. (2022), by conditioning on history, are able to learn these behaviours. However, as we will show, in many scenarios, na\u0131\u0308vely conditioning on entire history can be detrimental for transfer, by discovering behaviours that do not generalise favourably across history instances, between tasks. Crucially, all previous works predefine the choice of asymmetry, based on the practitioner\u2019s intuition, that may be sub-optimal for skill transfer. 
By introducing theory behind the expressivity-transferability of skills, we present a simple data-driven method for automating the choice of IA by learning it, yielding transfer benefits." }, { "heading": "3 MODEL ARCHITECTURE AND THE EXPRESSIVITY-TRANSFERABILITY TRADE-OFF", "text": "Figure 1: Hierarchical KL-regularized architecture. The hierarchical policy modules πH and πL are regularized against their corresponding prior modules πHi and πLi. The inputs to each module are filtered by an information gating function, depicted with a colored rectangle.
To rigorously investigate the contribution of priors, hierarchy, and information asymmetry to skill transfer, it is important to isolate each individual mechanism while enabling the recovery of previous models of interest. To this end, we present the unified architecture in Fig. 1, which introduces information gating functions (IGFs) as a means of decoupling IA from architecture. Each component has its own IGF, depicted with a colored rectangle. Every module is fed all environment information xk = (x, k), and distinctly chosen IGFs mask which part of the input each network has access to, thereby influencing which skills they learn. By presenting multiple priors, we enable a comparison with existing literature. With the right masking, one can recover previously investigated asymmetries (Tirumala et al., 2019; 2020; Pertsch et al., 2020; Bagatella et al., 2022; Goyal et al., 2019), explore additional ones, and also express purely hierarchical (Wulfmeier et al., 2019) and KL-regularized equivalents (Galashov et al., 2019; Haarnoja et al., 2018c)." }, { "heading": "3.1 THE INFORMATION ASYMMETRY EXPRESSIVITY-TRANSFERABILITY TRADE-OFF", "text": "While existing works investigating the role of IAs for skill transfer have focused on multi-task learning (Galashov et al., 2019)1 (specifically on policy π regularization), we focus on the sequential task setting (particularly the ability of the prior π0 to handle covariate shift).
1Concurrent with our research, Bagatella et al. (2022) also investigated various IAs for sequential transfer.
In contrast to multi-task learning, in the sequential setting there exist abrupt non-stationarities in the task p(K) and trajectory pπ(τ) distributions. As such, here it is particularly important that priors handle non-stationarity and the associated covariate shift (see Theorem 3.1 for a definition). IA choice plays a crucial role, influencing the level of covariate shift encountered by the prior across tasks:
Theorem 3.1. The more random variables a network depends on, the larger the covariate shift (input distributional shift, here represented by KL-divergence) encountered across sequential tasks. That is, for distributions p, q and inputs b, c such that b = (b0, b1, ..., bn) and c ⊂ b:
DKL (p(b) || q(b)) ≥ DKL (p(c) || q(c)).
Proof. See Appendix B.1.
In our case, p and q can be interpreted as the training psource(·) and transfer ptrans(·) distributions over network inputs, such as the history xt for the high-level prior πH0. Intuitively, Theorem 3.1 states that the more variables you condition your network on, the less likely it is to transfer, due to the increased covariate shift encountered between source and transfer domains, thus promoting minimal information conditioning. 
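As a quick sanity check of Theorem 3.1, the toy computation below (ours, with arbitrary parameters and independent Gaussian input dimensions) verifies numerically that the covariate shift measured over a subset of the inputs can never exceed that measured over the full input; the history-conditioning example that follows makes the same point in our setting.

import numpy as np

def kl_diag_gauss(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, diag(var_p)) || N(mu_q, diag(var_q)) ) for diagonal Gaussians."""
    return 0.5 * np.sum(np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# b = (b0, b1, b2); c = (b0,) is a strict subset of b. Parameters are arbitrary.
mu_src, var_src = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])
mu_trn, var_trn = np.array([0.1, 1.5, -2.0]), np.array([1.0, 2.0, 0.5])

kl_full = kl_diag_gauss(mu_src, var_src, mu_trn, var_trn)                     # shift over b
kl_subset = kl_diag_gauss(mu_src[:1], var_src[:1], mu_trn[:1], var_trn[:1])   # shift over c
assert kl_full >= kl_subset   # Theorem 3.1: conditioning on less cannot increase the shift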
For example, imagine conditioning the high-level prior on either the entire history x0:t or a subset of it xt\u2212n:t, n \u2208 [0, t\u2212 1] (the subscript referring to the range of history values). According to Theorem 3.1, the covariate shift across sequential tasks will be smaller if we condition on a subset of the history, DKL (psource(x0:t) \u2225 ptrans(x0:t)) \u2265 DKL (psource(xt\u2212n:t) \u2225 ptrans(xt\u2212n:t)). Interestingly, covariate shift is upper-bounded by trajectory shift: DKL (p\u03c0source(\u03c4) \u2225 p\u03c0trans(\u03c4)) \u2265 DKL (p\u03c0source(\u03c4f ) \u2225 p\u03c0trans(\u03c4f )) (using Theorem 3.1), with the right hand side representing covariate shift over network inputs \u03c4f = IGF (\u03c4), filtered trajectories (e.g. \u03c4f = xt\u2212n:t, \u03c4f \u2282 \u03c4 ), and \u03c0source, \u03c0trans, source and transfer domain policies. It is therefore crucial, if possible, to minimise both trajectory and covariate shifts across domains, to benefit from previous skills. Nevertheless, the less information a prior is conditioned on, the less knowledge that can be distilled and transferred:\nTheorem 3.2. The more random variables a network depends on, the greater its ability to distil knowledge in the expectation (output distributional shift between network and target distribution, here represented by the expected KL-divergence). That is, for target distribution p and network q with outputs a and possible inputs b, c, d, such that b = (b0, b1, ..., bn) , d \u2282 c \u2282 b , e \u2208 d\u2295 c:\nEq(e|d) [DKL (p(a|b) \u2225 q(a|c))] \u2264 DKL (p(a|b) \u2225 q(a|d)) .\nProof. See Appendix B.2.\nIn this particular instance, p and q could be interpreted as policy \u03c0 and prior \u03c00 distributions, a as action at, b as history x0:t, and c, d, e as subsets of the history (e.g. xt\u2212n:t, xt\u2212m:t, xt\u2212n:t\u2212m respectively, with n > m and m & n \u2208 [0, t]), with e denoting the set of variables in c but not d . Intuitively, Theorem 3.2 states in the expectation, conditioning on more information improves knowledge distillation between policy and prior (e.g. E\u03c00(xt\u2212n:t\u2212m|xt\u2212m:t) [DKL (\u03c0(at|x0:t) \u2225 \u03c00(at|xt\u2212n:t))] \u2264 DKL (\u03c0(at|x0:t) \u2225 \u03c00(at|xt\u2212m:t)), with \u03c00(xt\u2212n:t\u2212m|xt\u2212m:t) the conditional distribution, induced by \u03c00, of history subset xt\u2212n:t\u2212m given xt\u2212m:t). Therefore, IA leads to an expressivity-transferability trade-off of skills (Theorems 3.1 and 3.2). Interestingly, hierarchy does not influence covariate shift and hence does not hurt transferability, but it does increase network expressivity (e.g. of the prior), enabling the distillation and transfer of rich multi-modal behaviours present in the real-world." }, { "heading": "4 APES: ATTENTIVE PRIORS FOR EXPRESSIVE AND TRANSFERABLE SKILLS", "text": "While previous works chose IA on intuition (Tirumala et al., 2019; 2020; Galashov et al., 2019; Pertsch et al., 2020; Wulfmeier et al., 2019; Bagatella et al., 2022; Singh et al., 2020; Ajay et al., 2020; Liu et al., 2022) we propose learning it. Consider the information gating functions (IGFs) introduced in Section 3 and depicted in Figure 1. Existing methods can be recovered by having the IGFs perform hard attention: IGF (xk) = m\u2299 xk, with m \u2208 {0, 1}dim(xk), predefined and static, and \u2299 representing element-wize multiplication. 
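To make hard attention concrete, the sketch below (ours; the flat input layout and index boundaries are illustrative assumptions) builds a binary mask over a tiled history of observations, actions, and a task one-hot, of the kind that recovers the fixed asymmetries of Table 1.

import numpy as np

def make_hard_mask(dim_obs, dim_act, history_len, task_dim,
                   keep_obs_steps=1, keep_act_steps=0, keep_task=False):
    """Builds a binary mask m over [obs_{t-H:t}, act_{t-H:t-1}, task one-hot]."""
    m_obs = np.zeros((history_len + 1, dim_obs))
    if keep_obs_steps > 0:
        m_obs[-keep_obs_steps:] = 1.0          # keep the most recent observations
    m_act = np.zeros((history_len, dim_act))
    if keep_act_steps > 0:
        m_act[-keep_act_steps:] = 1.0          # keep the most recent actions
    m_task = np.ones(task_dim) if keep_task else np.zeros(task_dim)
    return np.concatenate([m_obs.ravel(), m_act.ravel(), m_task])

def igf(x_k, m):
    return m * x_k   # element-wise hard gating of the full input

# e.g. an observation-only prior keeps just s_t and masks all history and task information:
m_obs_only = make_hard_mask(dim_obs=6, dim_act=2, history_len=20, task_dim=5,
                            keep_obs_steps=1, keep_act_steps=0, keep_task=False)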
In contrast, we propose performing soft attention with m \u2208 [0, 1]dim(xk) and learn m based on: 1) the hierarchical KL-regularized RL objective (Equations (1) and (2)); 2) OIGF (m) = \u2212H(m),H denoting entropy (calculated by turning m into a probability distribution by performing Softmax over it), thereby encouraging low entropic, sparse IGFs (similar to Salter et al. (2019) applying a related technique for sim2real transfer):\nOAPES(\u03c0, \u03c00) = E\u03c4\u223cp\u03c0(\u03c4), k\u223cp(K) [ \u221e\u2211 t=0 \u03b3t ( rk(xt,at)\u2212 \u03b10DKL (\u03c0(a|xk) \u2225 \u03c00(a|xk)) )] \u2212 \u2211 i \u03b1miH(mi) (3)\nWith \u03b1mi weighing the relative importance of each objective for each attention mask mi for each module using self-attention (e.g. \u03c0H0 ). We train off-policy akin to SAC (Haarnoja et al., 2018b), sampling experiences from the replay buffer, approximating the return of the agent using Retrace (Munos et al., 2016) and double Q-learning (Hasselt, 2010) to train our critic. Refer to Appendices A and D for full training details. By exposing the IGFs to all available information xk, we enable the discovery of expressive skills with complex, potentially long-range, temporal dependencies (Theorem 3.2). By encouraging low-entropic masks mi, we promote minimal information conditioning (by limiting the IGF\u2019s channel capacity) whilst still capturing expressive behaviours. This is achieved by paying attention only where necessary to key environment aspects (Salter et al., 2022) that are crucial for decision making and hence heavily influence behaviour expressivity. Minimising the dependence on redundant information (aspects of the observation s, action a, or history x spaces that behaviours are independent to), we minimise covariate shift and improve the transferability of skills to downstream domains (Theorem 3.1). Consider learning the IGF of high-level prior \u03c0H0 for a humanoid navigation task. Low-level skills \u03c0L could correspond to motor-primitives, whilst the high-level prior could represent navigation skills. For navigation, joint quaternions are not relevant, but the Cartesian position is. By learning to mask parts of the state-space corresponding to joints, the agent becomes invariant and robust to covariate shifts across these dimensions (unseen joint configurations). We call our method APES, \u2018Attentive Priors for Expressive and Transferable Skills\u2019." }, { "heading": "4.1 TRAINING REGIME AND THE INFORMATION ASYMMETRY SETUP", "text": "We are concerned with investigating the roles of priors, hierarchy and IA for transfer in sequential task learning, where skills learnt over past tasks psource(K) are leveraged for transfer tasks ptrans(K). While one can investigate IA between hierarchical levels (\u03c0H , \u03c0L) as well as between policy and prior (\u03c0, \u03c00), we concern ourselves solely with the latter. Specifically, to keep our comparisons with existing literature fair, we condition \u03c0L on st and zt, and share it with the prior, \u03c0L = \u03c0L0 , thus enabling expressive multi-modal behaviours to be discovered with respect to st (Tirumala et al., 2019; 2020; Wulfmeier et al., 2019). In this paper, we focus on the role of IA between high-level policy \u03c0H and prior \u03c0H0 for supporting expressive and transferable high-level skills between tasks. 
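Before detailing the training regime, the sketch below shows one possible form of the learnable gate of Equation (3); the sigmoid parameterisation of m and all names are assumptions made for illustration, while the softmax-based entropy penalty mirrors H(m).

import torch
import torch.nn.functional as F

class SoftIGF(torch.nn.Module):
    """Learnable soft information gating function: IGF(x_k) = m * x_k, with m in [0, 1]^dim."""
    def __init__(self, input_dim):
        super().__init__()
        self.mask_logits = torch.nn.Parameter(torch.zeros(input_dim))

    def mask(self):
        return torch.sigmoid(self.mask_logits)   # soft attention weights m

    def forward(self, x_k):
        return self.mask() * x_k                 # element-wise gating of the full input

    def entropy_penalty(self):
        # Turn m into a probability distribution via Softmax, then penalise its entropy H(m),
        # encouraging sparse, low-entropy attention over the input.
        p = F.softmax(self.mask(), dim=0)
        return -(p * torch.log(p + 1e-8)).sum()

In training, alpha_m * entropy_penalty() would simply be added to the loss alongside the RL or behavioural-cloning terms, corresponding to the −∑i αmi H(mi) term of the maximised objective in Equation (3).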
Specifically, we learn skills over source domains using variational behavioural cloning from an expert policy πe:
Obc(π, π0) = ∑i∈{0,e} OAPES(π, πi), with rk = 0, γ = 1, and samples collected from πe. (4)
Equation (4) can be viewed as hierarchical KL-regularized RL in the absence of rewards and with two priors: the one we learn, π0, and the expert, πe. See Appendix A.2 for a deeper discussion of the similarities with KL-regularized RL. We then transfer the skills and solve the transfer domains using hierarchical KL-regularized RL (as per Equation (3)). To compare the influence of distinct IAs for transfer in a controlled way, we propose the following regime: Stage 1) train a single hierarchical policy π in the multi-task setup using Equation (4), but prevent gradient flow from the prior π0 to the policy. Simultaneously, based on the ablation, train distinct priors (with differing IAs) on Equation (4) to imitate the policy. As such, we compare the various IAs' influence on skill distillation and transfer in a controlled manner, as they all distil behaviours from the same policy. Stage 2) freeze the shared modules (πL, πH0) and train a newly instantiated πH on the transfer task. By freezing πL, we assume the low-level skills from the source domains suffice for the transfer domains. If this assumption does not hold, one could either fine-tune πL during transfer, which would require tackling the catastrophic forgetting of skills (Kirkpatrick et al., 2017), or train additional skills (by expanding the z dimensionality for discrete z spaces). We leave this as future work. For further details refer to Appendices A and D.
Figure 3 (panel titles): CorridorMaze Sparse 2 Corridor rollout — i) Prior & Hierarchy, ii) Hierarchy, iii) Flat Prior, iv) No Prior or Hierarchy." }, { "heading": "5 EXPERIMENTS AND RESULTS", "text": "Our experiments are designed to answer the following sequential task learning questions: (1) Can we benefit from both hierarchy and priors for effective transfer? (2) How important is the choice of IA between high-level policy and prior, and does it lead to an impactful expressivity-transferability tradeoff of behaviours? In practice, how detrimental is covariate shift for transfer? (3) How favourably does APES automate the choice of IA for effective transfer? Which IAs are discovered? (4) How important is hierarchy for transferring expressive skills? Is hierarchy necessary for effective transfer?" }, { "heading": "5.1 ENVIRONMENTS", "text": "We evaluate on two domains: one designed for controlled investigation of core agent capabilities and another, more practical, robotics domain (see Figure 2). Both exhibit modular behaviours whose discovery could yield transfer benefits. See Appendix C for full environmental setup details.
• CorridorMaze. The agent must traverse corridors in a given ordering. We collect 4k trajectories from a scripted policy traversing any random ordering of two corridors (psource(K)). For transfer (ptrans(K)), an inter- or extrapolated ordering must be traversed (number of sequential corridors = {2, 4}), allowing us to inspect the generalization ability of distinct priors under increasing levels of covariate shift. We also investigate the influence of covariate shift on effective transfer across reward sparsity levels: s-sparse (short for semi-sparse), rewarding per half-corridor completion; sparse, rewarding on task completion. 
Our transfer tasks are sparse 2 corr and s-sparse 4 corr.
• Stack. The agent must stack a subset of four blocks over a target pad in a given ordering. The blocks have distinct masses and only lighter blocks should be placed on heavier ones. Therefore, discovering temporal behaviours corresponding to sequential block stacking according to mass is beneficial. We collect 17.5k trajectories from a scripted policy, stacking any two blocks given this requirement (psource(K)). The extrapolated transfer task (ptrans(K)), called 4 blocks, requires all blocks to be stacked according to mass. Rewards are given per individual block stacked." }, { "heading": "5.2 HIERARCHY AND PRIORS FOR KNOWLEDGE TRANSFER", "text": "Our full setup, APES, leverages hierarchy and priors for skill transfer. The high-level prior is given access to the history (as is common for POMDPs) and learns sparse self-attention m. To investigate the importance of priors, we compare against APES-no prior, a baseline from Tirumala et al. (2019), with the full APES setup except without a learnt prior. Comparing transfer results in Table 2, we see APES' drastic gains, highlighting the importance of temporal high-level behavioural priors. To inspect the transfer importance of the traditional hierarchical setup (with πL(st, zt)), we compare APES-no prior against two baselines trained solely on the transfer task. RecSAC represents a history-dependent SAC (Haarnoja et al., 2018d) and Hier-RecSAC a hierarchical equivalent from Wulfmeier et al. (2019). APES-no prior yields only marginal benefits over these, showing the importance of both hierarchy and priors for transfer. See Table 6 for a detailed explanation of all baseline and ablation setups." }, { "heading": "5.3 INFORMATION ASYMMETRY FOR KNOWLEDGE TRANSFER", "text": "To investigate the importance of IA for transfer, we ablate over high-level priors with increasing levels of asymmetry (each input a subset of the previous): APES-{H20, H10, H1, S}, with S denoting an observation-dependent high-level prior and Hi a history-dependent one, conditioned on xt−i:t. Crucially, these ablations do not learn m, unlike APES, our full method. Ablating history lengths is a natural dimension for POMDPs, where discovering belief states by history conditioning is crucial (Thrun, 1999). APES-H1 and APES-S are hierarchical extensions of Bagatella et al. (2022) and Galashov et al. (2019) respectively, and APES-H20 (representing entire-history conditioning) is from Tirumala et al. (2020). APES-S is also an extension of Pertsch et al. (2020) with πL(st, zt) rather than πL(zt). Table 2 shows the heavy IA influence, with the trend that conditioning on too little or too much information limits performance. The level of influence depends on the reward sparsity level: the sparser the reward, the heavier the influence, as rewards guide exploration less. Regardless of whether the transfer domain is interpolated or extrapolated, IA plays an important role, suggesting that IA matters over a wide range of transfer tasks." 
}, { "heading": "5.4 THE EXPRESSIVITY-TRANSFERABILITY TRADE-OFF", "text": "To investigate whether Theorems 3.1 and 3.2 are the reason for the apparent expressivitytransferability trade-off seen in Table 2, we plot Figure 4 showing, on the vertical axis, the distillation loss DKL ( \u03c0H\n\u2225\u2225 \u03c0H0 ) at the end of training over psource(K), verses, on the horizontal axis, the increase in transfer performance (on ptrans(K)) when initialising \u03c0H as a task agnostic high-level prior pretrained over psource(K) (instead of randomly, as is default). We\nran these additional pre-trained \u03c0H experiments to investigate whether covariate shift is the culprit for the degradation in transfer performance when conditioning the high-level prior on additional information. By pre-training and transferring \u03c0H , we reduce initial trajectory shift and thus initial covariate shift between source and transfer domains (see Section 2.3). This is as no networks are reinitialized during transfer, which would usually lead to an initial shift in policy behaviour across domains. As per Theorem 3.1, we would expect a larger reduction in covariate shift for the priors that condition on more information. If covariate shift were the culprit for reduced transfer performance, we would expect a larger performance gain for those priors conditioned on more more information. Figure 4 demonstrates that is the case in general, regardless of whether the transfer domain is interor extra-polated. The trend is significantly less apparent for the semi-sparse domain, as here denser rewards guide learning significantly, alleviating covariate shift issues. We show results for APES{H20, H10, H1, S} as each input is a subset of the previous. These relations govern Theorems 3.1 and 3.2. In addition, Figure 4 and Table 3 show that conditioning on more information improves expressivity of the prior, reducing distillation losses, as per Theorem 3.2. These results together with\nTable 2, show the impactful expressivity-transferability trade-off of skills, controlled by IA (as per Theorems 3.1 and 3.2), where conditioning on too little or much information limits performance." }, { "heading": "5.5 APES: ATTENTIVE PRIORS FOR EXPRESSIVE AND TRANSFERABLE SKILLS", "text": "As seen in Table 2, APES, our full method, strongly outperforms (almost) every baseline and ablation on each transfer domain. Comparing APES with APES-H20, the most comparable approach with the prior fed the same input (xt\u221220:t), we observe drastic performance gains (of 581%, 183%, 323% for sparse 2\ncorr, s-sparse 4 corr and 4 blocks, respectively). These results demonstrate the importance of reducing covariate shift (by minimising information conditioning), whilst still supporting expressive behaviours (by exposing the prior to maximal information), only achieved by APES. Table 3 shows H(m), each \u03c0H0 mask\u2019s entropy (a proxy for the amount of information conditioning), vs DKL(\u03c0H ||\u03c0H0 ) (distillation loss), reporting max/min scores across all experiments cycles. We ran 4 random seeds but omit standard deviations as they were negligible. APES not only attends to minimal information (H(m)), but for that given level achieves a far lower distillation loss than comparable methods. This demonstrates that APES learnt to pay attention only where necessary. We inspect APES\u2019 attention masks m in Figure 5. APES primarily pays attention to the recent history of actions. 
This is interesting, as it is in line with recent work, performed concurrently with our research, by Bagatella et al. (2022), demonstrating the effectiveness of state-free priors, conditioned on a history of actions, for effective generalization. However, unlike Bagatella et al. (2022), who need exhaustive history-length sweeps for effective transfer, our approach learns the length in an automated, data-driven, domain-dependent manner. As such, our learnt history lengths are distinct for CorridorMaze and Stack. Additionally, for CorridorMaze, some attention is paid to the most recent observation st, which is unsurprising as this information is required to infer how to optimally act whilst at the end of a corridor. Refer to Appendix E.1 for a more in-depth analysis of APES' attention maps.
5.6 HIERARCHY FOR EXPRESSIVITY
Table 4: Hierarchy Ablation.
Transfer task | sparse 2 corr | s-sparse 4 corr
APES-H1 | 0.80 ± 0.03 | 6.37 ± 0.17
APES-H1-KL-a | 0.76 ± 0.09 | 5.59 ± 0.17
APES-H1-flat | 0.05 ± 0.035 | 4.52 ± 0.53
Expert | 1 | 8
To further investigate whether hierarchy is necessary for effective transfer, we compare APES-H1 with APES-H1-flat, which has the same setup except with a flat, non-hierarchical prior (conditioned on xt−1:t). With a flat prior, KL-regularization must occur over the raw, rather than latent, action space. Therefore, to adequately compare, we additionally investigate a hierarchical setup where regularization occurs only over the action space, APES-H1-KL-a. Transfer results for CorridorMaze are shown in Table 4. Comparing APES-H1-KL-a and APES-H1-flat, we see the benefits that a hierarchical prior brings, although these are less significant for less sparse domains. Upon inspection (see Section 5.7), APES-H1-flat is unable to solve the task due to its inability to capture multi-modal behaviours (at corridor intersections). By contrasting APES-H1 with APES-H1-KL-a, we see minimal benefits from regularizing against latent actions, suggesting that with alternate methods for multi-modality, hierarchy may not be necessary." }, { "heading": "5.7 EXPLORATION ANALYSIS", "text": "To gain a further understanding of the effects of hierarchy and priors, we visualise policy rollouts early on during transfer (5e3 steps). For CorridorMaze (Figure 3), with hierarchy and priors, APES explores randomly at the corridor level. Hierarchy alone, unable to express preference over high-level skills, leads to temporally uncorrelated behaviours, unable to explore at the corridor level. The flat prior, unable to represent multi-modal behaviours, leads to suboptimal exploration at the intersection of corridors, with the agent often remaining static. Without priors or hierarchy, exploration is further hindered, rarely traversing corridor depths. For Stack (Figure 6), the full setup explores at the individual block-stacking level, alternating block orderings, but primarily stacking lighter upon heavier blocks. Hierarchy alone explores undirectedly without knowledge of which blocks to operate on, leading to temporally uncorrelated, high-frequency switching of low-level manipulation skills.
Figure 6: 4 Blocks (Stack), panels: i) Prior & Hierarchy; ii) Hierarchy. Rollouts with end-effector and cube movement depicted by dotted and dashed lines, colour-coded per rollout. Each rollout's end position is represented by a cross or cube respectively. Cube numbers correspond to their inherent ordering. See text for detailed analysis." 
}, { "heading": "6 RELATED WORK", "text": "Hierarchical frameworks have a long history (Sutton et al., 1999). The option RL literature tackle the semi-MDP setup and explore the benefits that hierarchy and temporal abstraction bring (Nachum et al., 2018; Wulfmeier et al., 2020; Igl et al., 2019; Salter et al., 2022; Kamat & Precup, 2020; Harb et al., 2018; Riemer et al., 2018). Approaches like Wulfmeier et al. (2019; 2020) use hierarchy to enforce knowledge transfer through shared hierarchical modules. However, for lifelong learning (Parisi et al., 2019; Khetarpal et al., 2020), where number of skills increase over time, it is unclear how well these approaches will fare, without priors to narrow skill exploration.\nPriors have been used in various fields. In the context of offline-RL, Siegel et al. (2020); Wu et al. (2019) primarily use priors to tackle value overestimation (Levine et al., 2020). In the variational literature, priors have been used to guide latent-space learning (Hausman et al., 2018; Igl et al., 2019; Pertsch et al., 2020; Merel et al., 2018). Hausman et al. (2018) learn episodic skills, limiting their ability to transfer. Igl et al. (2019) learn options and priors, but are on-policy and therefore suffers from sample inefficiency, making the application to robotic domains challenging. In the multi-task literature, priors have been used to guide exploration (Pertsch et al., 2020; Galashov et al., 2019; Siegel et al., 2020; Pertsch et al., 2021; Teh et al., 2017), yet without hierarchy expressivity in learnt behaviours is limited. In the sequential transfer literature, priors have also been used to bias exploration (Pertsch et al., 2020; Ajay et al., 2020; Singh et al., 2020; Bagatella et al., 2022; Goyal et al., 2019; Rao et al., 2021; Liu et al., 2022), yet either do not leverage hierarchy (Pertsch et al., 2020) or condition on minimal information (Ajay et al., 2020; Singh et al., 2020; Rao et al., 2021), limiting expressivity, all choosing asymmetry on intuition. Unlike APES, methods like Singh et al. (2020); Bagatella et al. (2022) leverage flow-based transformations, instead of hierarchy, to achieve multi-modality. Tirumala et al. (2020) exploit hierarchical priors to transfer entire history-dependent high-level skills. However, their setup was in a diverse high-data regime (1e9 samples for pre-training skills), where covariate shift is less predominant. Unlike aforementioned works, we consider the POMDP setting, arguably more suited for real-world robotics, and learn what information to condition our priors on based on our expressivity-transferability theorems.\nWhilst most previous works rely on IA, choice is primarily motivated by intuition. For example, Igl et al. (2019); Wulfmeier et al. (2019; 2020) only employ task or goal asymmetry and Tirumala et al. (2019); Merel et al. (2020); Galashov et al. (2019) use exteroceptive asymmetry. Salter et al. (2020) investigate a way of learning asymmetry for sim2real domain adaptation, but condition m on observation and state. We consider exploring this direction as future work. We provide a principled investigation on the role of IA for transfer, proposing a method for automating the choice." }, { "heading": "7 CONCLUSION", "text": "We employ hierarchical KL-regularized RL to efficiently transfer skills across sequential tasks, showing the effectiveness of combining hierarchy and priors. We theoretically and empirically show the crucial expressivity-transferability trade-off, controlled by IA choice, of skills. 
Our experiments validate the importance of this trade-off for both interpolated and extrapolated domains. Given this insight, we introduce APES, \u2018Attentive Priors for Expressive and Transferable Skills\u2019 automating the IA choice for the high-level prior, by learning it in a data driven, domain dependent, manner. This is achieved by feeding the entire history to the prior, capturing expressive behaviours, whilst encouraging its information gating function\u2019s attention mask to be low entropic, minimising covariate shift and improving transferability. Experiments over several domains, of varying sparsity levels, demonstrate APES\u2019 consistent superior performance over existing methods, whilst by-passing arduous IA sweeps. Ablations demonstrate the importance of hierarchy for prior expressivity, by supporting multi-modal behaviours. Future work will focus on additionally learning the IGFs between hierarchical levels." }, { "heading": "A METHOD", "text": "" }, { "heading": "A.1 TRAINING REGIME", "text": "In this section, we algorithmically describe our training setup. We relate each training phase to the principle equations in the main paper, but note that Appendices A.2 and A.3 outline a more detailed version of these equations that were actually used. We note that during BC, we apply DAGGER (Ross et al., 2011), as per Algorithm 2, improving learning rates. For further details refer to Appendix D.\nAlgorithm 1 APES training regime 1: # Full training and transfer regime. For BC, gradients\nare prevented from flowing from \u03c00 to \u03c0. In practice \u03c00 = {\u03c0i}i\u2208{0,...,N}, multiple trained priors. During transfer, \u03c0H , is reinitialized. 2: 3: # Behavioral Cloning 4: Initialize: policy \u03c0, prior \u03c00, replay Rbc, DAGGER\nrate r, environment env 5: for Number of BC training steps do 6: Rbc, env\u2190 collect(\u03c0, Rbc, env, True, r) 7: \u03c0, \u03c00\u2190 BC update(\u03c0, \u03c00, Rbc) # Eq. 4 8: end for 9: # Reinforcement Learning\n10: Initialize: high level policy \u03c0H , critics Qk\u2208{1,2}, replay Rrl, transfer environment envt 11: for Number of RL training steps do 12: Rrl, envt\u2190 collect(\u03c0, Rrl, envt) 13: \u03c0H \u2190 RL policy update(\u03c0, \u03c00, Rrl) # Eq. 3 14: Qk \u2190 RL critic update(Qk, \u03c0, Rrl) # Eq. 9 15: end for\nAlgorithm 2 collect 1: # Collects experience from either \u03c0i\nor \u03c0e, applying DAGGER at a given rate if instructed, and updates Rj , env accordingly. 2: function COLLECT(\u03c0i, Rj , env, dag=False, r=1) 3: x\u2190 env.observation() 4: \u03c0e \u2190 env.expert() 5: ai \u2190 \u03c0i(x) 6: ae \u2190 \u03c0e(x) 7: a\u2190 Bernoulli([ai, ae], [r, 1 - r]) 8: x\u2032, rk, env \u2190 env.step(a) 9: if dag then\n10: af \u2190 ae 11: else 12: af \u2190 ai 13: end if 14: Rj \u2190 Rj .update(x,af ,rk,x\u2032) 15: return Rj , env 16: end function\nA.2 VARIATIONAL BEHAVIORAL CLONING AND REINFORCEMENT LEARNING\nIn the following section, we omit APES\u2019 specific information gating function objective IGF (xk) for simplicity and generality. Nevertheless, it is trivial to extend the following derivations to APES.\nBehavioral Cloning (BC) and KL-Regularized RL, when considered from the variational-inference perspective, share many similarities. These similarities become even more apparent when dealing with hierarchical models. 
A particularly unifying choice of objective functions for BC and RL that fit with off-policy, generative, hierarchical RL: desirable for sample efficiency, are:\nObc(\u03c0, {\u03c0i}i\u2208I) = \u2212 \u2211 i\u2208I DKL (\u03c0(\u03c4) \u2225 \u03c0i(\u03c4)), Orl(\u03c0, {\u03c0i}i\u2282I) = E\u03c0(\u03c4)[R(\u03c4)]+Obc(\u03c0, {\u03c0i}i\u2282I)\n(5)\nObc, corresponds to the KL-divergence between trajectories from the policy, \u03c0, and various priors, \u03c0i. For BC, i \u2208 {0, u, e}, denote the learnt, uniform, and expert priors. For BC, in practice, we train multiple priors in parallel: \u03c00 = {\u03c0i}i\u2208{0,...,N}. We leave this notation out for the remainder of this section for simplicity. When considering only the expert prior, this is the reverse KL-divergence, opposite to what is usually evaluated in the literature, (Pertsch et al., 2020). Orl, refers to a lower bound on the expected optimality of each prior log p\u03c0i(O = 1); O denoting the event of achieving maximum return (return referred to as R(.)); refer to (Abdolmaleki et al., 2018), appendix B.4.3 for proof, further explanation, and necessary conditions. During transfer using RL, we do not have access to the expert or its demonstrations (i \u2282 I := i \u2208 {0, u}). For hierarchical policies, the KL terms are not easily evaluable. DKL (\u03c0(\u03c4) \u2225 \u03c0i(\u03c4)) \u2264\u2211\nt E\u03c0(\u03c4) [ DKL ( \u03c0H(zt|xk) \u2225\u2225 \u03c0Hi (zt|xk))+ E\u03c0H(zt|xk) [DKL (\u03c0L(at|xk, zt) \u2225\u2225 \u03c0Li (at|xk, zt))]]\n2 (Tirumala et al., 2019), is a commonly chosen upper bound. If sharing modules, e.g. \u03c0Li = \u03c0 L, or using non-hierarchical networks, this bound can be simplified (removing the second or first terms respectively). To make both Eq. (5) amendable to off-policy training (experience from {\u03c0e, \u03c0b}, for BC/RL respectively; \u03c0b representing behavioral policy), we introduce importance weighting (IW), removing off-policy bias at the expense of higher variance. Combining all the above with additional individual term weighting hyperparameters, {\u03b2zi , \u03b2ai }, we attain:\nD\u0303 q(\u03c4) KL (\u03c0(\u03c4)||\u03c0i(\u03c4)) := Eq(\u03c4) [\u2211 t \u03bdq[t] \u00b7 ( \u03b2zi \u00b7 Ci,h(zt|xk) + \u03b2ai \u00b7 E\u03c0H(zt|xk) [Ci,l(at|xk, zt)] )]\n\u03b6ni = E\u03c0H(zi|xi,k)\n[ \u03c0L (ai|xi, zi, k) ] n(ai|xi, k) , \u03bdn = [ \u03b6n1 , \u03b6 n 1 \u03b6 n 2 , . . . , \u03c4t\u220f i=1 \u03b6ni ] , C\u00b5,\u03f5(y) = log ( \u03c0r\u03f5 (y) \u03c0r\u00b5,\u03f5(y) ) \u2212DKL(\u03c0(\u03c4)||\u03c0e(\u03c4)) \u2265 \u2212\n\u2211 i\u2208{0,u,e} D\u0303 \u03c0e(\u03c4) KL (\u03c0(\u03c4)||\u03c0i(\u03c4)) (6)\nEp(K), \u03c00(\u03c4) [log(O = 1|\u03c4, k)] \u2265 E\u03c0b(\u03c4) [\u2211 t \u03bd\u03c0b [t] \u00b7 rk(xt,at) ] \u2212 \u2211 i\u2208{0,u} D\u0303 \u03c0b(\u03c4) KL (\u03c0(\u03c4)||\u03c0i(\u03c4)) (7)\nWhere D\u0303q(\u03c4)KL (\u03c0(\u03c4)||\u03c0i(\u03c4)) (for {\u03b2zi , \u03b2ai } = 1) is an unbiased estimate for the aforementioned upper bound, using q\u2019s experience. \u03b6ni is the IW for timestep i, between \u03c0 and arbitrary policy n. \u03bd\nn[t] is the tth element of \u03bdn; the cumulative IW product at timestep t. Equations 6, 7 are the BC/RL lower bounds used for policy gradients. See Appendix B.4 for a derivation, and necessary conditions, of these bounds. For BC, this bounds the KL-divergence between hierarchical and expert policies, \u03c0, \u03c0e. 
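To illustrate the importance weights ζ and ν defined in Equation (6), the sketch below (ours; shapes and names are illustrative) computes the per-step ratio by explicitly marginalising a categorical latent and accumulates the running product, optionally clipping each factor to [0, 1] as done for the Retrace targets in Appendix A.3.

import numpy as np

def per_step_iw(pi_h_probs, pi_l_probs_per_z, behaviour_prob):
    """zeta_i = E_{z ~ pi^H}[ pi^L(a_i | x_i, z) ] / b(a_i | x_i), for a categorical z."""
    marginal = np.sum(pi_h_probs * pi_l_probs_per_z)   # explicit marginalisation over z
    return marginal / behaviour_prob

def cumulative_iw(zetas, clip=None):
    """nu[t] = prod_{i <= t} zeta_i, with optional per-step clipping (Retrace-style)."""
    nu, running = [], 1.0
    for z in zetas:
        running *= min(z, clip) if clip is not None else z
        nu.append(running)
    return np.array(nu)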
For RL, this bounds the expected optimality, for the learnt prior policy, \u03c00. Intuitively, maximising this particular bound, maximizes return for both policy and prior, whilst minimizing the disparity between them. Regularising against an uninformative prior, \u03c0u, encourages highly-entropic policies, further aiding at exploration and stabilising learning (Igl et al., 2019).\nIn RL, IWs are commonly ignored (Lillicrap et al., 2015; Abdolmaleki et al., 2018; Haarnoja et al., 2018b), thereby considering each sample equally important. This is also convenient for BC, as IWs require the expert probability distribution: not usually provided. We did not observe benefits of using them and therefore ignore them too. We employ module sharing (\u03c0Li = \u03c0\nL; unless stated otherwize), and freeze certain modules during distinct phases, and thus never employ more than 2 hyperparameters, \u03b2, at any given time, simplifying the hyperparameter optimisation. These weights balance an exploration/exploitation trade-off. We use a categorical latent space, explicitly marginalising over, rather than using sampling approximations (Jang et al., 2016). For BC, we train for 1 epoch (referring to training in the expectation once over each sample in the replay buffer)." }, { "heading": "A.3 CRITIC LEARNING", "text": "The lower bound presented in Eq. (7) is non-differentiable due to rewards being sampled from the environment. Therefore, as is common in the RL literature (Mnih et al., 2015; Lillicrap et al., 2015), we approximate the return of policy \u03c0 with a critic, Q. To be sample efficient, we train in an off-policy manner with TD-learning (Sutton, 1988) using the Retrace algorithm (Munos et al., 2016) to provide a low-variance, low-bias, policy evaluation operator:\nQrett := Q \u2032 (xt,at, k) + \u221e\u2211 j=t \u03f5tj [ rk(xj ,aj) + E \u03c0H(z|xj+1,k),\n\u03c0L(a\u2032|xj+1,z,k)\n[ Q \u2032 (xj+1,a \u2032, k) ] \u2212Q \u2032 (xj ,aj , k) ] (8)\nL(Q) = Ep(K), \u03c0b(\u03c4)\n[ (Q(xt,at, k)\u2212 argmin\nQrett\n(Qrett )) 2 ] \u03f5tj = \u03b3 j\u2212t j\u220f\ni=t+1\n\u03b6bi (9)\n2For proof refer to Appendix B.3\nWhere Qrett represents the policy return evaluated via Retrace. Q \u2032\nis the target Q-network, commonly used to stabilize critic learning (Mnih et al., 2015), and is updated periodically with the current Q values. IWs are not ignored here, and are clipped between [0, 1] to prevent exploding gradients, (Munos et al., 2016). To further reduce bias and overestimates of our target, Qrett , we apply the double Q-learning trick, (Hasselt, 2010), and concurrently learn two target Q-networks, Q \u2032 . Our critic is trained to minimize the loss in Eq. (9), which regularizes the critic against the minimum of the two targets produced by both target networks." }, { "heading": "B THEORY AND DERIVATIONS", "text": "In this section we provide proofs for the theory introduced in the main paper and in Appendix A." }, { "heading": "B.1 THEOREM 1.", "text": "Theorem 1. The more random variables a network depends on, the larger the covariate shift (input distributional shift, here represented by KL-divergence) encountered across sequential tasks. 
That is, for distributions p, q\nDKL (p(b) \u2225 q(b)) \u2265 DKL (p(c) \u2225 q(c)) with b = (b0, b1, ..., bn) and c \u2282 b.\n(10)" }, { "heading": "Proof", "text": "DKL (p(b) \u2225 q(b)) = Ep(b) [ log ( p(b)\nq(b) )] = Ep(d|c)\u00b7p(c) [ log ( p(d|c) \u00b7 p(c) q(d|c) \u00b7 q(c) )] with d \u2208 b\u2295 c\n= Ep(c) [ Ep(d|c) [1] \u00b7 log ( p(c)\nq(c)\n)] + Ep(c) [ Ep(d|c) [ log ( p(d|c) q(d|c) )]] = DKL (p(c) \u2225 q(c)) + Ep(c) [DKL (p(d|c) \u2225 q(d|c))] \u2265 DKL (p(c) \u2225 q(c)) given Ep(c) [DKL (p(d|c) \u2225 q(d|c))] \u2265 0 (11)" }, { "heading": "B.2 THEOREM 2.", "text": "Theorem 2. The more random variables a network depends on, the greater its ability to distil knowledge in the expectation (output distributional shift between network and target distribution, here represented by the expected KL-divergence). That is, for target distribution p and network q with outputs a and possible inputs b, c, d, such that b = (b0, b1, ..., bn) and d \u2282 c \u2282 b\nEq(e|d) [DKL (p(a|b) \u2225 q(a|c))] \u2264 DKL (p(a|b) \u2225 q(a|d)) with e \u2208 d\u2295 c (12)" }, { "heading": "Proof", "text": "DKL (p(a|b) \u2225 q(a|d)) = Ep(a|b) [ log ( p(a|b) q(a|d) )] = Ep(a|b) [ log p(a|b)\u2212 logEq(e|d) [q(a|c)] ] with e \u2208 d\u2295 c\n\u2265 Ep(a|b)\u00b7q(e|d) [ log ( p(a|b) q(a|c) )] given Jensen\u2019s Inequality\n= Eq(e|d) [DKL (p(a|b) \u2225 q(a|c))]\n(13)" }, { "heading": "B.3 HIERARCHICAL KL-DIVERGENCE UPPER BOUND", "text": "All proofs in this section ignore multi-task setup for simplicity. Extending to this scenario is trivial." }, { "heading": "Upper Bound", "text": "DKL (\u03c0(\u03c4) \u2225 \u03c0i(\u03c4)) \u2264 \u2211 t E\u03c0(\u03c4)[DKL ( \u03c0H(zt|xt) \u2225\u2225 \u03c0Hi (zt|xt)) + E\u03c0H(zt|xt) [ DKL ( \u03c0L(at|xt, zt)\n\u2225\u2225 \u03c0Li (at|xt, zt))]] (14)" }, { "heading": "Proof", "text": "DKL (\u03c0(\u03c4) \u2225 \u03c0i(\u03c4)) = E\u03c0(\u03c4) [ log ( \u03c0(\u03c4)\n\u03c0i(\u03c4)) )] = E\u03c0(\u03c4) [ log ( p(s0) \u00b7 \u220f t p(st+1|xt,at) \u00b7 \u03c0(at|xt)\np(s0) \u00b7 \u220f t p(st+1|xt,at) \u00b7 \u03c0i(at|xt) )] = E\u03c0(\u03c4) [ log\n(\u220f t \u03c0(at|xt) \u03c0i(at|xt) )] =\n\u2211 t E\u03c0(\u03c4) [DKL (\u03c0(at|xt) \u2225 \u03c0i(at|xt))]\n\u2264 \u2211 t E\u03c0(\u03c4)[DKL (\u03c0(at|xt) \u2225 \u03c0i(at|xt))+\nE\u03c0(at|xt) [DKL (\u03c0(zt|xt,at) \u2225 \u03c0(zt|xt,at))]] = \u2211 t E\u03c0(\u03c4) [ E\u03c0(at,zt|xt) [ log ( \u03c0(at|xt) \u03c0i(at|xt) ) + log ( \u03c0(zt|xt,at) \u03c0i(zt|xt,at) )]] = E\u03c0(\u03c4) [DKL (\u03c0(at, zt|xt) \u2225 \u03c0i(at, zt|xt))]\n= \u2211 t E\u03c0(\u03c4)[DKL ( \u03c0H(zt|xt) \u2225\u2225 \u03c0Hi (zt|xt)) + E\u03c0H(zt|xt) [ DKL ( \u03c0L(at|xt, zt) \u2225\u2225 \u03c0Li (at|xt, zt))]]\n(15)" }, { "heading": "B.4 POLICY GRADIENT LOWER BOUNDS", "text": "B.4.1 IMPORTANCE WEIGHTS DERIVATION\nD\u0303 q(\u03c4) KL (\u03c0(\u03c4)||\u03c0i(\u03c4)) = ub(DKL (\u03c0(\u03c4) \u2225 \u03c0i(\u03c4))) (16)\nFor \u03b2zi , \u03b2 a i = 1, where ub(DKL (\u03c0(\u03c4) \u2225 \u03c0i(\u03c4))) corresponds to the hierarchical upper bound introduced in Appendix A.2." 
}, { "heading": "Proof", "text": "ub(DKL (\u03c0(\u03c4) \u2225 \u03c0i(\u03c4))) = \u2211 t E\u03c0(\u03c4)[DKL ( \u03c0H(zt|xt) \u2225\u2225 \u03c0Hi (zt|xt)) + E\u03c0H(zt|xt) [ DKL ( \u03c0L(at|xt, zt)\n\u2225\u2225 \u03c0Li (at|xt, zt))]] =\n\u2211 t E q(\u03c4)\u00b7\u03c0(\u03c4) q(\u03c4) [DKL ( \u03c0H(zt|xt) \u2225\u2225 \u03c0Hi (zt|xt)) + E\u03c0H(zt|xt) [ DKL ( \u03c0L(at|xt, zt)\n\u2225\u2225 \u03c0Li (at|xt, zt))]] =\n\u2211 t E q(\u03c4)\u00b7 \u220ft i=0 \u03c0(ai|xi) q(ai|xi) [DKL ( \u03c0H(zt|xt) \u2225\u2225 \u03c0Hi (zt|xt)) + E\u03c0H(zt|xt) [ DKL ( \u03c0L(at|xt, zt)\n\u2225\u2225 \u03c0Li (at|xt, zt))]] = D\u0303\nq(\u03c4) KL (\u03c0(\u03c4)||\u03c0i(\u03c4))\n(17)\nB.4.2 BEHAVIORAL CLONING UPPER BOUND \u2212DKL(\u03c0(\u03c4)||\u03c0e(\u03c4)) \u2265 \u2212 \u2211\ni\u2208{0,u,e}\nD\u0303 \u03c0e(\u03c4) KL (\u03c0(\u03c4)||\u03c0i(\u03c4))\nfor \u03b2zi , \u03b2 a i \u2265 1\n(18)" }, { "heading": "Proof", "text": "DKL (\u03c0(\u03c4) \u2225 \u03c0e(\u03c4)) \u2264 \u2211 i\u2208{0,u,e} DKL (\u03c0(\u03c4) \u2225 \u03c0i(\u03c4))\n\u2264 \u2211\ni\u2208{0,u,e}\nub(DKL (\u03c0(\u03c4) \u2225 \u03c0i(\u03c4)))\n= \u2211\ni\u2208{0,u,e}\nD\u0303 q(\u03c4) KL (\u03c0(\u03c4)||\u03c0i(\u03c4)) for \u03b2 z i , \u03b2 a i = 1\n\u2264 \u2211\ni\u2208{0,u,e}\nD\u0303 q(\u03c4) KL (\u03c0(\u03c4)||\u03c0i(\u03c4)) for \u03b2 z i , \u03b2 a i \u2265 1\n(19)\nThe last line holds true as each weighted term in D\u0303q(\u03c4)KL (\u03c0(\u03c4)||\u03c0i(\u03c4)) corresponds to KL-divergences which are positive." }, { "heading": "B.4.3 REINFORCEMENT LEARNING UPPER BOUND", "text": "Ep(K), \u03c00(\u03c4) [log(O = 1|\u03c4, k)] \u2265 E\u03c0b(\u03c4) [\u2211 t \u03bd\u03c0b [t] \u00b7 rk(xt,at) ] \u2212 \u2211 i\u2208{0,u} D\u0303 \u03c0b(\u03c4) KL (\u03c0(\u03c4)||\u03c0i(\u03c4))\nfor \u03b2zi , \u03b2 a i \u2265 1 and rk < 0\n(20)" }, { "heading": "Proof", "text": "Ep(K), \u03c00(\u03c4) [log(O = 1|\u03c4, k)] \u2265 E\u03c0b(\u03c4) [\u2211 t \u03bd\u03c0b [t] \u00b7 rk(xt,at) ] \u2212DKL (\u03c0(\u03c4) \u2225 \u03c0i(\u03c4))\n\u2265 E\u03c0b(\u03c4) [\u2211 t \u03bd\u03c0b [t] \u00b7 rk(xt,at) ] \u2212 D\u0303\u03c0b(\u03c4)KL (\u03c0(\u03c4)||\u03c0i(\u03c4)), for \u03b2 z i , \u03b2 a i = 1\n\u2265 E\u03c0b(\u03c4) [\u2211 t \u03bd\u03c0b [t] \u00b7 rk(xt,at) ] \u2212 D\u0303\u03c0b(\u03c4)KL (\u03c0(\u03c4)||\u03c0i(\u03c4)), for \u03b2 z i , \u03b2 a i \u2265 1\n\u2265 E\u03c0b(\u03c4) [\u2211 t \u03bd\u03c0b [t] \u00b7 rk(xt,at) ] \u2212 \u2211 i\u2208{0,u} D\u0303 \u03c0b(\u03c4) KL (\u03c0(\u03c4)||\u03c0i(\u03c4)),\nfor \u03b2zi , \u03b2 a i \u2265 1\n(21)\nFor line 1 proof see (Abdolmaleki et al., 2018). The final 2 lines hold due to positive KL-divergences." }, { "heading": "C ENVIRONMENTS", "text": "We continue by covering each environment setup in detail." }, { "heading": "C.1 CORRIDORMAZE", "text": "Intuitively, the agent starts at the intersection of corridors, at the origin, and must traverse corridors, aligned with each dimension of the state space, in a given ordering. This requires the agent to reach the end of the corridor (which we call half-corridor cycle), and return back to the origin, before the corridor is considered complete.\ns \u2208 {0, l}c, p(s0) = 0c, k = one-hot task encoding, a \u2208 [0, 1], rsemi-sparsek (xt,at) = 1 if agent has correctly completed the entire or half-corridor cycle else 0, rsparsek (xt,at) = 1 if task complete else 0. Task is considered complete when a desired ordering of corridors have been traversed. 
c = 5 represents the number of corridors in our experiments. l = 6, the lengths of each corridor. States transition according to deterministic transition function sjtt+1 = f(s jt t ,at). jt corresponds to the index of the current corridor that the agent is in (i.e. jt = 0 if the agent is in corridor 0 at timestep t). si corresponds to the ith dimension of the state. States transition incrementally or decrementally down a corridor, and given state dimension si, if actions fall into corresponding transition action bins \u03c8inc, \u03c8dec. We define the transition function as follows:\nf(sjtt ,at) = s jt t + 1, if \u03c8 w inc(at, jt).\nsjtt \u2212 1, elif \u03c8wdec(at, jt). 0, otherwise.\n(22)\n\u03c8winc(at, j) = bool(at in [j/c, (j + 0.5 \u00b7w)/c]), \u03c8wdec(at, j) = bool(at in [j/c, (2 \u00b7 j \u2212 0.5 \u00b7w)/c]). The smaller the w parameter, the narrower the distribution of actions that lead to transitions. As such, w, together with rk controls the exploration difficulty of task k. We set w = 0.9. We constrain the state transitions to not transition outside of the corridor boundaries. Furthermore, if the agent is at the origin, s = 0c (at the intersection of corridors), then the transition function is ran for all values of jt, thereby allowing the agent to transition into any corridor." }, { "heading": "C.2 STACK", "text": "This domain is adapted from the well known gym robotics FetchPickAndPlace-v0 environment (Plappert et al., 2018). The following modifications were made: 1) 3 additional blocks were introduced, with different colours, and a goal pad, 2) object spawn locations were not randomized and were instantiated equidistantly around the goal pad, see Fig. 2, 3) the number of substeps was increased from 20 to 60, as this reduced episodic lengths, 4) a transparent hollow rectangular tube was placed around the goal pad, to simplify the stacking task and prevent stacked objects from collapsing due to structural instabilities, 5) the arm was always spawned over the goal pad, see figure Fig. 2, 6) the state space corresponded to gripper position and grasp state, as well as the object positions and relative positions with respect to the arm: velocities were omitted as access to such information may not be realistic for real robotic systems. k = one-hot task encoding, rsparsek (xt,at) = 1 if correct object has been placed on stack in correct ordering else 0." }, { "heading": "D EXPERIMENTAL SETUP", "text": "We provide the reader with the experimental setup for all training regimes and environments below. We build off the softlearning code base (Haarnoja et al., 2018b). Algorithmic details not mentioned in the following sections are omitted as are kept constant with the original code base. For all experiments, we sample batch size number of entire episodes of experience during training." }, { "heading": "D.1 MODEL ARCHITECTURES", "text": "We continue by outlining the shared model architectures across domains and experiments. Each policy network (e.g. \u03c0H , \u03c0L, \u03c0H0 , \u03c0 L 0 ) is comprized of a feedforward module outlined in Table 5. The softlearning repository that we build off (Haarnoja et al., 2018b), applies tanh activation over network outputs, where appropriate, to match the predefined outpute ranges of any given module. The critic is also comprized of the same feedforward module, but is not hierarchical. To handle historical inputs, we tile the inputs and flatten, to become one large input 1-dimensional array. 
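A minimal sketch of this input construction (ours; the array layout is an assumption), including the left zero-padding described next, which keeps the flattened input at a fixed size:

import numpy as np

def build_history_input(observations, actions, history_len):
    """Tiles the most recent history_len + 1 observations and history_len actions into
    one flat 1-D array, left-padding with zeros early in an episode."""
    dim_obs, dim_act = observations.shape[-1], actions.shape[-1]
    obs = observations[-(history_len + 1):]
    act = actions[-history_len:] if history_len > 0 else actions[:0]
    pad_obs = np.zeros((history_len + 1 - len(obs), dim_obs))
    pad_act = np.zeros((history_len - len(act), dim_act))
    return np.concatenate([pad_obs.ravel(), obs.ravel(),
                           pad_act.ravel(), act.ravel()])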
" }, { "heading": "C.2 STACK", "text": "This domain is adapted from the well-known gym robotics FetchPickAndPlace-v0 environment (Plappert et al., 2018). The following modifications were made: 1) three additional blocks with different colours, and a goal pad, were introduced; 2) object spawn locations were not randomized and objects were instantiated equidistantly around the goal pad, see Fig. 2; 3) the number of substeps was increased from 20 to 60, as this reduced episodic lengths; 4) a transparent hollow rectangular tube was placed around the goal pad, to simplify the stacking task and prevent stacked objects from collapsing due to structural instabilities; 5) the arm was always spawned over the goal pad, see Fig. 2; 6) the state space corresponded to the gripper position and grasp state, as well as the object positions and their positions relative to the arm; velocities were omitted as access to such information may not be realistic for real robotic systems. $k$ = one-hot task encoding; $r^{\text{sparse}}_k(x_t, a_t) = 1$ if the correct object has been placed on the stack in the correct ordering, else $0$." }, { "heading": "D EXPERIMENTAL SETUP", "text": "We provide the experimental setup for all training regimes and environments below. We build off the softlearning code base (Haarnoja et al., 2018b). Algorithmic details not mentioned in the following sections are omitted, as they are kept identical to the original code base. For all experiments, we sample a batch-size number of entire episodes of experience during training." }, { "heading": "D.1 MODEL ARCHITECTURES", "text": "We continue by outlining the model architectures shared across domains and experiments. Each policy network (e.g. $\pi^H$, $\pi^L$, $\pi^H_0$, $\pi^L_0$) is comprised of the feedforward module outlined in Table 5. The softlearning repository that we build off (Haarnoja et al., 2018b) applies a tanh activation over network outputs, where appropriate, to match the predefined output ranges of any given module. The critic is comprised of the same feedforward module, but is not hierarchical. To handle historical inputs, we tile the inputs and flatten them into one large 1-dimensional input array, left-padding with zeros so that the input always remains of fixed size. For $\pi^H$ and $\pi^H_0$ we use a categorical latent space of size 10. We found this dimensionality sufficed for expressing the diverse behaviours exhibited in our domains. Table 6 describes the setup for all the experiments in the main paper, including the inputs to each module, the level at which KL-regularization occurs ($z$ or $a$), which modules are shared (e.g. $\pi^L$ and $\pi^L_0$), and which modules are reused across sequential tasks. For the experiments designed to study covariate shift in Table 8, we additionally reuse $\pi^H$ (or $\pi^L$ for RecSAC) across domains, whose input is $x_t$. For all the above experiments, any reused modules are not given access to task-dependent information, namely the task-id ($k$) and exteroceptive information (cube locations for the Stack domain). This choice ensures reused modules generalize across task instances." }, { "heading": "D.2 BEHAVIOURAL CLONING", "text": "For the BC setup, we use a deterministic, noisy expert controller to generate experience to learn from. We apply DAGGER (Ross et al., 2011) during data collection and training of the policy $\pi$, as we found this helped achieve a high success rate on the BC tasks. During data collection, our DAGGER setup intermittently, at a predefined rate, samples an action from $\pi$ instead of $\pi_e$, but still saves the BC target action $a_t$ as the one the expert would have taken for $x_k$. This setup helps mitigate covariate shift between policy and expert during training. Noise levels were chosen to be small enough that the expert still succeeded at the task. We trained our policies for one epoch (once over each collected data sample in expectation). It may be possible to be more sample efficient by increasing the ratio of gradient steps to data collection, but we did not explore this direction. The interplay we use between data collection and training over the collected experience is akin to the RL paradigm. We build off the softlearning code base (Haarnoja et al., 2018b), so please refer to it for details regarding this interplay.

Refer to Table 7a for BC algorithmic details. It is important to note that, although we report five $\beta$ hyper-parameter values, there are only two degrees of freedom. As we stop gradients flowing from $\pi_0$ to $\pi$, the choice of $\beta_0$ is unimportant (as long as it is not 0), as it does not influence the interplay between gradients from the individual loss terms. We set these values to 1. $\beta^a_e$'s absolute value is also unimportant; only its value relative to $\beta^z_u$ and $\beta^a_u$ matters. We also set $\beta^a_e$ to 1. For the remaining two hyper-parameters, $\beta^z_u$ and $\beta^a_u$, we performed a hyper-parameter sweep over three orders of magnitude, with three values across each dimension, to obtain the reported optimal values. In practice, $\pi_0 = \{\pi_i\}_{i\in\{0,\dots,N\}}$: multiple trained priors, each sharing the same $\beta_0$ hyper-parameters. For $\alpha_m$, the hyper-parameter in Equation (3) weighing the relative contribution of $\pi^H_0$'s self-attention entropy objective $IGF(x_k)$ against the remainder of the RL/BC objectives, we performed a sweep over three orders of magnitude ($1e0$, $1e{-1}$, $1e{-2}$). This sweep was run independently of all other sweeps, using the optimal setup for all other hyper-parameters. We chose the value with the lowest combination of $D_{KL}(\pi^H \| \pi^H_0)$ and $H(m)$.
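The role of $\alpha_m$ and $H(m)$ can be made concrete with a small sketch: below, the entropy of the prior's attention mask is computed from hypothetical attention logits and added, weighted by $\alpha_m$, to the prior's distillation loss, so that sparse (low-entropy) attention is preferred. Names, shapes, and the usage comment are illustrative assumptions, not the exact implementation.

```python
import numpy as np

def attention_entropy(mask_logits):
    """Entropy H(m) of the normalised attention mask over history inputs.
    mask_logits: array of shape [batch, n_inputs] (illustrative naming)."""
    z = mask_logits - mask_logits.max(axis=-1, keepdims=True)
    m = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)   # softmax mask
    return float(np.mean(-(m * np.log(m + 1e-8)).sum(axis=-1)))

# Assumed usage: the prior's training loss adds alpha_m * H(m) to its
# distillation objective, favouring low-entropy (sparse) attention maps:
# prior_loss = kl_distillation_loss + alpha_m * attention_entropy(mask_logits)
```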
Four seeds were run, as for all experiments. We observed very small variation in learning across seeds, and used the best-performing seed to bootstrap from for transfer. We also separately performed a hyper-parameter sweep over the $\pi$ learning rate, in the same way as before. We did not perform a sweep over batch size. We found, for both the BC and RL setups, that conditioning $\pi^H$ on the entire history was not always necessary, and sometimes hurt performance. We state the history lengths used for $\pi^H$ for BC in Table 7a. This value was also used for both $\pi^H$ and $Q$ in the RL setup.

We prevent gradient flow from $\pi_0$ to $\pi$ to ensure as fair a comparison between ablations as possible: each prior distils knowledge from the same high-performing policy $\pi$ and dataset. If we simultaneously trained multiple $\pi$ and $\pi_0$ pairs (one for each distinct prior), it is possible that different learnt priors would influence the quality of the policy $\pi$ from which knowledge is distilled. In this paper, we are not interested in investigating how priors affect $\pi$ during BC, but rather how priors influence what knowledge can be distilled and transferred. We observed convergence of the prior KL-distillation loss across tasks and seeds, ensuring a fair comparison." }, { "heading": "D.3 REINFORCEMENT LEARNING", "text": "During this stage of training we freeze the prior and the low-level policy (if applicable, depending on the ablation). In general, any modules reused across sequential tasks are frozen (apart from $\pi^H$ for the covariate shift experiments in Table 8). Any modules that are not shared (such as $\pi^H$ for most experiments) are initialized randomly across tasks. The RL setup is akin to the softlearning repository (Haarnoja et al., 2018b) that we build off. We note any changes in Table 7b. We regularize at the latent or action level depending on the ablation: whether our models are hierarchical, share low-level policies, or use pre-trained modules (low-level policy and prior). Therefore, we only ever regularize with, at most, two $\beta$ hyper-parameters. Hyper-parameter sweeps are performed in the same way as previously. We did not sweep over the Retrace $\lambda$, batch size, or episodic length. For Retrace, we clip the importance weights to $[0, 1]$, as in Munos et al. (2016), and use $\lambda$-returns rather than n-step returns. We found the Retrace operator important for sample-efficient learning." }, { "heading": "E FINAL POLICY ROLLOUTS AND ANALYSIS", "text": "" }, { "heading": "E.1 ATTENTION ANALYSIS", "text": "We plot the full attention maps for APES in Figure 7, including intra-state and intra-action attention. For CorridorMaze, attention is primarily paid to the most recent action $a_t$. For most states in the environment (excluding end-of-corridor or corridor-intersection states), conditioning on the previous action suffices to infer the optimal next action (e.g. continue traversing the depths of a corridor); it is therefore understandable that APES has learnt such an attention mechanism. For the remainder of the environment states (such as corridor ends), conditioning on (a history of) states is necessary to infer optimal behaviour, and APES accordingly pays some attention to states. For Stack, attention is primarily paid to a short recent history of actions, with the weights decaying further into the past. Interestingly, attention over actions corresponding to opening/closing the gripper (the bottom row of Figure 7) decays much more quickly, suggesting that this information is redundant. This makes sense, as there exists strong correlation between gripper actions at successive time-steps, but this correlation decays very quickly. Additionally, APES does not pay attention to states corresponding to gripper position (the final 3 state rows in Figure 7), as this can be inferred from the remainder of the state space as well as from the recent history of gripper actions.
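Maps like those in Figure 7 can be reproduced, at least in spirit, by logging the prior's attention weights at every step of a rollout and averaging them. The sketch below assumes hypothetical per-step weight arrays and is not the exact plotting code used for the figure.

```python
import numpy as np

def average_attention_map(attention_per_step):
    """Aggregate per-step attention into a single rollout-level map.

    attention_per_step: list of arrays, one per rollout step, each of shape
    [n_history, n_features], holding the prior's (hypothetical) attention
    weights over its tiled (state, action) history inputs.
    """
    stacked = np.stack(attention_per_step, axis=0)  # [T, n_history, n_features]
    return stacked.mean(axis=0)                     # average over the rollout

# Rows of the returned map index how far back in the history an input lies,
# columns index the state/action dimensions; large values indicate inputs the
# prior attends to (e.g. the most recent action in CorridorMaze).
```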
" }, { "heading": "E.2 COVARIATE SHIFT ANALYSIS", "text": "In Table 8 we report the additional return (over a randomly initialised $\pi^H$) achieved by pre-training a task-agnostic high-level policy $\pi^H$ (over $p_{source}(K)$) and transferring it during transfer ($p_{trans}(K)$). For experiments that are not hierarchical, we pre-train an equivalent non-hierarchical agent. Theorem 3.1 suggests we would expect a larger improvement in transfer performance for priors that condition on more information. Table 8 confirms this trend, demonstrating the importance of prior covariate shift for the transferability of behaviours. This trend is less apparent for the semi-sparse domain. Additionally, for the interpolated transfer task (2 corridors), the solution lies entirely within the support of the training set of tasks. Naïvely, one would expect pre-training to fully recover the lost performance and match the most performant method. However, this is not the case, as the critic, trained solely on the transfer task, quickly encourages sub-optimal out-of-distribution behaviours." }, { "heading": "E.3 FINAL POLICY ROLLOUTS", "text": "In this section, we show final policy performance (in terms of episodic rollouts) for APES-H1 across each transfer domain. We additionally display the categorical probability distributions of $\pi^H(x_k)$ and $\pi^H_0(F(x_k))$ across each rollout to analyse the behaviour of each. $F(\cdot)$ denotes the chosen information gating function for the prior (referred to as $IGF(\cdot)$ in the main text). For CorridorMaze 2 and 4 corridor, seen in Figs. 8 and 9(a), we see that the full method successfully solves each respective task, traversing the corridors in the correct ordering. The categorical distributions for these domains remain relatively entropic. In general, latent categories cluster into those that lead the agent deeper down a corridor and those that return the agent to the hallway. Policy and prior generally align their categorical distributions, as expected. Interestingly, however, the two categorical distributions deviate the most from each other at the hallway, the bottleneck state (Sutton et al., 1999), where prior multimodality (for the hierarchical $\pi_0$) is greatest (e.g. which corridor to traverse next). In this setting, the policy needs to deviate from the multimodal prior and traverse only the optimal next corridor. We also observe, for the hallway, that the prior allocates one category to each of the five corridors. Such behaviour would not be possible with a flat prior.

Fig. 9(b) plots the same information for Stack 4 blocks. APES-H1 successfully solves the transfer task, stacking all blocks according to their masses. Similar categorical latent-space trends exist for this domain as for the previous one. Most noteworthy is the behaviour of both policy and prior at the bottleneck state: the location above the block stack, where blocks are placed. This location is visited five times within the episode: at the start $s_0$, and four more times, once after each stacked block. Interestingly, for this state, the prior becomes increasingly less entropic upon each successive visit. This suggests that the prior has learnt that the number of feasible high-level actions (corresponding to which block to stack next) reduces upon each visit, as there remain fewer lighter blocks to stack. It is also interesting that for $s_0$, the red categorical value is favoured more than the rest. Here, the red categorical value corresponds to moving towards cube 0, the heaviest cube. This behaviour is as expected, as during BC this cube was stacked first more often than the others, given its mass. For this domain, akin to CorridorMaze, the policy deviates most from the prior at the bottleneck state, as here it needs to behave deterministically (regarding which block to stack next).
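These qualitative observations can be quantified with a short sketch: given the logged categorical probabilities of $\pi^H$ and $\pi^H_0$ along a rollout (hypothetical arrays below), one can plot per-step entropies and the per-step KL between policy and prior; the KL peaks then pick out the bottleneck states discussed above. The array names and shapes are assumptions for illustration only.

```python
import numpy as np

def categorical_entropy(p):
    """Per-step entropy of categorical distributions, p of shape [T, n_latents]."""
    return -(p * np.log(p + 1e-8)).sum(axis=-1)

def policy_prior_kl(pi_probs, prior_probs):
    """Per-step KL( pi^H || pi^H_0 ) between the categorical distributions
    plotted in Figs. 8-9 (hypothetical arrays of shape [T, n_latents])."""
    return (pi_probs * (np.log(pi_probs + 1e-8)
                        - np.log(prior_probs + 1e-8))).sum(axis=-1)

# Peaks of the per-step KL mark states where the policy must deviate from the
# multimodal prior (e.g. the hallway or the above-stack bottleneck), while the
# prior's entropy at the bottleneck shrinks with each successive visit.
```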
" } ], "year": 2022, "abstractText": "The ability to discover behaviours from past experience and transfer them to new tasks is a hallmark of intelligent agents acting sample-efficiently in the real world. Equipping embodied reinforcement learners with the same ability may be crucial for their successful deployment in robotics. While hierarchical and KL-regularized reinforcement learning individually hold promise here, arguably a hybrid approach could combine their respective benefits. Key to these fields is the use of information asymmetry across architectural modules to bias which skills are learnt. While asymmetric choice has a large influence on transferability, existing methods base their choice primarily on intuition in a domain-independent, potentially suboptimal, manner. In this paper, we theoretically and empirically show the crucial expressivity-transferability trade-off of skills across sequential tasks, controlled by information asymmetry. Given this insight, we introduce APES, 'Attentive Priors for Expressive and Transferable Skills', a hierarchical KL-regularized method, heavily benefiting from both priors and hierarchy. Unlike existing approaches, APES automates the choice of asymmetry by learning it in a data-driven, domain-dependent, way based on our expressivity-transferability theorems. Experiments over complex transfer domains of varying levels of extrapolation and sparsity, such as robot block stacking, demonstrate the criticality of the correct asymmetric choice, with APES drastically outperforming previous methods.", "creator": "LaTeX with hyperref" }, "output": [ [ "1. Information Asymmetry not only concerns the amount of history presented, but also the dimensions of the state that are relevant. It would have been nice to see a further application of attention across state dimensions, e.g., in a simple navigation task. This could then also involve environments for which the selected baselines have been originally developed.", "2. On a similar note, the considered environments are relatively simple, although the sparse reward structure renders them of course challenging for an RL algorithm. It would be great to see that the proposed method does not introduce regressions on tasks in which standard behavior priors work well.", "3. The presentation and writing is at times unclear and could be improved (see below)." ], [ "1. I'm concerned that by using the soft masks to allow them to be trained, the method could lose most of its theoretical connection with Thm.3.2 and thus the expressivity-transferability trade-off.",
"2. Unless some mask is strictly zero (or near-zero in practice) the information the corresponding variable carries does not go away. This is probably supported by Fig.5, where the minimum mask value is ≈ exp(−3.1) ≈ 0.045. Their corresponding variables may not play big roles in the prediction, but it's hard to say that they are completely ignored.", "3. The scales of the soft masks can largely be affected by the scales of the input variables. I'm curious if they are properly normalized in the experiments.", "4. The writing could be improved in some ways. In Eq.3, π and π_0 are not conditioned on the masks and O_APES does not take the masks as its input.", "5. The boldfacing of the inputs in the text and Thm.3.2 do not match.", "6. The second sentence of Sec.2 is missing an \"and\"." ], [ "1. The claims do not seem necessarily true. In particular, the authors claim that successful transfer relies on information asymmetry, which may or may not be true, depending on the method one considers for transfer learning.", "2. The relation between the theorems and the ability to transfer is not necessarily direct. Some nuances would be important here.", "3. The empirical setup is quite complex and the number of seeds is very low, bringing into question the reliability of the results." ], [ "1. \"The theoretical sections are a bit hard to follow through.\"", "2. \"In particular it is not clear whether the expressivity-transferrability tradeoff emerges due to the specific sequential nature of the problem being considered and whether the tradeoff would still exist in the usual multi-task style setup described in section 3.1.\"", "3. \"It is unclear what Figure 6 is trying to show: it seems like skill-level exploration, but without the corresponding visualizations of robot behavior it is unclear whether these diverse trajectories are 'meaningfully diverse'.\"", "4. \"Some related works in model-based skill-transfer are not discussed (see A and B below).\"" ] ], "review_num": 4, "item_num": [ 3, 6, 3, 4 ] }