{ "ID": "-5fSvp1ofdd", "Title": "Memory of Unimaginable Outcomes in Experience Replay", "Keywords": "Transfer Multitask and Meta-learning, Robotics, Model-Based Reinforcement Learning, Batch/Offline RL, Deep RL, Continuous Action RL", "URL": "https://openreview.net/forum?id=-5fSvp1ofdd", "paper_draft_url": "/references/pdf?id=aaGFrv7AQP", "Conferece": "ICLR_2023", "track": "Reinforcement Learning (eg, decision and control, planning, hierarchical RL, robotics)", "acceptance": "Reject", "review_scores": "[['2', '3', '3'], ['2', '3', '3'], ['3', '3', '4'], ['3', '3', '2']]", "input": { "source": "CRF", "title": "Memory of Unimaginable Outcomes in Experience Replay", "authors": [], "emails": [], "sections": [ { "heading": null, "text": "Model-based reinforcement learning (MBRL) applies a single-shot dynamics1 model to imagined actions to select those with best expected outcome. The dy-2 namics model is an unfaithful representation of the environment physics, and its3 capacity to predict the outcome of a future action varies as it is trained iteratively.4 An experience replay buffer collects the outcomes of all actions executed in the5 environment and is used to iteratively train the dynamics model. With growing6 experience, it is expected that the model becomes more accurate at predicting the7 outcome and expected reward of imagined actions. However, training times and8 memory requirements drastically increase with the growing collection of experi-9 ences. Indeed, it would be preferable to retain only those experiences that could10 not be anticipated by the model while interacting with the environment. We argue11 that doing so results in a lean replay buffer with diverse experiences that corre-12 spond directly to the model\u2019s predictive weaknesses at a given point in time.13 We propose strategies for: i) determining reliable predictions of the dynamics14 model with respect to the imagined actions, ii) retaining only the unimaginable15 experiences in the replay buffer, and iii) training further only when sufficient novel16 experience has been acquired. We show that these contributions lead to lower17 training times, drastic reduction of the replay buffer size, fewer updates to the18 dynamics model and reduction of catastrophic forgetting. All of which enable the19 effective implementation of continual-learning agents using MBRL.20\n1 INTRODUCTION21\nModel-Based Reinforcement Learning (MBRL) is attractive because it tends to have a lower sample22 complexity compared to model-free algorithms like Soft Actor Critic (SAC) (Haarnoja et al. (2018)).23 MBRL agents function by building a model of the environment in order to predict trajectories of fu-24 ture states based off of imagined actions. An MBRL agent maintains an extensive history of its25 observations, its actions in response to observations, the resulting reward, and new observation in26 an experience replay buffer. The information stored in the replay buffer is used to train a single-shot27 dynamics model that iteratively predicts the outcomes of imagined actions into a trajectory of future28 states. At each time step, the agent executes only the first action in the trajectory, and then the model29 re-imagines a new trajectory given the result of this action (Nagabandi et al. (2018)). Yet, many30 real-world tasks consist in sequences of subtasks of arbitrary length accruing repetitive experiences,31 for example driving over a long straight and then taking a corner. 
Capturing the complete dynamics32 here requires longer sessions of continual learning. (Xie & Finn (2021))33 Optimization of the experience replay methodology is an open problem. Choice of size and mainte-34 nance strategy for the replay buffer both have considerable impact on asymptotic performance and35 training stability (Zhang & Sutton (2017)). From a resource perspective, the size and maintenance36 strategy of the replay buffer pose major concerns for longer learning sessions.37 The issue of overfitting is also a concern when accumulating similar or repetitive states. The buffer38 can become inundated with redundant information while consequently under-representing other im-39 portant states. Indefinite training on redundant data can result in an inability to generalize to, or40 remember, less common states. Conversely, too small a buffer will be unlikely to retain sufficient41 relevant experience into the future. Ideally, a buffer\u2019s size would be the exact size needed to cap-42 ture sufficient detail for all relevant states (Zhang & Sutton (2017)). Note that knowing a priori all43 relevant states is unfeasible without extensive exploration.44\nWe argue that these problems can be subverted by employing a strategy that avoids retaining expe-45 riences that the model already has sufficiently mastered. Humans seem to perform known actions46 almost unconsciously (e.g., walking) but they reflect on actions that lead to unanticipated events47 (e.g. walking over seemingly solid ice and falling through). Such is our inspiration to attempt to48 curate the replay buffer based on whether the experiences are predictable for the model.49\nThrough this work, we propose techniques to capture both common and sporadic experiences with50 sufficient detail for prediction in longer learning sessions. The approach comprises strategies for:51 i) determining reliable predictions of the dynamics model with respect to the imagined actions, ii)52 retaining only the unimaginable experiences in the replay buffer, iii) training further only when53 sufficient novel experience has been acquired, and iv) reducing the effects of catastrophic forget-54 ting. These strategies enable a model to self-manage both its buffer size and its decisions to train,55 drastically reducing the wall-time needed to converge. These are critical improvements toward the56 implementation of effective and stable continual-learning agents.57\nOur contributions can be summarized as follows: i) contributions towards the applicability of MBRL58 in continual learning settings, ii) a method to keep the replay buffer size to a minimum without59 sacrificing performance, iii) a method that reduces the training time. These contributions result in60 keeping only useful information in a balanced replay buffer even during longer learning sessions.61\n2 RELATED WORK62\nCompared to MFRL, MBRL tends to be more sample-efficient (Deisenroth et al. (2013)) at a cost63 of reduced performance. Recent work by Nagabandi et al. (2018) showed that neural networks effi-64 ciently reduce sample complexity for problems with high-dimensional non-linear dynamics. MBRL65 approaches need to induce potential actions which will be evaluated with a dynamics model to66 choose those with best reward. Random shooting methods artificially generate large number of ac-67 tions (Rao (2010)) and model predictive control (MPC) can be used to select actions (Camacho et al.68 (2004)). 
Neural networks (NNs) are a suitable alternative to families of equations used to model the69 environment dynamics in MBRL (Williams et al. (2017)). But, overconfident incorrect predictions,70 which are common in DNNs, can be harmful. Thus, quantifying predictive uncertainty, a weak-71 ness in standard NN, becomes crucial. Ensembles of probabilistic NNs proved a good alternative72 to Bayesian NNs in determining predictive uncertainty (Lakshminarayanan et al. (2016)). Further-73 more, an extensive analysis about the types of model that better estimate uncertainty in the MBRL74 setting favored ensembles of probabilistic NNs (Chua et al. (2018)). The authors identified two75 types of uncertainty: aleatoric (inherent to the process) and epistemic (resulting from datasets with76 too few data points). Combining uncertainty aware probabilistic ensembles in the trajectory sam-77 pling of the MPC with a cross entropy controller the authors demonstrated asymptotic performance78 comparable to SAC but with sample efficient convergence. The MPC, however, is still computation-79 ally expensive (Chua et al. (2018); Zhu et al. (2020)). Quantification of predictive uncertainty serves80 as a notion of confidence in an imagined trajectory. Remonda et al. (2021), used this concept to pre-81 vent unnecessary recalculation, effectively using sequences of actions the model is confident in and82 reducing computations. Our approach also seeks to determine reliable predictions of the dynamics83 model with respect to the imagined actions, but as a basis to manage growth of the experience replay.84 Use of Experience Replay in MBRL: While an uncertainty aware dynamics model helps to mit-85 igate the risks of prediction overconfidence, other challenges remain. Another considerable issue86 when training an MBRL agent is the shifting of the state distribution as the model trains. Experi-87 ence replay was introduced by Lin (1992), and has been further improved upon. Typically in RL,88 transitions are sampled uniformly from the replay buffer at each step. Prioritized experience replay89 (PER) (Schaul et al. (2016)) attempts to make learning more efficient by sampling more frequently90 transitions that are more relevant for learning. PER improves how the model samples experiences91 from the already-filled replay buffer, but it does not address how the replay buffer is filled in the92 first place. In addition, neither work addresses the importance of the size of the replay buffer as a93 hyperparameter (Zhang & Sutton (2017)). Our method attempts to balance the replay buffer by only94 adding experiences that should improve the future prediction capacity and keeps the training time95 bounded to a minimum.96 Task Agnostic Continual Learning: The context of our work originates in tasks consisting in com-97 binations of possibly repetitive subtasks of arbitrary length. In the terminology of Normandin et al.98 (2021), we aim for continuous task-agnostic continual reinforcement learning. Meaning that the99\ntask boundaries are not observed and transitions may occur gradually (Zeno et al. (2021)). In our100 case, the task latent variable is not observed and the model has no explicit information about task101 transitions. 
In such a context, a continual learner can be seen as an autonomous agent learning over an endless stream of tasks, where the agent has to: i) continually adapt in a non-stationary environment, ii) retain memories which are useful, and iii) manage compute and memory resources over a long period of time (Khetarpal et al. (2020), Thrun (1994)). Our proposed strategies satisfy these requirements. Matiisen et al. (2020) address the issue of retaining useful memories in a curriculum learning setting by training a "teacher" function that mandates a learning and re-learning schedule for the agent, assuming that the agent will not frequently revisit old experiences/states and will eventually forget them. Ammar et al. (2015) focus on agents that acquire knowledge incrementally by learning multiple tasks consecutively over their lifetime. Their approach rapidly learns high-performance safe control policies based on previously learned knowledge and safety constraints on each task, accumulating knowledge over multiple consecutive tasks to optimize overall performance. Bou Ammar & Taylor (2014) developed a lifelong learner for policy gradient RL. Instead of learning a control policy for a task from scratch, they leverage the agent's previously learned knowledge. Knowledge is shared via a latent basis that captures reusable components of the learned policies. The latent basis is then updated with newly acquired knowledge. This resulted in accelerated learning of new tasks and an improvement in the performance of existing models without retraining on their respective tasks. With our method, we imbue the RL agent with the ability to self-evaluate and decide in real time whether it has sufficiently learned the current state. Unlike the method presented by Matiisen et al. (2020), our method requires no additional networks to be trained in parallel.
Xie & Finn (2021) identified two core challenges in the lifelong learning setting: enabling forward transfer, i.e. reusing knowledge from previous tasks to improve learning of new tasks, and improving backward transfer, which can be seen as avoiding catastrophic forgetting (Kirkpatrick et al. (2017)). They developed a method that exploits data collected from previous tasks to cumulatively grow the agent's skill set using importance sampling. Their method requires the agent to know when the task changes, whereas our method does not have this constraint. Additionally, they focus on forward transfer only, while our method addresses both forward and backward transfer.
3 PRELIMINARIES
At each time $t$, the agent is at a state $s_t \in S$, executes an action $a_t \in A$, and receives from the environment a reward $r_t = r(s_t, a_t)$ and a state $s_{t+1}$ according to some environment transition function $f : S \times A \rightarrow S$. RL consists of training a policy towards maximizing the accumulated reward obtained from the environment. The goal is to maximize the sum of discounted rewards $\sum_{i=t}^{\infty} \gamma^{i-t} r(s_i, a_i)$, where $\gamma \in [0, 1]$. Instead, given a current state $s_t$, MBRL artificially generates a large number of potential future actions, for instance using random shooting (Rao (2010)) or the cross-entropy method (Chua et al. (2018)). A detailed treatment of these methods is beyond the scope of this paper; we defer the interested reader to the bibliography. 
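As a rough illustration of how such randomly generated actions are scored with a learned model, the following is a minimal random-shooting planner. The names `dynamics_fn`, `reward_fn`, and the action bounds are hypothetical placeholders, not interfaces from this paper.

```python
import numpy as np

def plan_random_shooting(s_t, dynamics_fn, reward_fn, act_dim,
                         horizon=25, n_candidates=1000, gamma=0.99,
                         act_low=-1.0, act_high=1.0):
    """Score random action sequences with a learned model and
    return the first action of the best imagined trajectory."""
    # Sample candidate action sequences uniformly at random.
    actions = np.random.uniform(act_low, act_high,
                                size=(n_candidates, horizon, act_dim))
    returns = np.zeros(n_candidates)
    states = np.repeat(s_t[None, :], n_candidates, axis=0)
    for h in range(horizon):
        returns += (gamma ** h) * reward_fn(states, actions[:, h])
        states = dynamics_fn(states, actions[:, h])   # imagined next states
    best = np.argmax(returns)
    return actions[best, 0]   # MPC: execute only the first action

```
The cross-entropy method used in this work (see Appendix A) replaces the uniform sampling above with an iteratively refined Gaussian over action sequences.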
MBRL attempts to learn a discrete-time dynamics model $\hat{f}(s_t, a_t)$ to predict the future state $\hat{s}_{t+\Delta t}$ that results from executing action $a_t$ at state $s_t$. To reach a state further into the future, the dynamics model iteratively evaluates sequences of actions $a_{t:t+H} = (a_t, \ldots, a_{t+H-1})$ over a longer horizon $H$, to maximize their discounted reward $\sum_{i=t}^{t+H-1} \gamma^{i-t} r(s_i, a_i)$. These sequences of actions with predicted outcomes are called imagined trajectories. The dynamics model $\hat{f}$ is an inaccurate representation of the transition function $f$, and the future is only partially observable. So, the controller executes only a single action $a_t$ of the trajectory before solving the optimization again with the updated state $s_{t+1}$. The process is formalized in Algorithm 1. The dynamics model $\hat{f}_\theta$ is learned from data $D_{env}$ collected on the fly. With $\hat{f}_\theta$, the simulator starts and the controller is called to plan the best trajectory, resulting in $a^*_{t:t+H}$. Only the first action of the trajectory, $a^*_t$, is executed in the environment and the rest is discarded. This is repeated for TaskHorizon steps. The data collected from the environment is added to $D_{env}$ and $\hat{f}_\theta$ is trained further. The process repeats for NIterations. Note that generating imagined trajectories requires subsequent calls to the dynamics model to chain predicted future states $s_{t+n}$ with future actions up to the horizon, and so it is only partially parallelizable.
Dynamics model. We use a probabilistic model that represents a probability distribution over the next state given the current state and an action. Specifically, we use a regression model realized as a neural network, similar to Lakshminarayanan et al. (2016) and Chua et al. (2018). The last layer of the model outputs the parameters of a Gaussian distribution that models the aleatoric uncertainty (the uncertainty due to the randomness of the environment). Its parameters are learned together with the parameters of the neural network. To model the epistemic uncertainty (the uncertainty of the dynamics model due to generalization errors), we use ensembles with bagging, where the members of the ensemble are identical and differ only in their initial weight values. Each element of the ensemble takes as input the current state $s_t$ and action $a_t$ and is trained to predict the difference between $s_t$ and $s_{t+1}$, instead of directly predicting the next state. Thus the learning objective for the dynamics model becomes $\Delta s = s_{t+1} - s_t$. $\hat{f}_\theta$ outputs the probability distribution of the future state $p_{s(t+1)}$, from which we can sample the future state and its confidence: $\hat{s}, \hat{s}_\sigma = \hat{f}_\theta(s, a)$, where the confidence $\hat{s}_\sigma$ captures both epistemic and aleatoric uncertainty.
Algorithm 1 MBRL
  Initialize D with one iteration of a random controller
  for Iteration i = 1 to NIterations do
    Train f̂ given D
    for Time t = 0 to TaskHorizon do
      Get a*_{t:t+H} from ComputeOptimalTrajectory(s_t, f̂)
      Execute a*_t from optimal actions a*_{t:t+H}
      Record outcome: D ← D ∪ {s_t, a*_t, s_{t+1}}
Trajectory Generation. Each ensemble element outputs the parameters of a normal distribution. To generate trajectories, $P$ particles are created from the current state, $s^p_t = s_t$, which are then propagated by $s^p_{t+1} \sim \hat{f}_b(s^p_t, a_t)$, using a particular bootstrap element $b \in \{1, \ldots, B\}$. 
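As a concrete illustration of the probabilistic ensemble and the particle propagation just described, the sketch below shows one Gaussian output head trained on state differences and a propagation step in which every particle is advanced by its own, fixed bootstrap member. It is a simplified PyTorch sketch under our own assumptions (layer sizes, activation, and the `particle_members` index tensor are illustrative), not the implementation used in this paper.

```python
import torch
import torch.nn as nn

class GaussianDynamicsMember(nn.Module):
    """One ensemble member: predicts mean and log-variance of delta_s = s_{t+1} - s_t."""
    def __init__(self, state_dim, act_dim, hidden=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + act_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 2 * state_dim))   # outputs [mean, log_var]

    def forward(self, s, a):
        mean, log_var = self.net(torch.cat([s, a], dim=-1)).chunk(2, dim=-1)
        return mean, log_var

def nll_loss(member, s, a, s_next):
    """Gaussian negative log-likelihood on the state delta (aleatoric term)."""
    mean, log_var = member(s, a)
    delta = s_next - s
    return (((delta - mean) ** 2) * torch.exp(-log_var) + log_var).mean()

def propagate_particles(members, particle_states, particle_members, actions):
    """One propagation step: each particle keeps its initial bootstrap member
    (particle_members holds the fixed ensemble index of every particle)."""
    next_states = torch.empty_like(particle_states)
    for b, member in enumerate(members):
        idx = particle_members == b
        if idx.any():
            mean, log_var = member(particle_states[idx], actions[idx])
            std = torch.exp(0.5 * log_var)
            next_states[idx] = particle_states[idx] + mean + std * torch.randn_like(std)
    return next_states
```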
Chua et al. (2018) experimented with diverse methods to propagate particles through the ensemble. The TS∞ method delivered the best results; it refers to particles never changing their initial bootstrap element. Doing so keeps the two kinds of uncertainty separable at the end of the trajectory: the aleatoric state variance is the average variance of particles of the same bootstrap, whilst the epistemic state variance is the variance of the averages of particles with the same bootstrap index. We also use TS∞.
Control. To select the best course of action leading to $s_H$, MBRL generates a large number of trajectories $K$ and evaluates them in terms of reward. To find the actions that maximize reward, we use the cross-entropy method (CEM) (Botev et al. (2013)), an algorithm for solving optimization problems based on cross-entropy minimization. CEM gradually changes the sampling distribution of the random search so that rare events become more likely, and it estimates a sequence of sampling distributions that converges to a distribution with probability mass concentrated in a region of near-optimal solutions. Appendix A details the use of CEM to obtain the optimal sequence of actions $a^*_{t:t+H}$.
4 TOWARDS CONTINUAL LEARNING
Applying MBRL to a continual learning setting is a promising avenue for research. The dynamics model could be constantly improving and adapting dynamically to changes in the environment. Many real-world tasks can be broken into sequences of subtasks of arbitrary length. Capturing the complete dynamics then requires exposure to longer sessions of continual learning. Arbitrarily long repetitive tasks lead to increasing redundancy in the experience replay and a constantly growing amount of collected experience. These issues hinder the use of MBRL in continual learning settings.
What to add to the replay buffer: We posit that it would be preferable to retain only those experiences that could not be adequately anticipated by the model during each episode in the environment. Essentially, we would like to add to the replay buffer only observations for which the model issued a poor prediction. Conversely, we would like to avoid filling the replay buffer or updating the model on observations that the model is already good at predicting. We contend that these two elements eventually lead to a balanced replay buffer that contains only relevant observations. This contributes to the objective of continual learning.
5 UARF: UNCERTAINTY AWARE REPLAY FILTERING
Continual learning requires the MBRL agent to adapt in a non-stationary environment, to retain memories that are useful whilst avoiding catastrophic forgetting, and to manage compute and memory resources over a long period of time (Khetarpal et al. (2020)). The proposed method, UARF, addresses these issues with a variety of strategies. Algorithm 2 is the main algorithm used to select which observations to append to the replay buffer. The optimal actions $a^*_{t:t+H}$ are computed by the ComputeOptimalTrajectory function (see Appendix A) given the current state of the environment $s_t$ and $\hat{f}$. The future trajectory and its uncertainty, $p^*_{r(t+1:t+1+H)}$, are then obtained by applying $\hat{f}$ to $a^*_{t:t+H}$ and $s_t$. The variable unreliableModel is set to true when the algorithm believes the imagined trajectory not to be trustworthy. 
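As a rough sketch of how such a reliability flag can be computed, anticipating the Wasserstein-distance test detailed in the next paragraphs, the snippet below compares the reward distribution imagined at planning time against the one re-projected from the current true state, step by step. It uses scipy's one-dimensional Wasserstein distance over hypothetical per-step reward samples; this is our simplification, not the authors' exact Algorithm 2.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def is_unreliable(imagined_rewards, projected_rewards, beta, lookahead):
    """Each argument is an array of shape (horizon, n_particles) holding
    per-step reward samples. Returns True when the two distributions have
    drifted apart within the first `lookahead` steps."""
    steps = min(lookahead, len(imagined_rewards), len(projected_rewards))
    for step in range(steps):
        d = wasserstein_distance(imagined_rewards[step], projected_rewards[step])
        if d > beta:      # distributions diverged -> trajectory no longer trusted
            return True
    return False

# Example: the resulting flag drives both replanning and buffer additions.
# unreliable_model = is_unreliable(p_star_r, p_prime_r, beta=0.005, lookahead=10)
```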
Depending on the value of unreliableModel, the calculation of new trajectories and additions to the replay buffer can be avoided, and therefore both computation time and the size of the replay buffer are reduced. If unreliableModel is False, the next predicted action is executed in the environment. Subsequent actions from $a^*_{t:t+H}$ are executed until the unreliableModel flag is set to True or the environment reaches TaskHorizon steps. The process is repeated for the maximum number of iterations allowed for the task. After the first action, every time an action from $a^*_{t+1:t+H}$ is executed, trajectory computation is avoided and the new observation is not added to the replay buffer, on the basis that the model can already predict its outcome. If unreliableModel is True, the algorithm calculates a new trajectory and adds the current observation to the replay buffer. In this way, the buffer stores only observations whose outcome the model could not predict (imagine).
Trustworthy imagination (Algorithm 2 L:18-21). The algorithm that assigns a value to unreliableModel is named BICHO. BICHO essentially keeps unreliableModel at False as long as the reward projected into the future does not differ significantly from the imagined future reward $p^*_r$ and the confidence of the model remains high. BICHO is built on the assumption that if parts of the trajectory do not vary, their projected reward will be as imagined by the model, with some confidence. After calculating a trajectory, the distribution of rewards $p^*_r$ is calculated for $H$ steps into the future. Then, at each step of the environment, regardless of whether recalculation was skipped or not, a new reward distribution $p'_r$ over $H$ steps is projected, starting from the state $s_t$ given by the environment and using the actions $a^*_{t+i}$ of the imagined trajectory. We use the Wasserstein distance (Bellemare et al. (2017)) to measure how much these two distributions change after each time step in the environment. If the change is greater than $\beta$ (a hyperparameter to tune), then unreliableModel is set to True. We can control how many steps ahead we would like to compare the two distributions: the comparison is done for just $c$ steps ($c < H$), where $c$ is also a hyperparameter to tune. If the distributions differ significantly, then the trajectory is unreliable. That is, if the projected reward differs from the imagined one, the outcome of the actions is uncertain and the trajectory should be recalculated.
Even for a model that has converged, accurately predicting very long trajectories is impossible. Recalculations inevitably occur at the end of trajectories. Such recalculations do not necessarily represent the appearance of unseen information, but rather a limitation of an otherwise successful model in a complex environment. Hence, we would not want to add them to the buffer. The maximum prediction distance (MPD) defines a cutoff for a trajectory and adjusts the strictness of the filtering mechanism. Refer to Appendix E for an extensive analysis.
Updates on novel information (Algorithm 2 L:24-25). Over-training the dynamics model leads to instabilities due to overfitting. This problem is exacerbated when the replay buffer contains just the minimum essential data. If we only filter the replay buffer, continuously updating the parameters of the dynamics model will eventually lead to overfitting. Instead, our method updates the parameters of the dynamics model only when there is sufficient new information in the replay buffer. We train
We train244 the dynamics model only when new data exceeds the new data threshold hyper parameter. For245 our experiments we set this variable to 0.01 training only when 1% of the experiences in the replay246 buffer are new since the last update of the parameters of the dynamics model.247\n6 EXPERIMENTS248\nThe primary purpose of the proposed algorithm is for the resulting replay buffer to retain only249 relevant, non-redundant, experiences that will be useful for learning the task. We envision applying250 this method to tasks that require longer training sessions and in continual learning settings.251 We designed three experimental procedures. The first experiment seeks to establish that our method252 indeed retains a reduced buffer sufficient for achieving expected rewards when learning a single253 task throughout long training sessions. To this end, we evaluate the proposed method in benchmark254 environments for higher number of episodes than in Chua et al. (2018). The second experiment255 seeks to prove that UARF retains a small number of complementary experiences compared to non-256 filtering baseline algorithms when training on a sequence of different but related tasks in a continual257 learning setting. We evaluate our method in a combined task including unseen subtasks. The third258 experiment seeks to show how UARF addresses the effects of catastrophic forgetting.259\n6.1 E1\u2013 CONTINUING TO LEARN A TASK AFTER CONVERGENCE260\nThis experiment is intended to show that our method retains sufficient experience to solve the task261 while curtailing buffer growth and unnecessary model updates. We intend to prove that this results in262\na dramatic reduction in the replay buffer size (which is free of any artificially-imposed limits) while263 retaining strong performance (per-episode reward) and reducing per-episode wall clock run-time.264\nWe use the MuJoCo (Todorov et al. (2012))265 physics engine and environments Cartpole266 (CP), Pusher (PU) and Reacher (RE) with task267 length (TaskH) and trajectory horizon (H)268 chosen for a valid comparison with Chua et al.269 (2018). With similar training scenarios, Re-270 monda et al. (2021) trained CP for 30 episodes,271 PU and RE for 150. Instead, we trained each272 for 100 episodes. We also included a modified273 version of the Masspoint environment (Thanan-274 jeyan et al. (2020)) (also used in E2). Mass-275 point is a navigation task in which a point mass276 navigates to a given goal. It is a 5-dimensional277 (x, y, vx, vy, \u03c1) state domain. Where (x, y) is278 the position of the agent, (vx, vy) its speed, and279 \u03c1 is the distance between the agent and the clos-280\nest point to a given path. The agent can exert force in cardinal directions and experiences drag coef-281 ficient \u03c8. We use \u03c8 = 0.6 and included noise in the starting position. We modified the goal of the282 agent so that it must move as fast as possible without deviating from a given path. Each task and its283 complexity is then determined by the geometry of the path to be followed. The reward is calculated284 as r = V (1 \u2212 |\u03c1|). Where V is the speed of the agent and \u03c1 the distance to the task\u2019s path. This285 experiment used sector1 (Figure in Appendix B) and Hyperparameters shown in Appendix F.286\nWe assess performance in terms of per-episode reward, per-episode wall time, and replay buffer size.287 We evaluate three algorithms: baseline (BL) is a conventional MBRL (PETS Chua et al. 
(2018)); BICHO, which uses this functionality to avoid unnecessary recalculation; and UARF. BICHO and UARF used the same values of $\beta$ and look-ahead, estimated empirically to produce a reasonable balance in terms of per-episode reward and percentage of recalculation. All experiments use random seeds and randomized initial conditions for each run, and ran on workstations with Nvidia 3080 Ti GPUs.
Results: Fig 1 (top) shows the results obtained in CP. Fig 1 (mid-right) shows the size of the replay buffer during training. We observe that while the replay buffer keeps growing in the case of BL and BICHO, the size of the buffer derived from UARF is comparably flat: the buffer resulting from UARF is 10x smaller. The training time per episode (Fig 1, mid-left) remains nearly constant and lower for UARF. BL takes substantially longer than both BICHO and UARF to complete an episode. The wall times of both BL and BICHO exhibit linear growth: it takes longer to update the model as the replay buffer grows linearly. Fig 1 (left) shows comparable reward per episode for all methods. Results in Fig 1 for PU (row 2), RE (row 3) and Masspoint (bottom) are consistent with those of CP. Fig 2 illustrates the management of buffer growth in Masspoint by showing exactly at which steps experiences are added to the replay buffer during untrained (E1) and trained (E99) episodes. These plots reveal that when the model is untrained, many experiences are added to the buffer throughout the episode. After the model is trained (E99), UARF stops adding experiences to the buffer, as the model is able to predict them; new experiences are deemed redundant and not useful to the model. The results support our claim that UARF obtains a drastically smaller replay buffer that is intelligently populated with only relevant information. This is achieved while maintaining strong performance in the environment compared to BL. Note that while the curves are plotted per episode, it would be misleading to assume that all methods converge in roughly the same time. The time per episode for UARF is at least half that of BL in approximately every environment and remains stable, while it increases linearly for BL and BICHO with increasing buffer size. The average total wall time for CP was BL=1.83h, BICHO=0.59h and UARF=0.57h. For PU, it was BL=0.97h, BICHO=0.73h and UARF=0.66h. For RE, it was BL=1.99h, BICHO=1.55h and UARF=1.45h, and for MP it was BL=2.15h, BICHO=1.94h and UARF=1.68h.
6.2 EX 2. CONTINUAL LEARNING EXPERIMENT
This experiment is set in task-agnostic continual reinforcement learning (the model is not aware of tasks or task transitions). In this setting, we sought to prove that UARF maintains a leaner and more relevant collection of experiences in the replay buffer than do baseline algorithms. These characteristics of the proposed algorithm, we posit, result in strong test performance with less data and greater stability. The existence of these characteristics can be verified by observing (after training) the size of the buffer, the number of experiences from each maneuver present in the buffer, and the performance of the models on the test task. We used the Masspoint racing environment, defining different simple tasks that can be composed to solve a complex, unseen one. Each model must transfer what it learned by training on each sub-task and apply this knowledge to navigate a more complex, unseen task.
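Because the comparisons that follow hinge on how each method decides when to replan, when to store an experience, and when to retrain, the sketch below restates that control flow in schematic form. It is our simplification of Section 5, not the authors' exact Algorithm 2; `plan`, `project_rewards`, `train`, and `env.task_horizon` are hypothetical interfaces, and `is_unreliable` is the reliability check sketched earlier.

```python
def uarf_episode(env, model, buffer, beta, lookahead, new_data_threshold=0.01):
    """One episode: replan and store an experience only when the imagined
    trajectory is unreliable; retrain only once enough new data accumulated."""
    s = env.reset()
    new_since_training = 0
    actions, imagined_rewards = plan(model, s)       # a*_{t:t+H} and p*_r
    step = 0
    for _ in range(env.task_horizon):
        # Re-project the reward distribution from the true current state.
        projected_rewards = project_rewards(model, s, actions[step:])
        unreliable = is_unreliable(imagined_rewards[step:], projected_rewards,
                                   beta, lookahead)
        if unreliable:
            actions, imagined_rewards = plan(model, s)   # recompute the trajectory
            step = 0
        s_next, reward, done, _ = env.step(actions[step])
        if unreliable:
            # Store only outcomes the model failed to imagine.
            buffer.append((s, actions[step], reward, s_next))
            new_since_training += 1
        if new_since_training >= new_data_threshold * max(len(buffer), 1):
            train(model, buffer)                         # update on novel data only
            new_since_training = 0
        s, step = s_next, step + 1
        if done:
            break
```
The 1% new-data threshold corresponds to the setting reported in Section 5.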
All of the algorithms had a virtually un-337\nlimited replay buffer size. Each model was trained338\nfor 30 episodes on each sub-task and then tested on339\nthe test task.340\nResults Figure 3 shows episode reward, wall-time, buffer size during training, and new experiences341 added to the buffer per episode. Vertical lines illustrate task divisions. High episode reward indicates342 that each model adequately learns each subtask. UARF maintains almost a constant wall-time, while343 BL and BICHO increase as experience accumulates. Buffer growth for BL and BICHO is linear, but344 UARF evidences asymptotic growth (13x smaller) adding no new experiences at the end of training.345\nFigure 3-4 shows the buffer growth of UARF. A larger amount of additions to the replay buffer346 occur while training the first tasks. Growth slows to a near halt during the last tasks. This is the347 case for example with the fourth task (chicane inverted). The previous task (chicane) is similar, and348 the information to solve the previous task is enough that the algorithm does not require a significant349 amount of new experience to solve chicane inverted. Figure 4 shows the distribution of experiences350 from each sub-task present in each algorithm\u2019s replay buffer immediately following training. BL351 and BICHO employ a naive approach, resulting in replay buffers with distributions of experience352 determined exclusively by the length of the various maneuvers. The filtering mechanism of UARF353 results in a distribution of experience with some maneuvers having limited representation (e.g., the354 inverse maneuvers) This is because the UARF algorithm intelligently decides to omit redundant355 experiences from the buffer and leaves only the relevant ones. Figure 3 right shows that all three356 algorithms result in a model that adequately solves the test task. UARF continues to manage buffer357 growth while achieving high performance. The results support our initial hypothesis by illustrating358 clearly the proposed algorithm\u2019s propensity to maintain a smaller and more relevant replay buffer359 while achieving the performance of the baseline in a continual learning setting.360\n6.3 EX 3. CATASTROPHIC FORGETTING361\nOur approach helps to mitigate catastrophic forgetting. When using a fixed replay buffer size, it is362 important to ensure that the appropriate maximum buffer size is chosen (Zhang & Sutton (2017)).363 If this value is undertuned, important experiences can be jettisoned, and catastrophic forgetting can364 occur. To illustrate how UARF helps to alleviate this risk, we ran the same experiment shown in365 section Ex.2 but with a replay buffer of fixed size (5000 samples; roughly 4x the replay buffer size366 used by UARF in the unlimited size setting). Table 1 compares rewards achieved by each algorithm367 with both unlimited and fixed buffers. The models were validated on the full track and also on a368 maneuver that was trained early on in the training process (c1 inverse). Results reveal that with369 an undertuned fixed buffer size, BL loses about 10% performance both on the full track and on c1370 inverse. This is indicative of the fact that the non-filtering algorithms are hitting the buffer size371 cap, throwing away valuable experiences, and forgetting how to properly solve maneuvers that were372 trained early on. 
This impacts performance on the full track as well.\n373\n7 DISCUSSION AND CONCLUSION374\nThe results in E1 reveal that continuing to run our algorithm in a repetitive environment with re-375 dundant or monotonous actions leads to, in some tasks, no increase in buffer and reduced dynamics376 model updates. This has the consequence of reduced running and training times, while reducing the377 effects of catastrophic forgetting and keeping the replay buffer size to a bare minimum. In E2, a con-378 tinual learning setting, we demonstrated that using our approach leads outcomes with 1/25th of the379 experiences without performance degradation. UARF effectively deals with an unbounded growth380 of the replay buffer, which again reduces training time and instabilities. This effect is accentuated381 when training on a continual learning setting. UARF uses a buffer 43x smaller than the baseline.382\nThe replay buffer is an instrument that makes the use of deep neural networks in RL more stable383 and it is an essential part in algorithms such as PETs. Such analyses of replay buffer are scarce. But384 recently, research has turned to analyze the contents and strategies to manage the replay buffer of385 RL agents Fedus et al. (2020), and also in supervised learning Aljundi et al. (2019). We contribute386 to such body of work analyzing and offering strategies to manage growth of replay buffer in model387 based RL. Having managed growth, there are several aspects we would like to turn to in the future:388 i) identifying task boundary from the novelty of experiences, ii) managing what to forget for limited389 size buffers, iii) managing what to remember / refresh when a change in task is evident. All this390 would allow to run agents for arbitrary time without having to deal with size of the buffer and would391 offer promising opportunities for deploying MBRL in a continual learning setting.392\nBICHO could be used to prioritize entries in the RB where the model was uncertain. Indeed, prior-393 itized buffer strategies support the usage of experience once it is in the buffer, but as the authors of394 the PER paper state, strategies for what to add and when (our work) are important open avenues for395 research. We did not explore our methods in environments where the tasks have interfering dynam-396 ics. But, if the dynamics change, poor predictions by the model will result in adding experiences to397 the replay buffer. What happens if interfering tasks occur permanently is an interesting follow up.398\nIn summary, we proposed strategies that comply with requirements for continual learning. Our ap-399 proach retains only memories which are useful: it obtains lean and diverse replay buffers capturing400 both common and sporadic experiences with sufficient detail for prediction in longer learning ses-401 sions. Our approach manages compute and memory resources over longer periods: it deals with the402 unbounded growth of the replay buffer, its training time and instability due to catastrophic forgetting.403 These results offer promising opportunities for deploying MBRL in a continual learning setting.404\n8 REPRODUCIBILITY STATEMENT405\nTo make our experiments reproducible, we provide the source code in the supplementary material.406 We include instructions describing how to run all the experiments and to create the images. 
We in-407 clude the source code of the proposed algorithms, the MassPoint environment and clear instructions408 showing how to install extra packages and dependencies needed to reproduce our experiments.409\nREFERENCES410 Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection411 for online continual learning, 2019.412\nHaitham Bou Ammar, Rasul Tutunov, and Eric Eaton. Safe policy search for lifelong reinforcement413 learning with sublinear regret, 2015.414\nMarc G Bellemare, Will Dabney, and Re\u0301mi Munos. A distributional perspective on reinforcement415 learning. In International Conference on Machine Learning, pp. 449\u2013458. PMLR, 2017.416\nZdravko I. Botev, Dirk P. Kroese, Reuven Y. Rubinstein, and Pierre L\u2019Ecuyer. Chapter 3 - the417 cross-entropy method for optimization. In C.R. Rao and Venu Govindaraju (eds.), Handbook of418 Statistics, volume 31 of Handbook of Statistics, pp. 35 \u2013 59. Elsevier, 2013. doi: https://doi.419 org/10.1016/B978-0-444-53859-8.00003-5. URL http://www.sciencedirect.com/420 science/article/pii/B9780444538598000035.421\nHaitham Bou Ammar and Matthew Taylor. Online multi-task learning for policy gradient methods.422 01 2014.423\nE.F. Camacho, C. Bordons, and C.B. Alba. Model Predictive Control. Advanced Textbooks in424 Control and Signal Processing. Springer London, 2004. ISBN 9781852336943. URL https:425 //books.google.at/books?id=Sc1H3f3E8CQC.426\nKurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement427 learning in a handful of trials using probabilistic dynamics models, 2018.428\nM. P. Deisenroth, G. Neumann, and J. Peters. 2013. doi: 10.1561/2300000021.429\nWilliam Fedus, Prajit Ramachandran, Rishabh Agarwal, Yoshua Bengio, Hugo Larochelle, Mark430 Rowland, and Will Dabney. Revisiting fundamentals of experience replay, 2020.431\nTuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy432 maximum entropy deep reinforcement learning with a stochastic actor, 2018.433\nKhimya Khetarpal, Matthew Riemer, Irina Rish, and Doina Precup. Towards continual reinforce-434 ment learning: A review and perspectives, 2020.435\nJames Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A436 Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcom-437 ing catastrophic forgetting in neural networks. Proceedings of the national academy of sciences,438 114(13):3521\u20133526, 2017.439\nBalaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive440 uncertainty estimation using deep ensembles, 2016.441\nLong-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching.442 Machine Learning, pp. 293\u2013321, 1992.443\nTambet Matiisen, Avital Oliver, Taco Cohen, and John Schulman. Teacher\u2013student curriculum learn-444 ing. IEEE Transactions on Neural Networks and Learning Systems, 31(9):3732\u20133740, 2020. doi:445 10.1109/TNNLS.2019.2934906.446\nAnusha Nagabandi, G. Kahn, Ronald S. Fearing, and S. Levine. Neural network dynamics for447 model-based deep reinforcement learning with model-free fine-tuning. 2018 IEEE International448 Conference on Robotics and Automation (ICRA), pp. 7559\u20137566, 2018.449\nFabrice Normandin, Florian Golemo, Oleksiy Ostapenko, Pau Rodriguez, Matthew D Riemer, Julio450 Hurtado, Khimya Khetarpal, Dominic Zhao, Ryan Lindeborg, Timothe\u0301e Lesort, et al. 
Sequoia: A451 software framework to unify continual learning research. arXiv e-prints, pp. arXiv\u20132108, 2021.452\nAnvil V. Rao. A survey of numerical methods for optimal control. Advances in the Astronautical453 Science, 135:497\u2013528, 2010.454\nAdrian Remonda, Eduardo E. Veas, and Granit Luzhnica. Acting upon imagination: when to trust455 imagined trajectories in model based reinforcement learning. CoRR, abs/2105.05716, 2021. URL456 https://arxiv.org/abs/2105.05716.457\nTom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. In458 ICLR (Poster), 2016.459\nBrijen Thananjeyan, Ashwin Balakrishna, Ugo Rosolia, Felix Li, Rowan McAllister, Joseph E. Gon-460 zalez, Sergey Levine, Francesco Borrelli, and Ken Goldberg. Safety augmented value estimation461 from demonstrations (saved): Safe deep model-based rl for sparse cost robotic tasks, 2020.462\nS. Thrun. A lifelong learning perspective for mobile robot control. In Proceedings of IEEE/RSJ In-463 ternational Conference on Intelligent Robots and Systems (IROS\u201994), volume 1, pp. 23\u201330 vol.1,464 1994. doi: 10.1109/IROS.1994.407413.465\nEmanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control.466 In IROS, pp. 5026\u20135033. IEEE, 2012. ISBN 978-1-4673-1737-5.467\nGrady Williams, Paul Drews, Brian Goldfain, James M. Rehg, and Evangelos A. Theodorou. In-468 formation theoretic model predictive control: Theory and applications to autonomous driving,469 2017.470\nAnnie Xie and Chelsea Finn. Lifelong robotic reinforcement learning by retaining experiences.471 arXiv preprint arXiv:2109.09180, 2021.472\nChen Zeno, Itay Golan, Elad Hoffer, and Daniel Soudry. Task-Agnostic Continual Learning Using473 Online Variational Bayes With Fixed-Point Updates. Neural Computation, 33(11):3139\u20133177, 10474 2021. ISSN 0899-7667. doi: 10.1162/neco a 01430. URL https://doi.org/10.1162/475 neco_a_01430.476\nShangtong Zhang and Richard S. Sutton. A Deeper Look at Experience Replay. arXiv e-prints, art.477 arXiv:1712.01275, December 2017.478\nGuangxiang Zhu, Minghao Zhang, Honglak Lee, and Chongjie Zhang. Bridging imagination and479 reality for model-based deep reinforcement learning. NeurIPS, 2020.480\nA OPTIMAL TRAJECTORY GENERATION481\nAlgorithm 3 shows the use of CEM to compute the optimal sequence of actions a\u2217t:t+H .\nAlgorithm 3 Compute Optimal Trajectory Input: sinit: current state of the environment, dynamics model f\u0302 1: Initialize P particles, sp\u03c4 , with the initial state, sinit 2: for Actions sampled at:t+H \u223c CEM(.), 1 to CEMSamples do 3: Propagate state particles sp\u03c4 using TS and f\u0302 |{D, at:t+H}\n4: Evaluate actions as t+H\u2211 \u03c4=t 1 P P\u2211 p=1 r(sp\u03c4 , a\u03c4 ) 5: Update CEM(.) distribution 6: return a\u2217t:t+H\n482\nB MASS POINT TASKS483\nEach algorithm trains a model on a sequence of seven separate sub-tasks: two corners and their484 inverses, a chicane and its inverse, and a straight (Figure 5). The full track contains some of the sub-485 tasks seen during training (Shown with different colors in the full-track image (Appendix Figure 5)486 in addition to tasks unseen during training (shown in black in the full-track image).487\nC ENVIRONMENTS488\nWe evaluate the methods on agents in the MuJoCo Todorov et al. (2012) physics engine. To establish489 a valid comparison with Chua et al. 
(2018) we use four environments with corresponding task length490 (TaskH) and trajectory horizon (H).491\n\u2022 Cartpole (CP): S \u2208 R4, A \u2208 R1, TaskH 200, H 25492 \u2022 Reacher (RE): S \u2208 R17, A \u2208 R7, TaskH 150, H 25493 \u2022 Pusher (PU): S \u2208 R20, A \u2208 R7, TaskH 150, H 25494 \u2022 Masspoint: S \u2208 R5, A \u2208 R2, TaskH 290, H 25495\nThis means that each iteration will run for TaskH , task horizon, steps, and that imagined trajectories496 include H trajectory horizon steps. S \u2208 Ri, A \u2208 Rj refers to the dimensions of the environment497 state consisting in a vector of i components and the action consisting in a vector of j components.498\nD EX 2. CONTINUAL LEARNING EXPERIMENT. ADDITIONAL RESULTS499\nFigure 6 shows additional results with the wall-time during the training process for the continual500 learning experiment.501\nE MAXIMUM PREDICTION DISTANCE502\nAn additional parameter of interest when using UARF is what we call the \u201dmaximum prediction503 distance\u201d or MPD. This parameter operates on the assumption that even for a model that has reached504 convergence, in some environments, predicting trajectories of great length is impossible. As such,505 recalculations must inevitably occur at the end of such long trajectories. These recalculations do506 not necessarily represent the appearance of new, unseen information, but rather a limitation of the507 successful model in a complex environment. Hence, we would not want to add these experiences to508 the buffer.509\nWhere we define the cutoff for a trajectory of \u201dgreat length\u201d can be changed, and it serves to adjust510 the strictness of UARF\u2019s filtering mechanism. For Ex.1 and Ex.2, we chose to set the maximum511 prediction distance to 1 to ensure the strictest filtering of the replay buffer.512\nIn 7, we evaluate the effect of the MPD on the513 performance of UARF in the cartpole environ-514 ment. We were particularly interested in the515 effect on the rate of recalculation and on the516 size of the replay buffer. In 7 one can see that517 the models converge with no issue, but they do518 differ slightly in the rates of recalculation and519 buffer filtering. The strictest MPD, MPD=1,520 results in the leanest buffer, but its recalcula-521 tion rate is slightly higher than the models with522 MPD=2 and MPD4.523\nThese results show that the MPD serves as a524 way to tune the strictness of UARF\u2019s buffer fil-525 tering mechanism. It would be an area of future526 research to find the optimal way to tune this pa-527 rameter automatically throughout training such528 as to best balance recalculation rate and replay529 buffer filtering.530\nF HYPERPARAMETERS531\nTable 2 shows the hyper parameters used to train UARF. Look-ahead refers to the number of steps532 ahead BICHO and UARF are using to asses the quality of the imagined trajectories. \u03b2 controls the533 sensitivity of BICHO and UARF to inform whether a trajectory is still valid or not. \u201dNew Data534 Train Threshold\u201d refers to the amount of fresh data that must be added to the replay buffer before535 the UARF algorithm triggers the training of the dynamics model.536\nCartpole Pusher Reacher Masspoint Look-Ahead 10 10 10 10 \u03b2 0.005 0.005 0.005 0.5 New Data Threshold 1% 1% 1% 1% Training episodes 100 100 10 30/task CEM population 400 500 400 400 CEM # elites 40 50 40 40 CEM # iterations 5 5 5 5 CEM \u03b1 0.1 0.1 0.1 0.1 MPD 10 10 10 1\nTable 2: Hyperparameters used for UARF implementation." 
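For completeness, the sketch below shows a CEM action-sequence optimizer of the kind configured by Table 2 (population, number of elites, iterations, and α). `evaluate_returns` stands in for the particle-based return estimate of Algorithm 3 and is a hypothetical placeholder, so this is an illustrative sketch rather than the implementation used in this paper.

```python
import numpy as np

def cem_plan(evaluate_returns, horizon, act_dim,
             population=400, n_elites=40, n_iters=5, alpha=0.1,
             act_low=-1.0, act_high=1.0):
    """Cross-entropy method over action sequences of shape (horizon, act_dim)."""
    mean = np.zeros((horizon, act_dim))
    std = np.ones((horizon, act_dim))
    for _ in range(n_iters):
        samples = np.clip(mean + std * np.random.randn(population, horizon, act_dim),
                          act_low, act_high)
        returns = evaluate_returns(samples)              # shape: (population,)
        elites = samples[np.argsort(returns)[-n_elites:]]
        # Smoothly move the sampling distribution toward the elite set.
        mean = alpha * mean + (1 - alpha) * elites.mean(axis=0)
        std = alpha * std + (1 - alpha) * elites.std(axis=0)
    return mean   # a*_{t:t+H}; only the first action is executed in the environment
```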
} ], "year": 2022, "abstractText": "Model-based reinforcement learning (MBRL) applies a single-shot dynamics 1 model to imagined actions to select those with best expected outcome. The dy2 namics model is an unfaithful representation of the environment physics, and its 3 capacity to predict the outcome of a future action varies as it is trained iteratively. 4 An experience replay buffer collects the outcomes of all actions executed in the 5 environment and is used to iteratively train the dynamics model. With growing 6 experience, it is expected that the model becomes more accurate at predicting the 7 outcome and expected reward of imagined actions. However, training times and 8 memory requirements drastically increase with the growing collection of experi9 ences. Indeed, it would be preferable to retain only those experiences that could 10 not be anticipated by the model while interacting with the environment. We argue 11 that doing so results in a lean replay buffer with diverse experiences that corre12 spond directly to the model\u2019s predictive weaknesses at a given point in time. 13 We propose strategies for: i) determining reliable predictions of the dynamics 14 model with respect to the imagined actions, ii) retaining only the unimaginable 15 experiences in the replay buffer, and iii) training further only when sufficient novel 16 experience has been acquired. We show that these contributions lead to lower 17 training times, drastic reduction of the replay buffer size, fewer updates to the 18 dynamics model and reduction of catastrophic forgetting. All of which enable the 19 effective implementation of continual-learning agents using MBRL. 20", "creator": "LaTeX with hyperref" }, "output": [ [ "1. \"The paper needs to flow better. It starts with MBRL (which is an important problem to study), then jumps to Replay Buffer (RB) and Continual Learning (CL). The transition from MBRL to RB and CL needs to be clarified. Are RB-related issues the only challenges in MBRL, or why are those issues more important? Similarly, why should one care about CL in the context of MBRL? The authors need to motivate these questions better.\"", "2. \"Related work needs references to approaches that select what examples are retained/sampled from the replay buffer in the context of CL methods. Given that the paper's primary focus is on reducing the size of the replay buffer and determining what examples/transitions to store, referencing such works is helpful.\"", "3. \"Typos (like MRBL in line 192).\"", "4. \"Need to include details like the number of seeds/trials for experiments.\"", "5. \"Several phrases are not explained, e.g., 'complementary experiences' in line 256.\"", "6. \"Several details are missing or glossed over. E.g., in line 254, 'we evaluate the proposed method in benchmarks environments for higher number of episodes than in Chua et al. (2018)' - why more episodes? Or line 270, 'With similar training scenarios, Remonda et al. (2021) trained CP for 30 episodes, PU and RE for 150. Instead, we trained each for 100 episodes.' - > why change the number of episodes?\"", "7. \"Several hyperparameters are introduced, but the corresponding ablations/analysis are missing.\"", "8. \"Reporting wall-clock time can be misleading as the wall-clock time depends on the load on the system when the time was recorded. A better metric to report is the number of floating-point operations.\"", "9. \"It is not obvious to me what the difference is between BICHO and UARF.\"", "10. 
\"In line 298, the paper notes, 'It takes longer to update the model as the replay buffer grows linearly.' Is this because the system is trained on the entire replay buffer at every update step? Did the authors consider the variant where a fixed number of data points (respective of the size of the replay buffer) is used for an update?\"", "11. \"In line 263, the paper says, 'replay buffer size (which is free of any artificially-imposed limits)' - it is unclear if the baseline approaches need the unlimited buffer size. Maybe a small buffer size (equal to the buffer size used by the proposed method) is sufficient for solving the task, and the extra examples are unnecessary. My suspicion grows stronger when I look at the first column of figure 1, where the models often reach quite close to the convergence performance in a few episodes. Still, the training seems to have continued to inflate the size of the replay buffer for the baseline methods. I would also like to see an ablation where the size of the replay buffer (for the baselines) is fixed, and older entries are thrown away in FIFO order as new entries become available.\"", "12. \"Regarding 'time per episode,' I understand that the baselines would take longer because the replay buffers are larger (read 2.5 for a query on that), but this effect should not kick in when the number of episodes is very small. e.g., in Figure 1, for 0 episodes, I expect all the methods to take the same amount of time (maybe the proposed method takes a bit longer due to the computation of some variables). Still, the PETS baseline is taking much longer, even with 0 episodes. Why is that the case? Isn't the proposed method using the PETS model as the underlying MBRL method?\"", "13. \"In Section 6.2, the tasks considered are basically the same line-following task. Given that the agent has access to the distance from the closest point on the path, there is not much distinction between the tasks. It would not be surprising that an agent that (relatively) easily generalize across different task instances (presented as different tasks).\"", "14. \"The previous question (2.6) about the replay buffer size also applies here. It must be clarified if the baseline needs large buffers to solve the task.\"", "15. \"Generally, the paper needs to consider more complex tasks (with longer episode lengths) if the claims are about improving performance in the lifelong learning setup.\"" ], [ "1. **Novelty. The paper is mostly combing some intuitive designs together to form an algorithm. I did not see the exactly same algorithm before. But the ideas behind the algorithm are already studied.**", "2. **The paper is mostly empirical, and the algorithmic design is mostly based on intuition. Though it is fine not to have theoretical support, each design choice should be well-justified. However, many design choices in the paper could be problematically intuitively.**", "3. **First, using reward distribution to judge whether a trajectory is novel or not is not well-justified. The reward distribution match may not imply anything about the underlying state-action-state transitions. I cannot say that such a reward-based measure does not work at all, but the paper does not provide any insight into why it works.**", "4. **Second, it is hard to believe why the method can avoid catastrophic forgetting. 
Intuitively, if you discard some experiences which have been predicted accurately, the model can still forget those experiences as the sampling distribution shifts (because the policy is changing).**", "5. **Third, the authors use trajectory optimization and CEM in the MBRL method. There are large amounts of MBRL algorithms. It is better to provide some reasons why the particular method is chosen.**", "6. **Missing a large body of related works.**", "7. **Some simple and theoretically sound memory-saving methods, such as reservoir sampling (I think this should be a simple baseline and it is easy to implement)**", "8. **A body of work saving memory by using theoretically justified approaches in MBRL (e.g. organizing experiences: a deeper look at replay mechanisms for sample-based planning \u2026 by Pan et al.)**", "9. **Decision/value-aware robust model-based RL by Amir-massoud et al.**", "10. **There are also papers studying/using techniques to avoid/reduce compounding errors in long-time horizon predictions.**", "11. **It is also good to survey papers about avoiding catastrophic forgetting in a supervised learning setting.**", "12. **The empirical results should at least justify the key claims of the paper:**" ], [ "1. \"I feel the particular implementation is not very well motivated and the empirical evaluation is not very extensive.\"", "2. \"The notion of model uncertainty used is a little odd to me and doesn't come across as well-motivated, particularly for deciding whether to include transitions in the replay buffer.\"", "3. \"Certain details of the algorithm are not clear, for example, 'maximum prediction distance' is mentioned as a hyperparameter in line 238, but I can't see it in Algorithm 2 so I had difficulty understanding precisely how it is used.\"", "4. \"I am also unclear on whether the reward is learned along with the dynamics or provided to the planner explicitly as I couldn't find any mention of reward learning in the paper.\"", "5. \"The empirical results are not very convincing.\"", "6. \"The obvious baseline of using a fixed-size replay buffer and dropping the oldest item is not included so it's not clearly demonstrated that the particular method used is beneficial.\"", "7. \"Similarly, it is shown that only replanning when the model is found to be 'unreliable' will save computation without sacrificing performance, but this is not compared to simply replanning after a fixed number of steps or any other simple strategy so one cannot say much about the effectiveness of the particular method suggested.\"", "8. \"I didn't find the continual learning (task switching) experiments really added much that wasn't already demonstrated in the single-task experiments.\"", "9. \"Another issue is that I believe all experiments are run with a single random seed (please correct me if I'm wrong).\"", "10. \"It does make it unclear whether the performance benefit compared to a fixed-size buffer in Table 1 is a real effect or just noise. At least this experiment should be done with more random seeds and with confidence bounds.\"", "11. \"Another, more minor, issue is the exposition of model-based RL which is currently unclear and misleading.\"", "12. \"However, this paper makes a number of statements about MBRL that are really about a specific subset of approaches.\"", "13. \"In the abstract and introduction: what is a 'single-shot' dynamics model?\"", "14. \"line 63: MFRL acronym is never actually defined.\"", "15. \"Line 137: f^=(st,at) should probably just be f^(st,at)\"", "16. 
\"Line 220: Is 'BICHO' an acronym? I can't see that it is ever defined.\"", "17. \"Line 220: 'BICHO will essentially return True as long as the reward projected in the future does not differ significantly with respect to the imagined future reward...', I think 'True' here should be False since BICHO returns true when the model is unreliable.\"", "18. \"Line 107: Backwards quotation marks on 'teacher', also elsewhere.\"", "19. \"Line 341: 'Figure 3 shows episode reward, wall-time, buffer size...', I don't think wall time is actually included in Figure 3.\"", "20. \"Algorithm 2 (line 17): I think there may be a typo here, as it appears to be comparing rewards with different time indices.\"" ], [ "1. \"The paper is challenging to follow.\"", "2. \"It is hard to capture the main question of interest of this paper at the first pass.\"", "3. \"The connection to continual learning feels unnatural.\"", "4. \"Section 5 and Algorithm 2 are hard to understand.\"", "5. \"The authors introduce a lot of notations on-the-fly.\"", "6. \"I would suggest the authors to define all these notations in clearly all at once.\"", "7. \"I would also suggest the authors to consider replacing the incomprehensible symbols like \u2217 and \u2032 with easier-to-interpret ones.\"", "8. \"It will be good to provide a clear description of BICHO.\"", "9. \"In line 289, it says 'BICHO uses functionality to avoid unnecessary computation'. This is rather confusing. What functionality? How does it avoid unnecessary computation?\"", "10. \"Given that BICHO is one of the three methods evaluated in the experiments, I think it is worth spending some space on a clear description of BICHO.\"", "11. \"The proposed method only demonstrates weak evidence in improving the agent's performance on the task in the experiment in Section 6.3.\"", "12. \"This paper would benefit from a more in-depth evaluation of the proposed method on task performance.\"" ] ], "review_num": 4, "item_num": [ 15, 12, 20, 12 ] }