Reza8848's picture
Track large files with Git LFS
837b615
{
"ID": "1NAzMofMnWl",
"Title": "DaxBench: Benchmarking Deformable Object Manipulation with Differentiable Physics",
"Keywords": "deformable object manipulation, differentiable physics, benchmark",
"URL": "https://openreview.net/forum?id=1NAzMofMnWl",
"paper_draft_url": "/references/pdf?id=BlZ6aI9Rs",
"Conferece": "ICLR_2023",
"track": "Infrastructure (eg, datasets, competitions, implementations, libraries)",
"acceptance": "Accept: notable-top-5%",
"review_scores": "[['3', '8', '3'], ['3', '8', '3'], ['3', '8', '4']]",
"input": {
"source": "CRF",
"title": "DaxBench: Benchmarking Deformable Object Manipulation with Differentiable Physics",
"authors": [
"DIFFERENTIABLE PHYSICS"
],
"emails": [],
"sections": [
{
"heading": "1 INTRODUCTION",
"text": "Deformable object manipulation (DOM) is critical to various applications, ranging from household scenarios (Maitin-Shepard et al., 2010; Miller et al., 2011; Ma et al., 2022) to industrial applications (Miller et al., 2012; Zhu et al., 2022). Built upon deformable object simulators, DOM benchmarks have provided a platform for algorithm development and prototyping (Lin et al., 2020; Ma et al., 2022). However, given the extremely high-dimensional state space and action space on a DOM task, learning a generalizable and effective policy remains challenging.\nIn contrast to standard non-differentiable simulators, differentiable physics simulation constructs physics rules as differentiable computational graphs, which allows direct optimization of a control policy with analytical environment gradients (Freeman et al., 2021; Hu et al., 2020). Because of their particle-based or mesh-based representations, deformable objects can be naturally connected with differentiable simulations. By directly propagating the gradients from optimization objectives, e.g., cumulative rewards, to a learning policy through environment dynamics, we can use the groundtruth simulation gradients to reason about the dynamics and improve sample efficiency during policy learning (Chen et al., 2022; Hu et al., 2020; Huang et al., 2021; Xu et al., 2022; Heiden et al., 2021), which is favorable in the context of DOM. However, most of the existing differentiable DOM benchmarks focus on a single task, and provide no evidence to the generalization ability of the resulting task-specific algorithms.\nIn this work, we present DaXBench, a differentiable and comprehensive high-performance benchmark for DOM. DaXBench relies on a Deformable object simulator, DaX, which connects the recent advances of deformable object simulation algorithms (Xu et al., 2022; Chen et al., 2022) to the high-performance computational framework, JAX (Bradbury et al., 2018). In particular, powered by JAX, DaX enjoys the auto-differentiation and parallelization to multiple devices. From the task setup perspective, in contrast to single-task-based benchmarks, DaXBench covers a wide range of object types, including rope, cloth, liquid, and elasto-plastic object manipulations. Considering object types, a wide range of tasks with varying difficulty levels and well-defined reward functions are\npresented, such as liquid pouring, cloth folding, and rope wiping. Specifically, we design tasks with high-level macro actions and low-level control-based action spaces to fully examine the performance of different algorithms. All task environments are wrapped with OpenAI Gym APIs (Brockman et al., 2016) and Python interfaces, such that DaXBench can be seamlessly connected with a DOM algorithm for fast and easy development.\nIn addition, we benchmark competitive DOM methods covering all algorithmic paradigms, namely sampling-based planning, Imitation Learning (IL), and Reinforcement Learning (RL). Specifically, for planning methods, we consider CrossEntropyMethod-MPC (CEM-MPC) (Richards, 2005), differentiable MPC (Hu et al., 2020), and a combination of the two, Diff-CEM-MPC. For RL domains, we consider standard Proximal Policy Optimization (PPO) (Schulman et al., 2017) with nondifferentiable dynamics, Short-Horizon Actor-Critic (SHAC) (Xu et al., 2022), and Analytic Policy Gradients (APG) (Freeman et al., 2021). For IL, Transporter networks (Seita et al., 2020) with nondifferentiable dynamics and Imitation Learning via Differentiable physics (ILD) (Chen et al., 2022) are compared.\nIn our experiments, by comparing the algorithms with and without analytic gradients side-by-side on each task, we provide deeper understanding of the benefits and challenges to analytic policy optimization in DOM. We discuss the limitations of existing methods and potential future directions with DaXBench. In addition, we demonstrate that the dynamics model in DaXBenchis sufficiently realistic that allows direct sim-to-real transfer to a real robot on rope manipulation tasks."
},
{
"heading": "2 RELATED WORKS",
"text": ""
},
{
"heading": "2.1 DEFORMABLE OBJECT SIMULATORS",
"text": "The progress in Deformable Object Manipulation (DOM) partially stems from the recent emergence of DOM simulators in the past years. These benchmarks and simulators have significantly advanced the state-of-the-art in modeling the complex dynamics of DOM and provide the firmament of the algorithmic innovations for the DOM methods. SoftGym (Lin et al., 2020) is the first DOM benchmark that models all liquid, fabric, and elastoplastic objects and introduces a wide range of standardized DOM tasks. However, SoftGym\u2019s simulator is non-differentiable and thus does not support methods based on differentiable physics. On the other hand, several differentiable DOM simulators emerge, such as ChainQueen (Hu et al., 2018b), Diff-PDE (Holl et al., 2020), PlasticineLab (Hu et al., 2020), DiSECt (Heiden et al., 2021), and DiffSim (Qiao et al., 2020). Each of these simulators specializes in modeling a single type of deformable object and supports a narrow range of object-specific tasks. For example, PlasticineLab (Hu et al., 2020) and DiSECt (Heiden et al., 2021) model only the elastoplastic objects, including tasks such as sculpting, rolling, and cutting the deformable objects. Since the dynamics of each deformable object are vastly different, the physic engine specialized in modeling one type of deformable objects cannot be easily extended to another. DaXBench bridges this gap by proposing a set of differentiable benchmarks that cover a wide range of deformable objects and the relevant tasks. We provide a fairground for comparing and developing all types of DOM methods, especially the differentiable ones."
},
{
"heading": "2.2 DEFORMABLE OBJECT MANIPULATION ALGORITHMS",
"text": "Deformable Object Manipulation (DOM) is more challenging than its rigid body counterpart due to the infinite dimensionality of the state space and the corresponding complex dynamics. These invalidate the direct application of the existing methods on rigid object manipulation. DOM methods need to explicitly handle the enormous DoFs in the state space and the complex dynamics. Despite the challenges, many interesting ideas have been proposed for DOM. Depending on the algorithmic paradigms, we categorize the DOM methods into three groups: planning, Imitation Learning (IL), and Reinforcement Learning (RL). Our discussion below focuses on the general DOM methods for a standardized set of tasks. We refer to (Sanchez et al., 2018; Khalil & Payeur, 2010) for a more detailed survey on prior methods for robot manipulation of deformable objects.\nReinforcement Learning algorithms using differentiable physics such as SHAC (Xu et al., 2022) and APG (Freeman et al., 2021) are also explored. These methods use the ground-truth gradients on the dynamics to optimize the local actions directly, thereby bypassing the massive sampling for the policy gradient estimation.\nImitation Learning is another important learning paradigm for DOM. To overcome the \u201ccurse of dimensionality\u201d, the existing works such as (Seita et al., 2020; Sundaresan et al., 2020; Ganapathi et al., 2020) overcome the enormous state/action space by learning the low dimensional latent space and simplifying the primitive actions to only pick-and-place. ILD (Chen et al., 2022) is another line of IL method that utilizes the differentiable dynamics to match the entire trajectory all at once; therefore, it reduces the state coverage requirement by the prior works.\nMotion planning for DOM has been explored in (Lippi et al., 2020; Nair & Finn, 2019; Yan et al., 2020). These methods overcome the enormous DoFs of the original state space by learning a lowdimensional latent state representation and a corresponding dynamic model for planning. The success of these planning methods critically depends on the dimensionality and quality of the learned latent space and the dynamic model, which in itself is challenging.\nWe believe that developing a holistic benchmark with its differentiable simulator can provide a fairground to compare with the existing methods. This allows the researchers to better understand the progress and limitations of current DOM methods and offers further insights to advance the progress of DOM methods from all paradigms."
},
{
"heading": "3 DAXBENCH: DEFORMABLE-AND-JAX BENCHMARK",
"text": "In this work, we introduce DaXBench, a holistic differentiable DOM benchmark that covers a wide range of deformable objects and manipulation tasks. To benchmark the DOM methods from all paradigms, especially the differentiable ones, we implemented our own differentiable simulator, Deformable-and-JAX (DaX), which is highly efficient and parallelizable. In this section, we will first briefly introduce our simulator DaX. Then we shall focus on introducing the benchmark tasks we implemented for DaXBench.\n3.1 OUR SIMULATOR: DAX\nWe introduce DaX, a differentiable simulator that specializes in modeling various types of deformable object manipulation with high parallelization. Comparison with other existing simulators has been shown in Table 1. DaX is implemented using JAX (Bradbury et al., 2018), which supports highly paralleled computation with great automatic differentiation directly optimized on the GPU kernel level. In addition, DaX can connect seamlessly with the plethora of learning algorithms implemented using JAX, thus enabling the entire computation graph to be highly parallelized end-to-end.\nState Representations and Dynamic Models DaX adopts different state representations and dynamic systems to model the various types of deformable objects with distinct underlying deformation and dynamics. For liquid and rope that undergo severe deforma-\ntion, DaX chooses to represent their states using particles and use Material Point Method (MPM) (Sulsky et al., 1994; Hu et al., 2018a) to model their dynamics on the particle level. On the other hand, for the cloth that only undergoes limited deformation, DaX models its state using mesh and\nuses the less expensive mass-spring system to model its dynamics. With these modeling choices, DaX can strike a balance between the modeling capacity and its complexity. We refer the readers to the appendix for more details.\nDesigned for Customization DaX allows full customization of new tasks based on the existing framework. Objects with arbitrary shapes and different rigidness can be added to the simulation. Primitive actions can also be customized.\nSaving Memory DaX uses two tricks to optimize the efficiency and memory consumption of the end-to-end gradient computation, namely lazy dynamic update and re-evaluation of the gradients. To optimize MPM\u2019s time and memory consumption for a single timestep, DaX lazily updates the dynamic of a small region affected by the manipulation at that step. In addition, DaX only stores the values of a few states at the forward pass; during the backward propagation, DaX re-computes segments of the forward values starting from every stored state in the reverse order sequentially for the overall gradient. We refer the reader to the appendix for details."
},
{
"heading": "3.2 OUR BENCHMARK TASKS",
"text": "DaXBench consists of a wide variety of Deformable Object Manipulation tasks, including manipulating liquid, rope, cloth, and elastoplastic objects, as illustrated in Figure 2. In addition to the tasks presented in the previous work, we propose a few new challenging long horizon tasks that require finer-grained control; these new tasks can better benchmark the scalability of the DOM methods. The horizon of the task is mainly determined by how many steps it takes human experts to finish the task. DaXBench, therefore, offers a common ground to compare the existing methods and future work. In this section, we detail the tasks supported by DaXBench.\nLong Horizon Tasks The policy complexity grows exponentially with the planning horizon, known as \u201ccurse of dimensionality\u201d; this is exacerbated by the large state space of deformable objects. The existing DOM benchmark tasks have relatively short horizons. For example, the cloth folding and unfolding tasks in SoftGym consist of 1-3 steps. DaXBench provides tasks with longer horizons.\nTasks without Macro-Actions A macro-action can be seen as executing raw actions for multiple steps. For example, we can define a pick-and-place or push macro-action as a 6-dimensional real vector (x, y, z, x\u2032, y\u2032, z\u2032) representing the start (x, y, z) and end (x\u2032, y\u2032, z\u2032) positions of the gripper, and the raw actions are Cartesian velocities of the gripper. Macro-actions reduce the dimension of the action space and the effective horizon, and hence increase the scalability requirement on the DOM methods. However, macro-actions might be too coarse for some tasks. Taking whipping a rope to hit a target location as an example, the momentum needs to be adjusted at a relatively high frequency throughout the execution. DaXBench includes tasks that have no suitable macro-actions."
},
{
"heading": "3.2.1 LIQUID MANIPULATION",
"text": "Pour-Water Pour a bowl of water to the target bowl as quick as possible. The agent directly controls the velocity and rotation (in Cartesian space) of the gripper that holds the bowl. In particular, the action space of the agent is a 4-dimensional vector a = (vx, vy, vz, w), for which (vx, vy, vz) represents the linear velocity of the gripper, and w denotes the angular velocity around the wrist.\nPour-Soup Pouring a bowl of soup with various solid ingredients to the target bowl as fast as possible. The actions are the same as those of Pour-Water. This task is more challenging than Pour-Water due to the additional interactions between the liquid and the solid ingredients. In particular, the\nagent needs to promptly react when a solid ingredient is transferred, so as to 1) adjust for the sudden decrease in load on the gripper and 2) control the momentum of the ingredient when hitting the soup surface to minimize spilling.\nNeither tasks have suitable high-level macro-actions and the task horizons are 100 steps."
},
{
"heading": "3.2.2 ROPE MANIPULATION",
"text": "Push-Rope Push the rope to the pre-specified configuration as quickly as possible. We randomly initialize the configuration of the rope. The agent uses a single gripper to execute the push macroactions. The horizon for this task is 6 steps.\nWhip-Rope Whip the rope into a target configuration. The agent holds one end of the rope and controls the gripper velocity to whip the rope. In particular, the action space of this task is a 3- dimensional vector a = (vx, vy, vz) that denotes the velocity of the gripper in the Cartesian space. This task cannot be completed by the abstract macro-actions since the momentum and the deformation of the rope has to be adjusted at a relatively high frequency. The task horizon is 70 steps."
},
{
"heading": "3.2.3 CLOTH MANIPULATION",
"text": "Fold-Cloth Fold a piece of flattened cloth and move it to a target location. This task has two variants: the easy version requires the agent to execute 1 fold and the difficult version 3 folds. For both variants, the task begins with the cloth lying flat on the table at an arbitrary position. The agent executes pick-and-place macro-actions. The task horizon is 3 and 4 for the easy and difficult versions respectively.\nFold-T-shirt Fold a piece of T-shirt to a target location. The task begins with the T-shirt lying flat on the table at an arbitrary position. The agent executes pick-and-place macro-actions. The task horizon is 4.\nUnfold-Cloth Flatten a piece of folded cloth to a target location as fast as possible. This task also has two variants that differ in the initial configurations of the folded cloth: the cloth in the easy version has been folded once and that in the difficult version has been folded three times. This task has the same action space as the Fold-Cloth task. The task horizon is 10 steps for both variants.\nAll of these tasks share similar APIs to the OpenAI Gym\u2019s standard environments. We provide a general template to define tasks in our simulator. This not only better organizes the existing task environments, but also allows the users to customize the existing task environments to better suit their research requirements. The variables and constants defined in the template are easily interpretable quantities that correspond to real physical semantics. This allows the users to better understand the environment, which can also help with the development and debugging process."
},
{
"heading": "4 EXPERIMENTS",
"text": "In this section, we aim to answer the following two questions: 1) Are the existing representative methods capable of solving the tasks in DaXBench? 2) Within each paradigm, will the differentiable simulator help in completing the tasks?\nTo investigate these two questions, we benchmark eight representative methods of deformable object manipulation, covering planning, imitation learning, and reinforcement learning. Each paradigm has different assumptions and requirements for input knowledge. We summarize the baseline methods and the relevant information in Table 2. In this section, we first introduce the general experiment setups for all methods. Then, we discuss the details of the evaluated methods and their performance analysis for each paradigm. Next, we summarize the insights and discuss the directions for future DOM methods. Finally, we demonstrate that our simulator DaX only incurs a small sim-2-real gap by conducting a real robot experiment."
},
{
"heading": "4.1 EXPERIMENTAL SETUP",
"text": "We report the performance evaluated under our ground-truth reward for all tasks for each method. We report the mean and variance of the performance for each task over 5 different seeds.\nReward function For each task, the goal is specified by the desired final positions of the set of particles, g, representing the deformable object. In this setup, the reward can be intuitively defined by how well the current object particles match with those of the goal. Hence, we define the groundtruth reward as rgt(s, a) = exp(\u2212\u03bbD(s\u2032, g)), where s\u2032 is the next state resulted by a at the current state s, and D(s, g) is a non-negative distance measure between the positions of object\u2019s particles s and those of the goal g.\nHowever, using the ground-truth reward along is inefficient for learning. The robot\u2019s end-effector can only change the current object pose by contacting it. Therefore, to most efficiently manipulate the object to the desired configuration, we add an auxiliary reward to encourage the robot to contact the object. We define the auxiliary reward as raux(s, a) = exp(\u2212L2(s, a)), where L2(s, a) measures the L2 distance between the end-effector and the object.\nDuring training, we use sum of the ground-truth and auxiliary reward functions rgt+raux, and during evaluation we only use the ground-truth reward rgt."
},
{
"heading": "4.2 REINFORCEMENT LEARNING",
"text": "Methods. We benchmark PPO (Schulman et al., 2017), SHAC (Xu et al., 2022), and APG (Freeman et al., 2021) as the representative RL methods. PPO is a widely used model-free nondifferentiable RL method with competitive performance, while SHAC and APG are the two latest RL methods that utilize the differentiable simulator. The computation graph of SHAC is illustrated in Figure 4a. By comparing the performance of the DaXBench tasks across these three methods, we aim to further study the benefits and limitations of applying the differentiable physics to RL.\nOverall Performance. The learning curves of RL methods are shown in Figure 3. The performance difference between PPO and the differentiable RL methods shows opposite trends in the high-level macro-action-based and local-level control tasks. To our surprise, for most of the short-horizon macro-action-based tasks, PPO performs consistently better than the differentiable RL methods. This seems to contradict the experiment results of SHAC (Xu et al., 2022), which shows stronger performance than PPO in its own simulator on a wide range of context-rich MuJoCo tasks. How-\never, for long-horizon tasks with low-level controls, the differentiable RL methods, especially APG, outperform PPO by a large margin.\nMain Challenge for the differentiable-physics-based RL methods: Exploration. We argue that, on high-level macro-action-based tasks, the main limiting factor of the performance of differentiable-physics-based RL is the lack of exploration during training. The need to balance this trade-off is exaggerated by the enormous DoFs and the complex dynamics unique to the DOM tasks. Especially for the high-level macro-action-based tasks, their non-smooth/discontinuous/nonconvex optimization landscape necessitates using a good exploration strategy. However, the existing differentiable-physics-based RL methods rely exclusively on the simulation gradient to optimize the policy. They are not entropy regularized; hence, they may quickly collapse to the local optimal. Without explicit exploration, the differentiable-physics-based RL methods fail to reach the near-optimal state using the simulation gradients alone. Taking Fold-Cloth-3 as an example, if the randomly initialized policy never touches the cloth, the local simulation gradients cannot guide the agent to touch the cloth since the rewards and, consequently, the gradients for all not-in-contact actions are zero.\nDifferentiable-physics-based RL are sensitive to the Optimization Landscape. Differentiablephysics-based RL can largely improve the performance of some low-level control tasks if the optimization landscape is smooth and convex. Taking Whip-Rope as an example, the optimal low-level control sequence is a smooth motion trajectory that lies on a relatively smooth and well-behaved plane for the gradient-based methods. This explains why APG outperforms the non-differentiablephysics-based PPO by a large margin."
},
{
"heading": "4.3 IMITATION LEARNING",
"text": "Methods. We benchmark Transporter (Seita et al., 2020) and ILD (Chen et al., 2022) as the two representative Imitation Learning DOM methods. Transporter is a popular DOM method that is wellknown for its simple implementation and competitive performance. ILD is the latest differentiablephysics-based IL method; it utilizes the simulation gradients to reduce the required number of expert demonstrations and to improve the performance. Note that since Transporter abstracts its action space to pick-and-place actions to simplify the learning, it cannot be extended to the low-level control tasks. Our comparison for the IL methods will first focus on the high-level macro-actionbased tasks, then we will analyze the performance of ILD with the expert.\nOverall performance For the high-level macro-action-based tasks, ILD outperforms Transporter by a large margin for all tasks. For example, ILD triples the performance Transporter in Fold-Cloth-1 and doubles it in Fold-Cloth-3 and Unfold-Cloth-1. We highlight that this significant performance gain is achieved by using a much smaller number of expert demonstrations. We attribute this improvement to the additional information provided by the simulation gradients, where now ILD can reason over the ground-truth dynamics to bring the trajectory closer to the expert, at least locally. As illustrated in Figure 4, the gradients used to optimize each action comes from the globally optimal expert demonstrations 1) at the current step and 2) in all the steps after. In contrast, the gradients used in SHAC only count the reward at the current step. The differentiable RL faces the challenge of the exploding/vanishing gradients and lack of exploration. Differentiable IL alleviates these problems by having step-wise, globally optimal guidance from expert demonstrations.\nChallenge for ILD: Unbalanced State Representations. ILD does not perform well on tasks like Pour-Water and Pour-Soup with unbalanced state representations, for example, 1000 \u00d7 3 features of particles and 6 gripper states. The learned signals from the particles overwhelm the signals from the gripper states. Therefore, ILD cannot learn useful strategies with unbalanced learning signals. Softgym (Lin et al., 2020) also concurs with the results that policies with reduced state representation perform better than those with full state. A more representative representation of the states needs to be considered and we leave it for future study.\nIn conclusion, differentiable methods can optimize the policies efficiently if the optimization landscape is well-behaved. For the tasks that require tremendous explorations or with sparse rewards, gradient-based methods suffer severely from the local minima and gradient instability issues."
},
{
"heading": "4.4 PLANNING",
"text": "Methods. We benchmark the classic CEM-MPC (Richards, 2005) as a representative random shooting method that uses a gradient-free method to optimize the action sequences for the highest final reward. To study the effect of using differentiable physics, we implement two versions of differentiable MPC. The basic computation graph for both is illustrated in Figure 4c. In particular, both differentiable MPC baselines maximize the immediate reward of each action using differentiable physics. The difference lies in the initial action sequences: diff-MPC is initialized with an random action sequence, while diff-CEM-MPC with an action sequence optimized by CEM-MPC.\nOverall Performance. The diff-CEM-MPC performs consistently better than or on par with CEMMPC, while Diff-MPC fails to plan any reasonable control sequences for most of tasks.\nSimulation Gradients as Additional Optimization Signals. The vastly different performance between the two versions of the differentiable baselines brings us insights on how the differentiable physics helps in planning. Simulation gradients provide the additional signals in the direction to optimize the actions, thereby reducing the sample complexity. However, similar to any gradient-based optimization methods, differentiable planning methods are sensitive to the non-convex/non-smooth optimization landscape and face a major limitation of being stuck at the local optimal solution. DOM\u2019s enormous DoFs of DOM and complex dynamics exacerbate these challenges for the differentiable DOM planning methods.\nDifferentiable Planning needs Good Initialization. We attribute the poor performance of DiffMPC to its random control sequence initialization. In particular, this control sequence may significantly deviate from the optimal one over the non-smooth/non-convex landscape, such that the simulation gradients alone cannot bring the control sequence back to the optimal. In contrast, DiffCEM-MPC is initialized with the optimized control sequence by CEM-MPC, which relies on iterative sampling followed by elite selection to escape the local optimality. Hence, the control sequence fed into the differentiable step is already sufficiently close to the globally optimal solution. The simulation gradients help diff-CEM-MPC to optimize the action sequences further locally, which explains the performance gain.\nTo summarize, the differentiable simulator can further optimize the near-optimal control sequences locally with much low sample complexity. Yet, its performance critically relies on a good initialization and a locally smooth optimization landscape."
},
{
"heading": "4.5 INSIGHTS AND FUTURE WORKS",
"text": "The experiment results from the eight algorithms reveal a few shared insights. We believe these insights can inspire the researchers to improve and develop DOM methods from each community. Firstly, differentiable-physics-based methods fundamentally use the simulation gradients as additional signals to optimize the policy/control sequence. Therefore, these methods inherit the generic benefits of additional ground-truth gradients, such as faster convergence and more stable optimization under mild conditions. At the same time, the problems borne with the gradient-based methods are exacerbated in differentiable DOM methods. For example, the performance of the differentiable methods critically relies on good initialization and a well-behaved optimization landscape. Without these assumptions, additional effort is required to sufficiently explore the enormous state space and assist the simulation gradient-based optimization in escaping its local minima. We believe more research into the exploration strategy for differentiable DOM methods is necessary to further advance the progress in deformable object manipulation."
},
{
"heading": "4.6 SIM-2-REAL GAP",
"text": "The physics engine of DaX is built upon state-of-the-art simulation method. This ensures that the dynamics of our simulated tasks are of high fidelity and with a small sim2real gap. To testify to the correctness of our simulated dynamics to the real-world physics, we carry out a real robot experiment on a Kinova Gen 3 robot. We deploy CEM-MPC with DaX as the predictor model to a Push-Rope task, as shown in Figure 5. Our study finds that the resultant state trajectories are similar in simulation and in reality. This testifies that the tasks implemented based on our engine are of high fidelity, and the performance of the DOM methods in our simulator provides strong hints about their performance on the real robot. We refer the reader to the appendix for details."
},
{
"heading": "5 CONCLUSION",
"text": "We introduced DaXBench, a differentiable benchmark for various deformable object manipulation tasks. To our knowledge, DaXBench is the first benchmark that simulates liquid, rope, fabric, and elastoplastic materials while being differentiable and highly parallelizable. Our task coverage goes beyond the common DOM tasks; we innovate several novel tasks, such as WipeRope and PourSoup, to study the performance of the DOM methods on long-horizon tasks with low-level controls. Our rich task coverage provides a fairground to compare the existing DOM methods from all paradigms; further, our benchmark provides insights into how the differentiable-physics-based methods help in DOM and the caveats of using this new tool. We believe DaXBenchcan help assist with the algorithmic development of DOM methods, especially the differentiable-physics-based ones of all paradigms.\nETHICS STATEMENT\nDaXBench provides a benchmark for deformable object manipulation (DOM). We believe that DaXBench and its simulator DaX have no potential negative societal impacts. In addition, our work is on benchmarking the DOM methods in simulator, so we do not collect any data or conduct experiments with human subjects. In summary, we have read the ICLR Code of Ethics and ensured that our work conforms to them.\nREPRODUCIBILITY STATEMENT\nIn this paper, all experiments are averaged over 5 random seeds. We have included our source code as an easy-to-install package in the supplementary material."
},
{
"heading": "B SIM-2-REAL GAP",
"text": "To verify that the dynamics of our simulated tasks has high-fidelity and small Sim2Real gap, we carry out a real robot experiment. We deploy CEM-MPC to the PushRope task on a Kinova Gen3 robot. CEM-MPC uses our DaX simulator as the predictive model. Given a state, CEM-MPC plans the next best push action (x, y, z, x\u2032, y\u2032, z\u2032). The state is estimated from a point cloud image, and the Cartesian space push action is transformed to the joint space trajectory via an inverse kinematics module. Note we are not verifying the sim2real gap caused by the inaccuracy of state estimation or kinematics; we are interested in whether the dynamics of DaX can correctly guide the CEM-MPC to succeed in the task. The experiment is included in our supplementary video."
},
{
"heading": "C RL ALGORITHMS NUMERICAL EXPERIMENT RESULTS",
"text": "We report the numerical experiment performance for the RL algorithms in Table 4. This is to supplement the learning curve in the main text. This numerical results report the final performance of the learning curves in Figure 3."
}
],
"year": 2022,
"abstractText": "Deformable Object Manipulation (DOM) is of significant importance to both daily and industrial applications. Recent successes in differentiable physics simulators allow learning algorithms to train a policy with analytic gradients through environment dynamics, which significantly facilitates the development of DOM algorithms. However, existing DOM benchmarks are either single-object-based or non-differentiable. This leaves the questions of 1) how a task-specific algorithm performs on other tasks and 2) how a differentiable-physics-based algorithm compares with the non-differentiable ones in general. In this work, we present DaXBench, a differentiable DOM benchmark with a wide object and task coverage. DaXBench includes 9 challenging high-fidelity simulated tasks, covering rope, cloth, and liquid manipulation with various difficulty levels. To better understand the performance of general algorithms on different DOM tasks, we conduct comprehensive experiments over representative DOM methods, ranging from planning to imitation learning and reinforcement learning. In addition, we provide careful empirical studies of existing decision-making algorithms based on differentiable physics, and discuss their limitations, as well as potential future directions. The code and video can be found in the supplementary files.",
"creator": "LaTeX with hyperref"
},
"output": [
[
"1. I wonder how the mixture is handled in these demos: are they two-way coupled, one-way coupled, or any other coupling was used?",
"2. I personally did not follow the Lazy Dynamic Update part: MPM will require a grid for simulation. Does it mean the overall grid is 32632, or is the grid actually 128128128, but only a small region of it is activated? My personal feeling is this setting only applies to a small fraction of the proposed environments, is it?",
"3. My main reservation is the comparison at the framework level. JAX has a very similar position to taichi and Nvidia warp, which are more famous for physical simulation. And I would expect they have similar performance to JAX-implemented environments, is it? Is DaXBench distinguishable from them because of user-friendliness? Or better interface to deep learning modules? Or speed?"
],
[
"1. Since the research of differentiable physics is progressing fast, the authors might better illustrate and clarify its difference from related literature.",
"2. For example, PlasticineLab is another platform for differentiable soft body manipulation. Since both DaX (fluid, elastoplastic, and mixture) and PlasticineLab are using MPM as the governing dynamics, it is not apparent to me why PlasticineLab cannot support liquid and mixture (in Table 1) while DaX can.",
"3. Moreover, it is unclear to me how the solid ingredients are represented in Pour-Soup. Are they elastoplastic objects or purely rigid bodies? Is there a coupling issue between the fluid particles and the ingredients?",
"4. Furthermore, the Saving Memory strategy in Section 3.1 is the same as the checkpoint scheme in [1] (Section 4.2), where the back-propagation trades time for memory usage by recomputing the intermediate variables in each step."
],
[
"1. \"Some of the more recent DOM works are not referenced to, all of these works offer insights / simulator for DOM: ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation, Gan et al, Neurips 2021; ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation Bokui Shen, Zhenyu Jiang, Christopher Choy, Leonidas J. Guibas, Silvio Savarese, Anima Anandkumar, Yuke Zhu, RSS, 2022; VIRDO: Visio-tactile Implicit Representations of Deformable Objects., Youngsun Wi, Pete Florence, Andy Zeng, Nima Fazeli. ICRA 2022.\"",
"2. \"I'm not confident about the sim2real part. Saying the simulator has sim2real ability is a pretty big claim. However, only a demo video is provided without any systematic evaluation.\""
]
],
"review_num": 3,
"item_num": [
3,
4,
2
]
}