We propose KDPE, a Kernel Density Estimation-based strategy that filters out potentially harmful trajectories output by Diffusion Policy while keeping test-time computational overhead low. For Kernel Density Estimation, we propose a manifold-aware kernel that models a probability density function over actions composed of end-effector Cartesian position, orientation, and gripper state.
KDPE overall achieves better performance than Diffusion Policy on simulated single-arm RoboMimic and MimicGen tasks, and on three real robot experiments: PickPlush, a tabletop grasping task; CubeSort, a multimodal pick-and-place task; and CoffeeMaking, a task that requires long-horizon capabilities and precise execution.
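As an illustration of the filtering idea described above, the following is a minimal sketch of Kernel Density Estimation over a batch of action trajectories sampled from a policy, using a product kernel on position, orientation, and gripper state. The function names, bandwidths, kernel form, and leave-one-out scoring are illustrative assumptions and not the exact KDPE kernel or selection rule.

```python
import numpy as np

def pairwise_action_kernel(a, b, h_pos=0.05, h_rot=0.2, h_grip=0.5):
    """Product kernel between two actions (hypothetical form, not the paper's exact kernel).

    Each action is [x, y, z, qw, qx, qy, qz, gripper] with a unit-norm quaternion."""
    # Gaussian kernel on Cartesian position.
    k_pos = np.exp(-np.sum((a[:3] - b[:3]) ** 2) / (2 * h_pos ** 2))
    # Angular kernel on orientation; |dot| handles the quaternion double cover (q and -q).
    dot = np.clip(abs(np.dot(a[3:7], b[3:7])), 0.0, 1.0)
    geo = 2.0 * np.arccos(dot)                      # geodesic distance on SO(3)
    k_rot = np.exp(-geo ** 2 / (2 * h_rot ** 2))
    # Gaussian kernel on the scalar gripper command.
    k_grip = np.exp(-(a[7] - b[7]) ** 2 / (2 * h_grip ** 2))
    return k_pos * k_rot * k_grip

def kde_scores(trajectories):
    """Score each candidate by its mean leave-one-out KDE density under the sampled batch."""
    n, t, _ = trajectories.shape
    scores = np.zeros(n)
    for i in range(n):
        step_densities = []
        for step in range(t):
            k = [pairwise_action_kernel(trajectories[i, step], trajectories[j, step])
                 for j in range(n) if j != i]
            step_densities.append(np.mean(k))
        scores[i] = np.mean(step_densities)
    return scores

def filter_trajectories(trajectories, keep_ratio=0.5):
    """Drop the lowest-density (potentially harmful) candidates, keep the rest."""
    scores = kde_scores(trajectories)
    keep = np.argsort(scores)[::-1][: max(1, int(keep_ratio * len(scores)))]
    return trajectories[keep], scores

if __name__ == "__main__":
    # Synthetic batch: 16 candidate trajectories of 8 steps each.
    rng = np.random.default_rng(0)
    quats = rng.normal(size=(16, 8, 4))
    quats /= np.linalg.norm(quats, axis=-1, keepdims=True)
    batch = np.concatenate([rng.normal(size=(16, 8, 3)) * 0.05,   # positions
                            quats,                                 # orientations
                            rng.uniform(0, 1, size=(16, 8, 1))],   # gripper state
                           axis=-1)
    kept, scores = filter_trajectories(batch)
    print(kept.shape, scores.round(3))
```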
Simulation rollouts: Lift, Can, Square, ToolHang, Coffee, Stack, Assembly.
Real robot rollouts. KDPE: PickPlush 96%, PickSponge 90%, CubeSort 44%, CoffeeMaking 70%. Diffusion Policy: PickPlush 90%, PickSponge 88%, CubeSort 41%, CoffeeMaking 60%.
Our trajectory visualizer is a tool designed to analyze the distribution of trajectories sampled by generative robotic policies. It helps us understand a policy's behavior by revealing patterns in the sampled trajectories, identifying potential failure cases, and quantifying the number of outliers. By visualizing hundreds of trajectories simultaneously, we can better assess the policy's consistency and robustness across different scenarios.
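The sketch below shows one way such a visualization could be built; it is not the actual tool. It overlays many sampled end-effector paths and highlights low-density samples as outliers, assuming per-trajectory density scores (e.g., from a KDE) are available; the function name, threshold, and synthetic data are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_trajectory_distribution(trajectories, scores=None, outlier_quantile=0.1):
    """Overlay sampled end-effector paths; flag low-density samples as outliers.

    trajectories: (N, T, 3) Cartesian positions. scores: optional per-trajectory
    density scores (lower score = more likely outlier)."""
    if scores is None:
        outlier_mask = np.zeros(len(trajectories), dtype=bool)
    else:
        outlier_mask = np.asarray(scores) <= np.quantile(scores, outlier_quantile)
    fig = plt.figure(figsize=(6, 6))
    ax = fig.add_subplot(projection="3d")
    for traj, is_outlier in zip(trajectories, outlier_mask):
        ax.plot(traj[:, 0], traj[:, 1], traj[:, 2],
                color="crimson" if is_outlier else "steelblue",
                alpha=0.9 if is_outlier else 0.25, linewidth=1.0)
    ax.set_xlabel("x [m]"); ax.set_ylabel("y [m]"); ax.set_zlabel("z [m]")
    ax.set_title("Sampled end-effector trajectories (red = low-density outliers)")
    plt.show()

if __name__ == "__main__":
    # Synthetic example: 100 samples around a nominal reach, with a few perturbed outliers.
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 32)[None, :, None]
    nominal = np.concatenate([0.3 * t, 0.2 * t ** 2, 0.1 + 0.2 * t], axis=-1)
    samples = nominal + rng.normal(scale=0.01, size=(100, 32, 3))
    samples[:5] += rng.normal(scale=0.08, size=(5, 1, 3))          # inject outliers
    # Crude density proxy: negative mean distance to the batch mean trajectory.
    spread = -np.mean(np.linalg.norm(samples - samples.mean(0), axis=-1), axis=-1)
    plot_trajectory_distribution(samples, scores=spread)
```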