Shape Completion Enabled Robotic Grasping
Columbia University Robotics Group
This work provides an architecture to enable robotic grasp planning via shape completion. Shape completion is accomplished through the use of a 3D convolutional neural network (CNN). At runtime, a 2.5D pointcloud captured from a single point of view is fed into the CNN, which fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object. Runtime shape completion is very rapid because most of the computational costs of shape completion are borne during offline training. We explore how the quality of completions varies with several factors, including whether or not the object being completed existed in the training data and how many object models were used to train the network. The ability of the network to generalize to novel objects allows the system to roughly complete previously unseen objects at runtime, representing a potentially significant improvement over purely database-driven completion approaches. Finally, experimentation is done both in simulation and on actual robotic hardware to explore the relationship between completion quality and the utility of the completed mesh model for grasp planning.
Convolutional Neural Network Architecture
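The paper's exact layer configuration is not reproduced here, but the core operation the network applies to its 40³ occupancy-grid input is a 3D convolution. The sketch below is a minimal, loop-based illustration of that operation in plain numpy; the function name `conv3d` and the uniform 4³ kernel are assumptions for illustration, not the project's own code.

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3D cross-correlation of a single-channel volume
    with a single cubic kernel (a deliberately simple reference
    implementation, not an optimized one)."""
    k = kernel.shape[0]
    D, H, W = volume.shape
    out = np.zeros((D - k + 1, H - k + 1, W - k + 1), dtype=np.float32)
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(volume[z:z + k, y:y + k, x:x + k] * kernel)
    return out

# A 40x40x40 binary occupancy grid, as used for the network's input.
rng = np.random.default_rng(0)
grid = (rng.random((40, 40, 40)) > 0.5).astype(np.float32)

# One 4x4x4 averaging filter: each output voxel summarizes a local neighborhood.
feat = conv3d(grid, np.ones((4, 4, 4), dtype=np.float32) / 64.0)
print(feat.shape)  # valid convolution shrinks 40^3 to 37^3
```

A real network stacks many such filters with nonlinearities and pooling, and its final layer predicts an occupancy probability for every voxel of the 40³ output grid.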
Generation of training examples
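Each x,y training pair couples a partial, single-viewpoint capture of an object (x) with its full ground-truth voxel grid (y). One way to sketch that pairing, assuming voxel grids are already available, is to keep only the first occupied voxel along each viewing ray; the helper name `partial_view` and the choice of the +z axis as the viewing direction are illustrative assumptions, not the project's pipeline.

```python
import numpy as np

def partial_view(full):
    """Simulate a 2.5D single-view capture of a full occupancy grid by
    keeping, for every (x, y) ray, only the first occupied voxel along +z."""
    partial = np.zeros_like(full)
    hit = full.any(axis=2)          # rays that intersect the object at all
    first = full.argmax(axis=2)     # index of the first occupied voxel per ray
    ii, jj = np.nonzero(hit)
    partial[ii, jj, first[ii, jj]] = True
    return partial

# Ground truth y: a solid 20^3 cube inside a 40^3 grid.
y = np.zeros((40, 40, 40), dtype=bool)
y[10:30, 10:30, 10:30] = True

# Input x: only the surface visible from the camera side.
x = partial_view(y)
```

The visible face of the cube is 20×20 voxels, so x contains 400 occupied voxels while y contains 8,000; the network's job is to recover the remaining occupancy from the visible surface alone.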
In our Completion Examples page you'll find some shape completions generated by our technique using the feature preserving post processing, and the corresponding ground truth meshes. The page also shows grasps planned on the completed meshes.
Training Data
726 pointclouds for each of 590 objects, as well as the 428,340 corresponding aligned 40³ x,y training example pairs from the Grasp Database:
726 pointclouds for 18 objects, as well as the corresponding aligned 40³ x,y training example pairs from the YCB Dataset:
The combined grasp_database and ycb data as binvox files rather than .pcd files, making for a much smaller download:
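Binvox files store a voxel grid as a short ASCII header followed by run-length-encoded bytes (alternating value/count pairs). The reader below is a minimal sketch of that format, not the project's own loader; the function name `read_binvox` is an assumption, and the axis ordering noted in the comment follows the published binvox description.

```python
import io
import numpy as np

def read_binvox(f):
    """Parse a binvox stream into a boolean occupancy grid."""
    assert f.readline().strip().startswith(b"#binvox")
    dims = None
    while True:
        line = f.readline().strip()
        if line.startswith(b"dim"):
            dims = [int(v) for v in line.split()[1:]]
        elif line.startswith(b"data"):
            break
        # "translate" and "scale" header lines are skipped here.
    raw = np.frombuffer(f.read(), dtype=np.uint8)
    values, counts = raw[::2], raw[1::2]          # RLE: (value, count) byte pairs
    flat = np.repeat(values, counts).astype(bool)
    # binvox stores voxels with y varying fastest, then z, then x;
    # transpose to conventional (x, y, z) indexing.
    return flat.reshape(dims).transpose(0, 2, 1)

# A synthetic 4x4x4 binvox stream: 10 occupied voxels followed by 54 empty ones.
stream = io.BytesIO(
    b"#binvox 1\ndim 4 4 4\ntranslate 0 0 0\nscale 1\ndata\n"
    + bytes([1, 10, 0, 54])
)
grid = read_binvox(stream)
```

Because the RLE bytes encode one value per voxel, the file for a mostly-empty 40³ grid is far smaller than the corresponding .pcd pointcloud.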
Ground Truth Meshes
The ground truth meshes are not ours to distribute. To get them, please register at: