Shape Completion Enabled Robotic Grasping

Jacob Varley, Chad DeChant, Adam Richardson, Joaquín Ruales, and Peter Allen

Columbia University Robotics Group

Abstract

This work provides an architecture to enable robotic grasp planning via shape completion. Shape completion is accomplished through the use of a 3D convolutional neural network (CNN). At runtime, a 2.5D pointcloud captured from a single point of view is fed into the CNN, which fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object. Runtime shape completion is very rapid because most of the computational costs of shape completion are borne during offline training. We explore how the quality of completions varies based on several factors. These include whether or not the object being completed existed in the training data and how many object models were used to train the network. The ability of the network to generalize to novel objects allows the system to roughly complete previously unseen objects at runtime, representing a potentially significant improvement over purely database-driven completion approaches. Finally, experimentation is done both in simulation and on actual robotic hardware to explore the relationship between completion quality and the utility of the completed mesh model for grasp planning.


Network

Convolutional Neural Network Architecture
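The completion network is a 3D CNN that maps a 40x40x40 occupancy grid of the observed partial view (x) to a 40x40x40 occupancy grid of the completed shape (y). Below is a minimal Keras sketch of such an architecture; the filter counts and dense-layer widths are illustrative assumptions, not necessarily the exact configuration of the released trained model.

# Minimal sketch of a 3D shape-completion CNN in Keras.
# Assumes 40x40x40 binary occupancy grids for both input (x) and output (y);
# the layer sizes below are illustrative, not the exact configuration of the
# released trained model.
from keras.models import Sequential
from keras.layers import Conv3D, MaxPooling3D, Flatten, Dense, Reshape

PATCH = 40  # voxel grid resolution (matches the 40^3 training examples)

model = Sequential([
    # Convolutional layers extract local occupancy features from the 2.5D view.
    Conv3D(64, (4, 4, 4), activation='relu',
           input_shape=(PATCH, PATCH, PATCH, 1)),
    Conv3D(64, (4, 4, 4), activation='relu'),
    MaxPooling3D(pool_size=(2, 2, 2)),
    Conv3D(64, (4, 4, 4), activation='relu'),
    Flatten(),
    # Dense layers reason globally so occluded regions can be filled in.
    Dense(3000, activation='relu'),
    # Per-voxel occupancy probability for the completed shape.
    Dense(PATCH ** 3, activation='sigmoid'),
    Reshape((PATCH, PATCH, PATCH, 1)),
])

# Per-voxel binary cross-entropy: each output voxel is an occupied/empty decision.
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()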



Training Data Generation

Generation of training examples
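Each training example pairs a voxelized 2.5D view of an object (x) with the voxelized ground truth shape (y), aligned in the same 40³ grid; the released data contains 726 such views per object. Below is a minimal numpy sketch of the voxelization step, assuming the points are already expressed in the grid's reference frame.

# Sketch: voxelize a pointcloud into a 40^3 occupancy grid, as used to build
# the aligned x (partial view) / y (ground truth) training pairs.
# Assumes `points` is an (N, 3) array already in the object's reference frame.
import numpy as np

def voxelize(points, grid_size=40, bounds=None):
    """Map points into a binary occupancy grid of shape (grid_size,)*3."""
    points = np.asarray(points, dtype=np.float64)
    if bounds is None:
        lo, hi = points.min(axis=0), points.max(axis=0)
    else:
        lo, hi = bounds  # use shared bounds so the x and y grids stay aligned
    scale = (grid_size - 1) / np.maximum(hi - lo, 1e-9)
    idx = np.clip(np.floor((points - lo) * scale).astype(int), 0, grid_size - 1)
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

# To keep x and y aligned, compute bounds once (e.g. from the ground truth
# cloud) and reuse them when voxelizing the partial view:
# bounds = (gt_points.min(axis=0), gt_points.max(axis=0))
# x = voxelize(partial_points, bounds=bounds)
# y = voxelize(gt_points, bounds=bounds)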

Completion Examples

On our Completion Examples page you will find shape completions generated by our technique using the feature-preserving post-processing, along with the corresponding ground truth meshes. The page also shows grasps planned on the completed meshes.
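At runtime, the observed pointcloud is voxelized, passed through the trained CNN, and the predicted occupancy is converted back into a mesh. The sketch below is a simplified stand-in for the feature-preserving post-processing implemented in the Mesh_Reconstruction repository: it merely merges the prediction with the observed voxels and runs scikit-image's marching cubes.

# Sketch of runtime completion: CNN prediction -> occupancy grid -> mesh.
# A simplified stand-in for the feature-preserving post-processing in the
# Mesh_Reconstruction repo; it only merges, thresholds, and meshes.
import numpy as np
from skimage import measure

def complete_and_mesh(model, x_partial, threshold=0.5):
    """x_partial: (40, 40, 40) occupancy grid of the observed 2.5D view."""
    pred = model.predict(x_partial[np.newaxis, ..., np.newaxis])[0, ..., 0]
    # Keep the observed voxels: merging the CNN output with the input grid
    # preserves directly visible surface detail.
    occupancy = np.maximum(pred, x_partial)
    verts, faces, normals, _ = measure.marching_cubes(occupancy, level=threshold)
    return verts, faces, normals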

Video

Downloads

Source code

https://github.com/CURG/shape_completion_experiments

https://github.com/CURG/Mesh_Reconstruction

https://github.com/ShapeCompletion3D

Trained model

y17_m01_d27_h18_m32.tar.gz

Training data

726 pointclouds for each of 590 objects, as well as the corresponding 428,340 aligned 40³ x,y training example pairs from the Grasp Database:
grasp_database.tar.gz
726 pointclouds for each of 18 objects, as well as the corresponding aligned 40³ x,y training example pairs from the YCB Dataset:
ycb_dataset.tar.gz
The combined grasp_database and ycb_dataset training examples stored as binvox files rather than .pcd files, for a much smaller download:
xy_40.tar.gz
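The .binvox files can be read with the binvox-rw-py module (https://github.com/dimatura/binvox-rw-py). A minimal loading sketch follows; the file names are hypothetical placeholders for files inside the archives.

# Sketch: load one aligned x,y training pair from the .binvox download.
# Uses binvox-rw-py; the file names below are hypothetical placeholders.
import numpy as np
import binvox_rw

with open('x_partial_view.binvox', 'rb') as f:
    x = binvox_rw.read_as_3d_array(f).data.astype(np.float32)  # (40, 40, 40)
with open('y_ground_truth.binvox', 'rb') as f:
    y = binvox_rw.read_as_3d_array(f).data.astype(np.float32)

assert x.shape == y.shape == (40, 40, 40)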

Ground Truth Meshes

The ground truth meshes are not ours to distribute. To get them, please register at:
http://grasp-database.dkappler.de/
http://rll.eecs.berkeley.edu/ycb/

Citation

This work was accepted to IROS 2017. If you use it, please cite:

J. Varley, C. DeChant, A. Richardson, J. Ruales, and P. Allen, "Shape Completion Enabled Robotic Grasping," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017.

Preprint: https://arxiv.org/pdf/1609.08546v2.pdf