Shape Completion Enabled Robotic Grasping

Jacob Varley, Chad DeChant, Adam Richardson, Joaquín Ruales, and Peter Allen

Columbia University Robotics Group


This work provides an architecture to enable robotic grasp planning via shape completion. Shape completion is accomplished through the use of a 3D convolutional neural network (CNN). At runtime, a 2.5D pointcloud captured from a single point of view is fed into the CNN, which fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object. Runtime shape completion is very rapid because most of the computational costs of shape completion are borne during offline training. We explore how the quality of completions varies based on several factors, including whether the object being completed existed in the training data and how many object models were used to train the network. The ability of the network to generalize to novel objects allows the system to roughly complete previously unseen objects at runtime, a potentially significant improvement over purely database-driven completion approaches. Finally, experimentation is done both in simulation and on actual robotic hardware to explore the relationship between completion quality and the utility of the completed mesh model for grasp planning.
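The runtime input encoding described above can be illustrated at the voxel level. The following is a minimal sketch (not the repository's implementation) of mapping a captured pointcloud into a binary occupancy grid suitable for a 3D CNN; the 40³ resolution matches the training-example grids listed under Training data, while the normalization scheme here is a simplification.

```python
import numpy as np

def voxelize(points, grid_dim=40):
    """Map an Nx3 pointcloud into a grid_dim^3 binary occupancy grid.

    Simplified sketch: the cloud is uniformly scaled to fit the grid,
    preserving aspect ratio. The real pipeline centers and scales the
    partial view to align with the CNN's training distribution.
    """
    points = np.asarray(points, dtype=np.float64)
    lo = points.min(axis=0)
    extent = (points.max(axis=0) - lo).max() + 1e-9
    # Scale all axes by the largest extent so the shape is not distorted.
    idx = ((points - lo) / extent * (grid_dim - 1)).astype(int)
    grid = np.zeros((grid_dim,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```

A grid like this is what gets fed to the network as the occupied/unoccupied input channel.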


Convolutional Neural Network Architecture

Training Data Generation

Generation of training examples
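The idea behind generating x,y pairs is to pair a single-viewpoint partial observation (x) with the full ground-truth occupancy (y) of the same object in the same pose. As a crude stand-in for the actual depth-rendering pipeline, the sketch below derives a synthetic partial view from a full grid by keeping only the first occupied voxel along each ray of a fixed viewing axis; the function name and axis choice are illustrative assumptions.

```python
import numpy as np

def partial_from_full(y_grid):
    """Derive a synthetic partial view x from a full occupancy grid y.

    Looking down the z axis, keep only the first occupied voxel along
    each (i, j) ray, i.e. the surface visible from that one viewpoint.
    Everything behind it is treated as occluded.
    """
    y = np.asarray(y_grid, dtype=bool)
    x = np.zeros_like(y)
    # argmax on a boolean array returns the first True along the axis;
    # rays with no occupied voxel are masked out via hit_mask.
    first_hit = y.argmax(axis=2)
    hit_mask = y.any(axis=2)
    ii, jj = np.nonzero(hit_mask)
    x[ii, jj, first_hit[ii, jj]] = True
    return x
```

The real training set instead renders depth images of each mesh from 726 viewpoints and voxelizes both the partial capture and the ground-truth mesh.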

Completion Examples

On our Completion Examples page you'll find shape completions generated by our technique using the feature-preserving post-processing, along with the corresponding ground truth meshes. The page also shows grasps planned on the completed meshes.



Source code + Smaller Trained Model (Keras 2.0)

ROS workspace with setup instructions:

Trained Model: depth_y17_m05_d26_h14_m22_s35_bare_keras_v2.tar.gz

Trained Model (Google Drive Link): depth_y17_m05_d26_h14_m22_s35_bare_keras_v2.tar.gz

Source code + Trained Model (Keras 1.0) [Model used in paper]

This model requires this version of keras and this version of theano.

Training Code:

Runtime Post-Processing code to merge completion with observed partial view:
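The intuition behind the merge step can be shown at voxel level: take the union of the voxels the sensor actually observed with the thresholded CNN completion, so observed geometry is never lost to network error. This is only a sketch of the idea; the post-processing code linked above operates on the mesh level to preserve fine observed detail, and the 0.5 threshold is an illustrative assumption.

```python
import numpy as np

def merge_completion(observed, cnn_output, threshold=0.5):
    """Voxel-level illustration of merging a completion with the
    observed partial view: threshold the CNN's per-voxel occupancy
    probabilities, then union with the directly observed voxels so
    sensor evidence always survives into the final model.
    """
    observed = np.asarray(observed, dtype=bool)
    completed = np.asarray(cnn_output, dtype=np.float32) >= threshold
    return observed | completed
```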

github org for running shape completion as part of ROS system:

Trained Model: y17_m01_d27_h18_m32.tar.gz

Training data

726 pointclouds for each of 590 objects, as well as the 428,340 corresponding aligned 40³ x,y training example pairs, from the Grasp Database:
726 pointclouds for each of 18 objects, as well as the corresponding aligned 40³ x,y training example pairs, from the YCB Dataset:
Combined grasp_database and ycb as .binvox files rather than .pcd. A much smaller download:
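For working with the .binvox downloads, the format (Patrick Min's binvox) is a short ASCII header followed by run-length-encoded (value, count) byte pairs, with the y index varying fastest, then z, then x. Below is a minimal reader sketch, not the loader used in the repository:

```python
import numpy as np

def read_binvox(path):
    """Minimal .binvox reader: parse the ASCII header for the grid
    dimensions, then decode the (value, count) run-length pairs into a
    boolean array indexed as [x, y, z]."""
    with open(path, 'rb') as f:
        assert f.readline().strip().startswith(b'#binvox')
        dims = None
        while True:
            line = f.readline().strip()
            if line.startswith(b'dim'):
                dims = [int(v) for v in line.split()[1:]]
            elif line.startswith(b'data'):
                break
        raw = np.frombuffer(f.read(), dtype=np.uint8)
    values, counts = raw[::2], raw[1::2]
    flat = np.repeat(values.astype(bool), counts)
    # binvox stores voxels in x, z, y order; transpose to x, y, z.
    return flat.reshape(dims).transpose(0, 2, 1)
```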

Ground Truth Meshes

The ground truth meshes are not ours to distribute. To get them, please register at:


This work was accepted to IROS 2017.