The object segmentation training dataset contains 136,575 RGB-D images
of single objects (from the APC) in the shelf and tote. There are a
total of 8,181 unique poses of 39 objects seen from various camera
viewpoints. All images are labeled with binary foreground object masks,
which were generated automatically in order to train the self-supervised
deep models for 2D object segmentation; details of the automatic labeling
algorithm can be found in the paper. The training dataset also contains HHA maps (Gupta et al.), pre-computed from the depth images.
In addition to the files described above, each scene contains:
• HHA/frame-XXXXXX.HHA.png - a 24-bit PNG encoding each aligned depth image as three channels per pixel: horizontal disparity, height above ground, and the angle between the surface normal and the inferred gravity direction (Gupta et al.). All channels are linearly scaled to the 0 - 255 range.
• masks/frame-XXXXXX.mask.png - an 8-bit PNG binary image of the foreground object mask for each RGB-D frame.
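A minimal sketch of how the per-frame files above could be consumed, assuming NumPy (and, for real files, an image loader such as Pillow). The frame id and array sizes here are hypothetical, and a synthetic HHA array and mask stand in for files read from disk so the snippet runs standalone:

```python
import numpy as np

# With real data you would load the PNGs, e.g.:
#   hha  = np.asarray(Image.open("HHA/frame-000000.HHA.png"))
#   mask = np.asarray(Image.open("masks/frame-000000.mask.png"))
# Here we synthesize stand-ins with an assumed 480x640 resolution.
hha = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

# The three HHA channels, each linearly scaled to 0-255:
# horizontal disparity, height above ground, and the angle of the
# surface normal to the inferred gravity direction.
disparity = hha[..., 0]
height = hha[..., 1]
angle = hha[..., 2]

# Binary foreground mask: nonzero pixels mark the object.
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:200, 150:300] = 255
foreground = mask > 0

print(foreground.sum())  # number of foreground pixels -> 15000
```

Since the channel scaling to 0 - 255 discards the original units, recovering metric disparity, height, or angle values would require the scaling parameters, which this snippet does not assume.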