Semantic KITTI dataset (autonomous driving dataset)

80G
Autonomous Driving / Classification



    README.md

    We present a large-scale dataset based on the KITTI Vision Benchmark, using all sequences provided by the odometry task. We provide dense annotations for each individual scan of sequences 00-10, which enables the use of multiple sequential scans for semantic scene interpretation, such as semantic segmentation and semantic scene completion.

    The remaining sequences, i.e., sequences 11-21, are used as the test set and show a large variety of challenging traffic situations and environment types. Labels for the test set are not provided; we use an evaluation service that scores submissions and provides results on the test set.

    Classes

    The dataset contains 28 classes, including classes distinguishing non-moving and moving objects. Overall, our classes cover traffic participants, but also functional classes for ground, like parking areas and sidewalks.


    Folder structure and format

    Semantic Segmentation and Panoptic Segmentation

           

    For each scan XXXXXX.bin in the velodyne folder of a sequence folder of the original KITTI Odometry Benchmark, we provide a file XXXXXX.label in the labels folder that contains a label in binary format for each point. The label is a 32-bit unsigned integer (aka uint32_t) per point, where the lower 16 bits correspond to the semantic label. The upper 16 bits encode the instance id, which is temporally consistent over the whole sequence, i.e., the same object in two different scans gets the same id. This also holds for moving cars, as well as for static objects seen again after loop closures.
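    As a minimal sketch (assuming the standard KITTI point layout of four float32 values per point: x, y, z, remission; the file paths are illustrative), the per-point labels can be read with numpy and split into the semantic label and instance id as follows; the development kit provides the reference implementation:

    import numpy as np

    # Illustrative paths following the KITTI Odometry layout described above.
    scan_path = "sequences/00/velodyne/000000.bin"
    label_path = "sequences/00/labels/000000.label"

    # Each point is stored as four float32 values: x, y, z, remission.
    scan = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)

    # One uint32 per point: lower 16 bits = semantic label, upper 16 bits = instance id.
    raw_labels = np.fromfile(label_path, dtype=np.uint32)
    semantic_label = raw_labels & 0xFFFF
    instance_id = raw_labels >> 16

    assert raw_labels.shape[0] == scan.shape[0]  # one label per point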

    We furthermore provide the poses.txt file that contains the poses, which we used to annotate the data, estimated by a surfel-based SLAM approach (SuMa).
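    A short sketch of reading poses.txt, assuming the KITTI odometry convention that each line holds 12 space-separated values forming the upper 3x4 part of a homogeneous transformation matrix in row-major order (the path is illustrative):

    import numpy as np

    def load_poses(path):
        """Load poses.txt as a list of 4x4 homogeneous transformation matrices."""
        poses = []
        with open(path) as f:
            for line in f:
                if not line.strip():
                    continue
                # 12 values per line: the upper 3x4 part of the transform, row-major.
                values = np.array(line.split(), dtype=np.float64)
                pose = np.eye(4)
                pose[:3, :4] = values.reshape(3, 4)
                poses.append(pose)
        return poses

    poses = load_poses("sequences/00/poses.txt")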

    Semantic Scene Completion



    For each scan XXXXXX.bin of the velodyne folder in the sequence folder of the original KITTI Odometry Benchmark, we provide in the voxel folder:

        * a file XXXXXX.bin in a packed binary format that contains, for each voxel, whether that voxel is occupied by laser measurements. This is the input to the semantic scene completion task and corresponds to the voxelization of a single LiDAR scan.
        * a file XXXXXX.label that contains, for each voxel of the completed scene, a label in binary format. The label is a 16-bit unsigned integer (aka uint16_t) per voxel.
        * a file XXXXXX.invalid in a packed binary format that contains, for each voxel, a flag indicating whether that voxel is considered invalid, i.e., the voxel is never directly seen from any position used to generate the voxels. These voxels are also not considered in the evaluation.
        * a file XXXXXX.occluded in a packed binary format that contains, for each voxel, a flag specifying whether this voxel is either occupied by LiDAR measurements or occluded by a voxel in the line of sight of all poses used to generate the completed scene.

    These files, apart from the input XXXXXX.bin, are only given for the training data, and the label file must be predicted for the semantic scene completion task.

    To allow a higher compression rate, we store the binary flags in a custom format, where we store the flags as bit flags, i.e., each byte of the file corresponds to 8 voxels in the unpacked voxel grid. Please see the development kit for further information on how to efficiently read these files using numpy.
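    For illustration, a hedged sketch of unpacking one of these bit-flag files with numpy (np.unpackbits expands each byte into 8 flags) and reading the uint16 voxel labels; the grid shape of 256 x 256 x 32 voxels and the folder and file names used in the paths are assumptions to be checked against the development kit:

    import numpy as np

    VOXEL_DIMS = (256, 256, 32)  # assumed grid size; verify against the development kit

    def read_bit_flags(path):
        """Unpack a packed binary flag file: 1 bit per voxel, 8 voxels per byte."""
        packed = np.fromfile(path, dtype=np.uint8)
        return np.unpackbits(packed).reshape(VOXEL_DIMS)

    def read_voxel_labels(path):
        """Read the completed-scene labels: one uint16 per voxel."""
        return np.fromfile(path, dtype=np.uint16).reshape(VOXEL_DIMS)

    occupancy = read_bit_flags("sequences/00/voxels/000000.bin")      # input voxelization
    invalid = read_bit_flags("sequences/00/voxels/000000.invalid")    # never-observed voxels
    occluded = read_bit_flags("sequences/00/voxels/000000.occluded")  # occupied or occluded
    labels = read_voxel_labels("sequences/00/voxels/000000.label")    # completed-scene labels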

    See also our development kit for further information on the labels and the reading of the labels using Python. The development kit also provides tools for visualizing the point clouds. 

    License

    Our dataset is based on the KITTI Vision Benchmark and therefore we distribute the data under the Creative Commons Attribution-NonCommercial-ShareAlike license.

    You are free to share and adapt the data, but have to give appropriate credit and may not use the work for commercial purposes. Specifically, you should cite our work:

    @inproceedings{behley2019iccv,
      author = {J. Behley and M. Garbade and A. Milioto and J. Quenzel and S. Behnke and C. Stachniss and J. Gall},
      title = {{SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences}},
      booktitle = {Proc. of the IEEE/CVF International Conf.~on Computer Vision (ICCV)},
      year = {2019}}

             

    But also cite the original KITTI Vision Benchmark:

    @inproceedings{geiger2012cvpr,
      author = {A. Geiger and P. Lenz and R. Urtasun},
      title = {{Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite}},
      booktitle = {Proc.~of the IEEE Conf.~on Computer Vision and Pattern Recognition (CVPR)},
      pages = {3354--3361},
      year = {2012}}

          
