Description
The tasks are based on BDD100K, the largest driving video dataset to date supporting heterogeneous
multi-task learning. It contains 100,000 videos representing more than 1,000 hours of driving
experience with more than 100 million frames. The videos come with GPS/IMU data for trajectory
information. The BDD100K dataset now provides annotations for 10 tasks: image tagging, lane
detection, drivable area segmentation, object detection, semantic segmentation, instance segmentation,
multi-object detection tracking, multi-object segmentation tracking, domain adaptation, and
imitation learning. These diverse tasks make the study of heterogeneous multi-task learning
possible.
For the CVPR 2020 Workshop on Autonomous Driving, we host the multi-object detection tracking challenge on CodaLab detailed below. Challenges on the other tasks will be announced on our dataset website.
Video Data
Explore 100,000 HD video sequences covering over 1,100 hours of driving experience at many different
times of day, weather conditions, and driving scenarios. Our video sequences also include
GPS locations, IMU data, and timestamps.
Road Object Detection
2D Bounding Boxes annotated on 100,000 images for
bus, traffic light, traffic sign, person, bike, truck, motor, car, train, and rider.
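As a rough illustration of how these box annotations might be consumed, here is a minimal Python sketch. It assumes a per-image label record shaped like the commonly documented BDD100K detection format (a `labels` list whose entries carry a `category` and a `box2d` dict with `x1`, `y1`, `x2`, `y2`); the record below is a hypothetical example, not data from the dataset.

```python
# Hypothetical image record, assuming the commonly documented BDD100K
# detection label layout (an assumption, not the official schema).
record = {
    "name": "example.jpg",
    "labels": [
        {"category": "car",
         "box2d": {"x1": 10.0, "y1": 20.0, "x2": 110.0, "y2": 80.0}},
        {"category": "traffic sign",
         "box2d": {"x1": 5.0, "y1": 5.0, "x2": 25.0, "y2": 30.0}},
    ],
}

def extract_boxes(image_record):
    """Return (category, (x1, y1, x2, y2)) pairs for 2D box labels."""
    boxes = []
    for label in image_record.get("labels", []):
        box = label.get("box2d")
        if box is None:  # skip non-box annotations (e.g. lane polylines)
            continue
        boxes.append((label["category"],
                      (box["x1"], box["y1"], box["x2"], box["y2"])))
    return boxes

boxes = extract_boxes(record)
```

In practice the records would come from the released label JSON files rather than an inline dict, but the traversal pattern is the same.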
Instance Segmentation
Explore over 10,000 diverse images with pixel-level and rich instance-level annotations.
Drivable Area
Learn complicated drivable area decisions from 100,000 images.
Lane Markings
Multiple types of lane marking annotations on 100,000 images for driving guidance.