
Datasets on automated driving (2021)

In general, all naturalistic driving and image classification datasets are usable for automated driving studies, as they can serve as training data. Naturalistic driving data shows how humans behave in different scenarios, and it can be used to identify testing scenarios for automation. Most such naturalistic driving datasets from around the world are already featured in the Data Catalogue. They are multi-purpose, enabling a wide set of research questions not limited to automated driving development.

Publicly available automated driving datasets have seen a significant boost over the past few years. These datasets, which have been recorded with automated vehicles (or the like) or otherwise collected for developing automated vehicle functionality, usually consist of entirely anonymized data. Another rising area is synthetic data, either generated from simulations or, in some cases, derived from collected data in which personal attributes have been replaced by avatars (e.g. giving a number plate a different combination or replacing the face of a driver).

To date, publicly available automated driving datasets are quite different from FOT datasets from large-scale user tests (for which the FOT-Net project has created an online catalogue). The following datasets can be classified as development data. Data from large-scale user tests of automated driving has not yet been made widely available, largely due to the competitive development status of current prototype vehicles.

The catalogue information was originally compiled by the CARTRE project in 2018, with some pointers coming from the ENABLE-S3 project. The latest update was made by the ARCADE project in September 2021.

AI City Challenge

The AI City Challenge dataset contains video data from US traffic cameras covering intersections, highway segments and city streets, with a resolution of 960p or better at 10 frames per second. The dataset has been extended with 190k synthetically generated images, including more than 1300 vehicles. The dataset is used in annual challenges addressing different topics. Read more at https://www.aicitychallenge.org (accessed 19 April 2021).

Baidu Apollo project

Apollo is an automated driving ecosystem and open platform initiated by Baidu. It features source code, data and collaboration options. The platform offers various types of development data, e.g. annotated traffic sign videos, vehicle log data from demonstrations, training data for multi-sensor localization and scenarios for their simulation environment. More information is available at http://apollo.auto (accessed 15 April 2021).

ApolloScape, a part of Apollo, additionally offers training data for semantic segmentation (pixel-level classification of video frames, usually used as input for training neural networks). As of April 2021, the dataset contained 100k video frames, 80k LiDAR point clouds and trajectories covering 1000 km in urban traffic. ApolloScape also includes a scene parsing dataset covering almost 150k frames with corresponding pixel-level annotations, pose information, and depth maps for static backgrounds. More information is available at http://apolloscape.auto (accessed 15 April 2021).
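As an illustration of how such pixel-level labels are typically consumed, the sketch below loads one label image and counts how many pixels each class occupies. The file name and the class-ID mapping are illustrative assumptions, not ApolloScape's actual scheme.

import numpy as np
from PIL import Image

# Hypothetical mapping from label-image pixel values to class names;
# real datasets publish their own ID scheme.
CLASS_NAMES = {0: "void", 1: "road", 2: "sidewalk", 3: "vehicle", 4: "pedestrian"}

def class_histogram(label_path):
    """Count how many pixels each class occupies in one label image."""
    labels = np.array(Image.open(label_path))  # H x W array of class IDs
    ids, counts = np.unique(labels, return_counts=True)
    return {CLASS_NAMES.get(int(i), f"class_{i}"): int(c)
            for i, c in zip(ids, counts)}

print(class_histogram("example_label.png"))  # hypothetical file name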

Data uploaded by partners is considered private by default (http://apollo.auto/docs/promise.html, accessed 15 April 2021), but it can be marked public, or access can be denied to specific partners. Sample data is available, but wider access to the data requires negotiated licenses. Part of Apollo's business model is that contributing data and software grants wider access to the platform's resources.

Audi Autonomous Driving Dataset

The A2D2 dataset features approximately 40 000 frames of annotated data and an additional 390 000 unannotated frames. The dataset consists of both lidar point clouds and front video images. 12 500 images have 3D bounding boxes of vehicles, representing 14 different classes relevant to driving.

The dataset is released under the CC BY-ND 4.0 license and available at https://www.a2d2.audi/a2d2/en/dataset.html (accessed 3 September 2021).

Berkeley DeepDrive

The consortium has released 100k HD video sequences, 1100 hours in total, including GPS and inertial measurement unit data. There are also datasets for road object detection, including 100k 2D-annotated images (bounding boxes of different vehicle types), instance segmentation of 10k images, 100k images with annotated lane markings, and 100k images annotated for “drivable area”, i.e. the free road surface on which driving decisions can be based.
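Annotations like these are commonly distributed as JSON files listing the objects in each frame. The sketch below parses such a file into 2D boxes; the JSON structure shown is a simplified assumption, not the dataset's exact schema.

import json
from dataclasses import dataclass

@dataclass
class Box2D:
    category: str
    x1: float
    y1: float
    x2: float
    y2: float

def load_boxes(annotation_file):
    """Read per-frame object labels and keep only the 2D boxes."""
    with open(annotation_file) as f:
        frames = json.load(f)
    boxes = []
    for frame in frames:
        for label in frame.get("labels", []):
            if "box2d" in label:  # skip lane and drivable-area polygons
                b = label["box2d"]
                boxes.append(Box2D(label["category"],
                                   b["x1"], b["y1"], b["x2"], b["y2"]))
    return boxes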

The basic license is limited to personal use. More information is available at https://bdd-data.berkeley.edu (accessed 15 April 2021). The consortium is housed within the PATH program at Berkeley, with partners such as GM, Google, VW and Nvidia; Apollo (Baidu) joined the consortium in 2018. In co-operation with Nexar, Berkeley DeepDrive made 100 000 videos available in June 2018. This dataset, BDD100K, includes 40-second clips collected in multiple cities in the US. The videos are complemented with GPS information and annotations of objects and lane markings. More information describing this particular part of DeepDrive can be found at https://bair.berkeley.edu/blog/2018/05/30/bdd (accessed 15 April 2021).

Bosch Boxy vehicles dataset

Bosch has released a dataset of 200k annotated images containing close to two million vehicles. Each image has a resolution of 5 megapixels, and the set covers various weather and traffic conditions. The dataset is available at https://boxy-dataset.com/boxy (accessed 19 April 2021).

Cityscapes

The Cityscapes dataset features 5000 images with high-quality annotations and 20k images with coarse annotations, collected in 50 different cities. The images are annotated at pixel level and offer training material for neural network studies. Users of the dataset are requested to cite the related dataset papers in their studies. More information is available at https://www.cityscapes-dataset.com (accessed 19 April 2021).
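Pixel-level benchmarks such as Cityscapes are commonly scored with mean intersection-over-union (mIoU). Below is a minimal numpy sketch of the metric; the ignore-label convention here is an assumption for illustration, not the benchmark's official evaluation code.

import numpy as np

def mean_iou(pred, gt, num_classes, ignore_id=255):
    """pred and gt are H x W arrays of class IDs; ignore_id marks void pixels."""
    valid = gt != ignore_id
    ious = []
    for c in range(num_classes):
        p, g = (pred == c) & valid, (gt == c) & valid
        union = np.logical_or(p, g).sum()
        if union:  # skip classes absent from both prediction and ground truth
            ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))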

D2-City

D2-City is a large dataset containing 10k dashcam videos collected in five different cities in China, covering different weather, road and traffic conditions. About one thousand of the videos come with annotations of road objects, including bounding boxes and tracking identifiers (https://outreach.didichuxing.com/d2city/d2city, accessed 19 April 2021).

Drive & Act

Drive & Act is a dataset focusing on the in-cabin environment and the driver. It includes 12 hours of data from 29 long sequences, 3D head and body poses, and annotations of secondary tasks, semantic actions and interactions (https://www.driveandact.com, accessed 19 April 2021).

FLIR Thermal Dataset for Algorithm Training

FLIR has released a dataset for ADAS development that enables developers to start training convolutional neural networks. The dataset consists of 14k images from video recorded at 30 frames per second, annotated with bounding boxes. Read more at https://www.flir.com/oem/adas/adas-dataset-form (accessed 19 April 2021).

Ford Multi-AV Seasonal Dataset

The seasonal dataset was collected by a fleet of Ford vehicles on different days and at different times during 2017–18. The vehicles were manually driven on an average route of 66 km in Michigan that included a mix of driving environments: the Detroit Airport, freeways, city centres, a university campus and suburban neighborhoods. Each vehicle used in the data collection is a Ford Fusion outfitted with an Applanix POS-LV inertial measurement unit (IMU), four Velodyne HDL-32E 3D lidar scanners, six Point Grey 1.3 MP cameras arranged on the rooftop for 360-degree coverage, and one Point Grey 5 MP camera mounted behind the windshield for the forward field of view. The dataset captures the seasonal variation in weather, lighting, construction and traffic conditions experienced in dynamic urban environments.
The dataset can be used for non-commercial purposes under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

https://avdata.ford.com/downloads/default.aspx (accessed 3 September 2021)

IKA High-D dataset

The High-D dataset was collected by drones flying over German highways. The dataset includes more than 110k vehicle trajectories. More information is available at https://www.highd-dataset.com (accessed 19 April 2021).
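Trajectory datasets of this kind are often used to derive surrogate safety measures. The sketch below computes time headway (gap to the leading vehicle divided by own speed) from a generic trajectory CSV; the column names are illustrative assumptions, not High-D's actual schema.

import csv

def time_headways(csv_path):
    """Time headway per row: gap to preceding vehicle divided by own speed."""
    headways = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            gap = float(row["gap_m"])        # assumed column: gap to leader [m]
            speed = float(row["speed_mps"])  # assumed column: own speed [m/s]
            if speed > 0.1:                  # skip near-standstill rows
                headways.append(gap / speed)
    return headways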

KITTI Vision Benchmark Suite

The Karlsruhe Institute of Technology has open-sourced six hours of data captured while driving in Karlsruhe (2011). The dataset is famous for its use in vision benchmarks. Annotations and evaluation metrics are provided along with the raw data. The dataset cannot be used for commercial purposes. More information is available at http://www.cvlibs.net/datasets/kitti (accessed 19 April 2021).

Level 5 Prediction and Perception dataset

Level 5 (previously Lyft) has released two datasets at https://level-5.global/data (accessed 3 September 2021).

The Prediction dataset consists of 170 000 scenes, each 25 seconds long at 10 Hz, including the trajectories of a self-driving vehicle and over 2 million other traffic participants. The dataset comes with an HD map as well as aerial footage of the route. The HD map is enriched with more than 15 000 human annotations. The dataset was collected from autumn 2019 to spring 2020 and represents 1118 hours of driving, equal to 26 344 km.
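A common baseline for such motion-prediction data is constant-velocity extrapolation, scored by average displacement error (ADE). The sketch below is generic, assuming trajectories are numpy arrays of x/y positions sampled at a fixed rate; it is not the dataset's own evaluation code.

import numpy as np

def constant_velocity_forecast(history, horizon):
    """history: (T, 2) past x/y positions at a fixed rate (e.g. 10 Hz).
    Returns (horizon, 2) future positions by extrapolating the last step."""
    v = history[-1] - history[-2]                    # last observed velocity
    steps = np.arange(1, horizon + 1).reshape(-1, 1)
    return history[-1] + steps * v

def ade(pred, gt):
    """Mean Euclidean distance between predicted and true future positions."""
    return float(np.linalg.norm(pred - gt, axis=1).mean())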

The dataset is described in a paper available at https://tinyurl.com/lyft-prediction-dataset (accessed 3 September 2021).

The Perception dataset consists of raw camera and LiDAR data. It includes more than 350 epochs of data, each between 60 and 90 minutes long. External traffic participants are annotated with 3D bounding boxes. The dataset uses the nuScenes data format.

The two datasets are licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Málaga Urban Dataset

This stereo camera and laser dataset was collected on a 37 km route in urban Malaga. The files are directly downloadable under a BSD open-source license; users are requested to cite a scientific paper by the authors, from the universities of Almeria and Malaga. More information is available at https://www.mrpt.org/MalagaUrbanDataset (accessed 19 April 2021).

Mapillary datasets

Mapillary has released four datasets with a common theme of global context, collected on six continents. The Vistas dataset consists of 25 000 HD images with semantic segmentation and manual annotations of 152 different object categories.

Mapillary has also released a Traffic Sign dataset including more than 100k images and over 300 classes of traffic signs, with annotated bounding boxes. The data was collected under varying conditions of weather, season, time of day, camera and viewpoint.

The Street Level Sequence Dataset consists of more than 1.6 million images, all tagged with sequence information and geo-location. The images were collected over 9 years in various traffic conditions from 30 cities.

The Planet-Scale Depth Dataset consists of 750 000 images with metric depth information. The dataset was collected with more than 100 different camera models.

The datasets are licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

More information and links to the datasets are available at https://www.mapillary.com/datasets (accessed 3 September 2021).

Motional nuScenes and nuImages

Motional has released two datasets: nuScenes and nuImages. nuScenes is a dataset of a thousand 20-second-long scenes collected in Boston and Singapore. The scenes are selected to give a diverse and challenging set of situations. The nuScenes dataset was first released with camera images and lidar point clouds (2019) and later complemented with lidar annotations (2020). The dataset includes scenes for both training and validation.
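nuScenes ships with an official Python devkit (pip install nuscenes-devkit). Below is a minimal sketch of browsing the data with it, assuming the mini split has been downloaded locally; the dataroot path is illustrative.

from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version="v1.0-mini", dataroot="/data/nuscenes", verbose=True)

# Each scene is a ~20 s sequence; samples are annotated keyframes at 2 Hz.
scene = nusc.scene[0]
sample = nusc.get("sample", scene["first_sample_token"])
print(scene["name"], "-", scene["description"])
print("annotations in first keyframe:", len(sample["anns"]))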

The nuImages dataset is a set of 93 000 annotated images drawn from a larger pool of 1.2 million images. 75% of the annotated images include more challenging classes (such as bicycles, animals and road construction sites), while the remaining 25% hold more conventional situations and objects, to avoid strong bias. The dataset includes different driving conditions (sun, snow, rain) for both night and day.

Both datasets are available for non-commercial purposes.

https://www.nuscenes.org (accessed 3 September 2021).

Oxford RobotCar Dataset

Oxford University has collected a dataset consisting of 1000 km of recorded driving in central Oxford over a period of 1.5 years (2014–2015) (W. Maddern, G. Pascoe, C. Linegar and P. Newman, 2016). Registration requires an academic e-mail address ending in .edu or .ac.uk; alternatively, the university can be contacted to negotiate a commercial license. The data is mainly intended for non-commercial academic use. The dataset features almost 20 million images. Information on the dataset is available at http://robotcar-dataset.robots.ox.ac.uk (accessed 15 April 2021).

Playing for data

This Darmstadt University dataset is an example of efforts in the academic community to extract neural network training data from computer games. In a game, every pixel belongs to a known object, which removes the need for manual annotation, although the data is limited to the detail the game can generate. The dataset consists of 24 966 densely labelled frames and is compatible with the Cityscapes dataset. More information is available at http://download.visinf.tu-darmstadt.de/data/from_games (accessed 19 April 2021).
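In practice, compatibility with Cityscapes means remapping the game's label IDs onto Cityscapes training IDs. A minimal sketch of such a remap follows; the mapping table is illustrative, not the dataset's published one.

import numpy as np

# Hypothetical mapping from game label IDs to Cityscapes train IDs
# (e.g. road -> 0, car -> 13); unmapped IDs fall back to ignore (255).
GAME_TO_CITYSCAPES = {7: 0, 8: 1, 26: 13}

def remap_labels(game_labels):
    """Convert an H x W uint8 array of game label IDs to Cityscapes train IDs."""
    lut = np.full(256, 255, dtype=np.uint8)  # default: ignore label
    for src, dst in GAME_TO_CITYSCAPES.items():
        lut[src] = dst
    return lut[game_labels]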

Synthia dataset

The Synthia dataset consists of more than 200k HD images from video streams and 20k HD images from independent snapshots. The dataset is generated synthetically from a European-style town, a modern city and highways. It includes different weather conditions and dedicated themes for winter (http://synthia-dataset.net, accessed 19 April 2021).

Udacity

Udacity offers education and training in matters relevant to autonomous driving. A dataset used in their tutorials provides example recordings from ten hours of driving as well as annotated driving datasets, where objects in video have been marked with bounding boxes (https://academictorrents.com/userdetails.php?id=5125, accessed 19 April 2021). Udacity publishes programming challenges to further development and aims to attract students from around the world. More information is available at https://www.udacity.com (accessed 19 April 2021).

Waymo open data

The Waymo Perception dataset was first released in 2019 (updated 2020) and includes nearly 2000 20-second-long segments. The dataset includes images, lidar point clouds, and labeled object categories with bounding boxes.

The Motion dataset was released in 2021 and includes over 100k 20-second segments of interesting interactions with other road users. There are 3D bounding boxes for each of the more than 10 million objects, and the dataset mainly addresses the behavior of unprotected road users. The data was recorded in California, Utah and Ohio.

Code for both datasets is available on GitHub. The datasets are available for non-commercial purposes. The datasets and further information are available at https://waymo.com/open (accessed 3 September 2021).
