An open benchmark corpus for mobile RGB-D related algorithms
Examples from our recordings. From left to right: experimental room (soft dummy), corridor, Living Lab (two dummies).
This benchmark corpus is intended for the «low-level» family of RGB-D algorithms, such as 3D SLAM, body/skeleton tracking or face tracking, running on a mobile robot. Using this open corpus, researchers can address several questions:
  • What is the algorithm's performance under multiple conditions?
  • On a mobile robot, what is the maximum linear/angular speed the algorithm supports?
  • Which variables impact the algorithm?
  • What height and angle of the mounted RGB-D sensor suit a given goal? Monitoring everyday life is different from searching for fallen persons on the floor.
  • Finally, how does an algorithm's performance compare with that of others?
Environmental setup
First, our experimental room is a flexible space designed to be representative of a home-like environment. It has an «L» shape with a kitchen area with a sink, a dining area and a lounge space. The size of the room is 6x8 m. Except for the sink, the furniture can be moved ad libitum to meet experimental needs. In this setup, the dummies are in the kitchen and the robot moves from the dining area toward them. Furniture (sofa, table and chairs) is available to create different setups and is placed to create occlusions. Augmented reality tags on the walls provide more 3D information than that gathered from the robot's laser range finder. Second, we recorded while the robot goes forward and backward, slaloming along a corridor; two dummies and a plant are present. Last, we made experiments with dummies (standing and soft ones) in a living lab from the Amiqual4Home facilities, i.e. a real apartment in realistic conditions.
Data acquisition
This project started with the release of the new Kinect 2 sensor. We recorded all available streams synchronously, at maximum frame rate, from our mobile platform (all features are robot-centered). All features are timestamped, and C++ source code for reading the files is provided. The following table summarizes all features:
| Data | Sensor | Information | Frame rate (max) |
|------|--------|-------------|------------------|
| Body | Kinect 2 | up to 6 skeletons with 25 joints, body part rotations and faces from the Kinect 2; face detection using OpenCV is also provided | 30 Hz |
| RGB video | Kinect 2 | 1920x1080p | 30 Hz |
| Depth video | Kinect 2 | 512x424p, from 0.4 to 4.5 m | 30 Hz |
| IR and long-exposure IR video | Kinect 2 | 512x424p | 30 Hz |
| Telemeter distances | Laser range finder | 20 m maximum | 12.5 Hz |
| Ultrasound distances | Ultrasound telemeters | 3 m maximum | 12.5 Hz |
| IR distances | IR telemeters | 1.5 m maximum | 12.5 Hz |
| Commands | Control program | linear and angular speeds, stop command | 12.5 Hz |
| Odometry | Control program | localization information computed with an ICP using the room map | 12.5 Hz |
| Odometry | Robotic platform | internal localization information | 12.5 Hz |
| Battery level | Robotic platform | in percentage | 1 Hz |
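The actual reader for these files is the RGBDSync code mentioned below. As a rough sketch of how streams recorded at different rates (e.g. depth at 30 Hz and odometry at 12.5 Hz) can be aligned using their timestamps, here is a nearest-timestamp matcher; the `Frame` structure and function names are hypothetical illustrations, not the actual RGBDSync API:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical frame record: a timestamp in milliseconds plus the frame's
// position in its own stream (the real files carry full sensor payloads).
struct Frame {
    int64_t timestamp_ms;
    std::size_t index;
};

// Absolute difference between two timestamps.
static int64_t distance(int64_t a, int64_t b) { return a > b ? a - b : b - a; }

// For each frame of the reference stream (e.g. depth), return the index of
// the frame in a slower stream (e.g. odometry) whose timestamp is closest.
// Both streams are assumed sorted by timestamp, as recorded data would be.
std::vector<std::size_t> alignNearest(const std::vector<Frame>& reference,
                                      const std::vector<Frame>& other) {
    std::vector<std::size_t> matches;
    std::size_t j = 0;
    for (const Frame& ref : reference) {
        // Advance while the next candidate is at least as close in time.
        while (j + 1 < other.size() &&
               distance(other[j + 1].timestamp_ms, ref.timestamp_ms) <=
               distance(other[j].timestamp_ms, ref.timestamp_ms)) {
            ++j;
        }
        matches.push_back(j);
    }
    return matches;
}
```

Because both streams stay sorted, the matcher walks each stream once, so synchronizing hours of recordings remains linear in the number of frames.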
The following table presents the (lossless) compressed size of each part of the corpus. Since it is not mandatory to uncompress the data in order to process them, this is the minimum size of the corpus on your hard drive; uncompressing is worthwhile only to speed up processing (a table with uncompressed data sizes is available on this page). Each part of the corpus (depth, video, etc.) and every subset can be downloaded separately, so you can grow your local copy later without any problem.
The whole corpus is 1.66 TiB for almost 9 hours and 30 minutes of recording time. If you only need the depth and skeleton data, the size is less than 160 GiB.
| Location | Conditions | Video | Depth | Skeleton | Body index | Face | Infrared | Robot data | Total size | Recording time |
|----------|------------|-------|-------|----------|------------|------|----------|------------|------------|----------------|
| Experimental room | 1 and 2 dummies, soft dummy | 411.31 GiB | 46.72 GiB | 52.57 MiB | 74.09 MiB | 0 B | 80.70 GiB | 279.61 MiB | 539.11 GiB | 2:40:14 |
| Corridor | 2 dummies | 63.16 GiB | 6.62 GiB | 6.86 MiB | 9.04 MiB | 0 B | 11.97 GiB | 39.70 MiB | 81.81 GiB | 0:21:32 |
| Living lab | 1 and 2 dummies, soft dummy | 790.75 GiB | 104.01 GiB | 237.06 MiB | 207.24 MiB | 3.10 MiB | 182.04 GiB | 708.58 MiB | 1.05 TiB | 6:27:37 |
| Total | | 1.24 TiB | 157.34 GiB | 296.48 MiB | 290.37 MiB | 3.11 MiB | 274.70 GiB | 1.00 GiB | 1.66 TiB | 9:29:24 |
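The totals above can be sanity-checked by hand (1 TiB = 1024 GiB, 1 GiB = 1024 MiB); a quick sketch, using only the per-location figures from the table:

```cpp
#include <cassert>
#include <cmath>

// Per-location total sizes from the table, in GiB (1 TiB = 1024 GiB).
constexpr double kExperimentalRoomGiB = 539.11;
constexpr double kCorridorGiB = 81.81;
constexpr double kLivingLabGiB = 1.05 * 1024.0; // 1.05 TiB

// Whole-corpus size in TiB, computed from the per-location totals.
inline double corpusTotalTiB() {
    return (kExperimentalRoomGiB + kCorridorGiB + kLivingLabGiB) / 1024.0;
}

// Depth + skeleton subset in GiB (skeleton: 296.48 MiB, i.e. 296.48/1024 GiB).
inline double depthSkeletonGiB() {
    return 157.34 + 296.48 / 1024.0;
}
```

`corpusTotalTiB()` comes out close to the reported 1.66 TiB (the table rounds each entry), and `depthSkeletonGiB()` stays under the 160 GiB mentioned above.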
The corpus is freely available to research teams. As said above, it comes with C++ source code to read the data synchronously; this code works under Windows and Linux. We also provide a ROS «bag» wrapper to facilitate integration of our corpus. The source code for reading all our data synchronously is released under the LGPL 3.0 license and is available online in the RGBDSync repository on Github.
If you are interested in using our corpus, please send us an email. You must register to download the corpus. Registering is free but mandatory: the MobileRGBD corpus is huge, and we cannot let web bots download these data without penalizing our servers. You can select the data types you are interested in: subsets can be chosen in terms of recording scenarios, data types, or both. For instance, the biggest stream is the HD RGB stream from the Kinect 2; you can download everything but this stream, and if you become interested in it later, you will be able to download the new data only. The download scripts work under Linux and Windows (our wget-based sh script should also work under MacOSX).
If you are a researcher and this corpus and/or software helps you, please cite our publication on MobileRGBD: Dominique Vaufreydaz, Amaury Nègre, "MobileRGBD, An Open Benchmark Corpus for Mobile RGB-D Related Algorithms", 13th International Conference on Control, Automation, Robotics and Vision, Singapore, Dec 2014. (go to author version)
This work was done using the Amiqual4Home facilities (ANR-11-EQPX-0002) and the SED Team of Inria Rhône-Alpes. We would like to thank Laurence Boissieux, Jean-François Cuniberto, Nicolas Bonnefond and Stan Borkowski for their help in gathering this corpus. Thank you to William Didier, Théo Lambert, Maxence Menager and Vadim Sushko for their work on labelling the corpus.