2nd February 2017

We have a Dyson-sponsored fellowship available for an outstanding researcher. The deadline for submitting an application is 16th February. Please follow the link for more information about this position.


17th January 2017

We have four papers accepted for ICRA 2017!

Robert Lukierski, Stefan Leutenegger and Andrew J. Davison. Room Layout Estimation from Rapid Omnidirectional Exploration, 2017, IEEE International Conference on Robotics and Automation (ICRA). (Dyson funded)

John McCormac, Ankur Handa, Andrew J. Davison and Stefan Leutenegger. SemanticFusion: Dense 3D Semantic Mapping with Convolutional Neural Networks, 2017, IEEE International Conference on Robotics and Automation (ICRA). (Dyson funded)

Sajad Saeedi, Luigi Nardi, Edward Johns, Bruno Bodin, Paul H. J. Kelly and Andrew J. Davison. Application-oriented Design Space Exploration for SLAM Algorithms, 2017, IEEE International Conference on Robotics and Automation (ICRA). (non-Dyson funded)

Lukas Platinsky, Andrew J. Davison and Stefan Leutenegger. Monocular Visual Odometry: Sparse Joint Optimisation or Dense Alternation?, 2017, IEEE International Conference on Robotics and Automation (ICRA). (non-Dyson funded)


18th December 2016


We introduce SceneNet RGB-D, expanding the previous work of SceneNet to enable large-scale photorealistic rendering of indoor scene trajectories. It provides pixel-perfect ground truth for scene understanding problems such as semantic segmentation, instance segmentation, and object detection, and also for geometric computer vision problems such as optical flow, depth estimation, camera pose estimation, and 3D reconstruction. Random sampling permits virtually unlimited scene configurations, and here we provide a set of 5M rendered RGB-D images from over 15K trajectories in synthetic layouts with random but physically simulated object poses. Each layout also has random lighting, camera trajectories, and textures. The scale of this dataset is well suited to pre-training data-driven computer vision techniques from scratch with RGB-D inputs, which has previously been limited by the relatively small labelled datasets NYUv2 and SUN RGB-D. It also provides a basis for investigating 3D scene labelling tasks, offering perfect camera poses and depth data as a proxy for a SLAM system.

SceneNet RGB-D
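
The pixel-perfect depth and camera poses make geometric experiments straightforward to set up. As a minimal sketch, the Python snippet below back-projects one rendered depth map into a camera-frame point cloud with a pinhole model; the file path, the millimetre depth encoding, and the intrinsic parameters are illustrative assumptions, not values taken from the dataset's documentation.

# Back-project a rendered depth map into a camera-frame point cloud.
# The file path, millimetre depth encoding, and intrinsics below are
# illustrative assumptions, not the dataset's documented values.
import numpy as np
from PIL import Image

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in metres) through a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth_mm = np.asarray(Image.open("train/0/depth/0.png"), dtype=np.float32)
points = depth_to_pointcloud(depth_mm / 1000.0,
                             fx=277.1, fy=289.7, cx=160.0, cy=120.0)
print(points.shape)  # one 3D point per pixel: (height * width, 3)

Swapping in the dataset's real intrinsics and applying the per-frame ground-truth poses would lift these points into a common world frame, as a SLAM system's output would, for reconstruction or 3D labelling experiments.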