

3D Reconstruction of Urban Scenes Based on Semantic Constraints and Multi-source Data
Apr 15, 2016


Abstract

3D reconstruction of urban scenes is an important research topic in the fields of computer graphics, computer vision, and geographic information science, with wide applications in smart cities and virtual reality. However, due to the complexity of objects in urban scenes and the limitations of data acquisition devices, existing methods still lack the ability to reconstruct 3D representations of urban scenes efficiently and accurately. To address these problems, this project aims to develop novel 3D reconstruction methods for urban scenes based on semantic constraints and multi-source data. The proposed methods utilize high-level semantic features to resolve the ambiguity caused by noise and missing data, thereby overcoming the deficiency of traditional geometric-feature-based methods, which are sensitive to the quality of the original point cloud data. The project will cover three major aspects: semantic information mining of urban scenes, feature fusion of multi-source data, and geometric modeling based on semantic constraints. Several key scientific problems will be tackled, including semantic feature extraction for urban scenes, consistency control of feature fusion from multi-source data, and the description of semantic constraints in geometric space. Technical innovations of the project include multi-dimensional and multi-scale semantic feature extraction for urban scenes, multi-source data feature fusion based on probabilistic graphical models, and mesh generation based on semantic constraints.
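
As an illustration only, and not the project's published method, the minimal Python sketch below shows one common way that probabilistic-graphical-model-based fusion of multi-source semantic information over a point cloud can be realized: per-point class probabilities from two hypothetical sources (e.g., an image-based classifier and a point-cloud-based classifier) are combined into a unary data term and smoothed with a Potts pairwise prior over a k-nearest-neighbour graph, solved by iterated conditional modes (ICM). All function names, parameters, and data here are assumptions for demonstration.

# Minimal sketch (illustrative, not the project's actual method) of fusing
# per-point semantic label probabilities from two sources with a simple
# pairwise Markov random field, optimised by iterated conditional modes.
import numpy as np

def knn_graph(points, k=6):
    """Brute-force k-nearest-neighbour graph over a small point set."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # a point is not its own neighbour
    return np.argsort(d, axis=1)[:, :k]      # (N, k) neighbour indices

def fuse_labels(p_img, p_pc, points, smooth=1.0, iters=10):
    """Fuse two per-point class probability maps under a Potts smoothness prior."""
    # Unary term: negative log-likelihood combined from both (hypothetical) sources.
    unary = -(np.log(p_img + 1e-9) + np.log(p_pc + 1e-9))
    labels = unary.argmin(axis=1)             # initial per-point MAP estimate
    nbrs = knn_graph(points)
    for _ in range(iters):                    # ICM sweeps over all points
        for i in range(len(points)):
            cost = unary[i].copy()
            for c in range(unary.shape[1]):   # Potts penalty: disagreeing neighbours
                cost[c] += smooth * np.sum(labels[nbrs[i]] != c)
            labels[i] = cost.argmin()
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((200, 3))                        # synthetic point cloud
    p1 = rng.dirichlet(np.ones(3), size=200)          # noisy probabilities, source 1
    p2 = rng.dirichlet(np.ones(3), size=200)          # noisy probabilities, source 2
    print(fuse_labels(p1, p2, pts)[:10])              # fused labels for first 10 points

In a sketch of this kind, the pairwise smoothness term is what enforces consistency across sources and neighbouring points; richer models (e.g., conditional random fields with learned potentials) follow the same structure.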

  

Keywords: image segmentation; semantic feature; multi-source data fusion; point cloud processing; geometric modeling 

  

Contact: 

LI Er 

E-mail: er.li@ia.ac.cn 

National Laboratory of Pattern Recognition