Proceedings of the 28th ISARC, Seoul, Korea, 2011
Laser scanners are increasingly used to create semantically rich 3D models of buildings for civil engineering applications such as planning renovations, space usage planning, and building maintenance. Currently, these models are created manually, a time-consuming and error-prone process. This paper presents a method to automatically convert the raw 3D point data from a laser scanner positioned at multiple locations throughout a building into a compact, semantically rich model. Our algorithm is capable of identifying and modeling the main structural components of an indoor environment (walls, floors, ceilings, windows, and doorways) despite the presence of significant clutter and occlusion, which occur frequently in natural indoor environments. Our method begins by extracting planar patches from a voxelized version of the input point cloud. We use a conditional random field model to learn contextual relationships between patches and use this knowledge to automatically label patches as walls, ceilings, or floors. Then, we perform a detailed analysis of the recognized surfaces to locate windows and doorways. This process uses visibility reasoning to fuse measurements from different scan locations and to identify occluded regions and holes in the surfaces. Next, we use a learning algorithm to intelligently estimate the shape of window and doorway openings, even when they are partially occluded. Finally, occluded regions on the surfaces are filled in using a 3D inpainting algorithm. We evaluated the method on a large, highly cluttered data set of a building with forty separate rooms, yielding promising results.
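The first stage described above, extracting planar patches from a voxelized point cloud, can be illustrated with a minimal sketch. The paper does not give implementation details, so the following is an assumed, simplified version: points are binned into a voxel grid, and a plane is fit to a set of points by PCA, using the eigenvalue ratio as a planarity score. Function names (`voxelize`, `fit_plane`) and the planarity threshold are illustrative, not the authors' code.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Bin an (N, 3) point array into a voxel grid; returns a dict
    mapping integer voxel indices to lists of point indices."""
    idx = np.floor(points / voxel_size).astype(int)
    voxels = {}
    for i, key in enumerate(map(tuple, idx)):
        voxels.setdefault(key, []).append(i)
    return voxels

def fit_plane(points):
    """Least-squares plane fit via PCA. The plane normal is the
    eigenvector of the covariance matrix with the smallest
    eigenvalue; planarity ~ 0 indicates a nearly perfect plane."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    normal = eigvecs[:, 0]
    planarity = eigvals[0] / eigvals.sum()
    return centroid, normal, planarity
```

In a fuller pipeline, neighboring voxels with similar normals would be merged by region growing into the wall, floor, and ceiling patches that feed the labeling step.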
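The visibility reasoning used to distinguish occluded regions from true openings can likewise be sketched in a reduced form. This is an assumed 1-D simplification, not the paper's algorithm: for each cell on a recognized wall surface, the measured laser range along the cell's ray is compared with the known range to the wall plane. The tolerance value and labels are illustrative.

```python
import numpy as np

def classify_cells(measured_range, wall_range, tol=0.05):
    """Label each surface cell by comparing the measured range along
    its ray to the range of the wall plane:
      'surface'  - the laser hit the wall itself
      'occluded' - the ray stopped short (clutter in front of the wall)
      'opening'  - the ray passed through (candidate window/doorway)"""
    labels = np.empty(measured_range.shape, dtype=object)
    labels[np.abs(measured_range - wall_range) <= tol] = 'surface'
    labels[measured_range < wall_range - tol] = 'occluded'
    labels[measured_range > wall_range + tol] = 'opening'
    return labels
```

Fusing such labels across multiple scan locations, as the paper describes, would let a cell occluded from one viewpoint be resolved by another viewpoint that sees it directly.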