Multi-Modality Registration for Aligning Photographs, Videos, and LIDAR Range Scans

Date: 2015-11-05
Time: 10:00 am - 11:00 am
Location: Holmes Hall 389
Speaker: Brittany Morago, PhD, National Science Foundation Graduate Research Fellow
2D images and 3D LIDAR range scans provide very different but complementary information about a single subject and, when registered, can be used for a variety of exciting applications. Video sets can be fused with a 3D model and played back in a single multi-dimensional environment; imagery captured at different times can be visualized simultaneously, revealing changes in architecture, foliage, and human activity; depth information can be computed for 2D photos and videos; and real-world measurements can be provided to users through simple interactions with ordinary photographs.

Fusing multi-modality data is, however, a very challenging task: the visual properties of imagery change with the sensor that captured it, even when the same subject is depicted. Image sets collected over a period of time during which lighting conditions and scene content may have changed, different artistic renderings, and varying sensor types, focal lengths, and exposure values can all contribute to visual variations within a data set.

This presentation addresses these obstacles using the common theme of incorporating contextual information to capture regional properties that intuitively exist in each imagery source. We combine hard features, which quantify the strong, stable edges that often occur in imagery along object boundaries and depth changes, with soft features, which capture distinctive texture information that can be unique to specific areas. We show that our detector and descriptor techniques can provide more accurate keypoint match sets between highly varying imagery than many traditional and state-of-the-art techniques, allowing us to align and fuse photographs, videos, and range scans containing a combination of man-made and natural content.
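The idea of pairing a "hard" edge cue with a "soft" texture cue when matching keypoints can be sketched in a few lines. The following is a minimal illustrative example, not the speaker's actual method: it stands in gradient magnitude for the hard edge feature, a raw local patch for the soft texture descriptor, and a weighted sum of the two distances for the combined matching cost (all of these specific choices, including the function names and the weight `w`, are assumptions for illustration).

```python
import numpy as np

def hard_feature(img, y, x):
    """Edge strength at (y, x): gradient magnitude via central differences.
    An assumed stand-in for the talk's 'hard' edge features."""
    gy = img[y + 1, x] - img[y - 1, x]
    gx = img[y, x + 1] - img[y, x - 1]
    return float(np.hypot(gx, gy))

def soft_feature(img, y, x, r=2):
    """Texture descriptor at (y, x): the flattened (2r+1)x(2r+1) patch.
    An assumed stand-in for the talk's 'soft' texture features."""
    return img[y - r:y + r + 1, x - r:x + r + 1].ravel()

def match_keypoints(img_a, pts_a, img_b, pts_b, w=0.5):
    """Greedy nearest-neighbour matching on a combined cost:
    w * |edge-strength difference| + (1 - w) * patch distance."""
    matches = []
    for i, (ya, xa) in enumerate(pts_a):
        da, ha = soft_feature(img_a, ya, xa), hard_feature(img_a, ya, xa)
        best, best_cost = None, np.inf
        for j, (yb, xb) in enumerate(pts_b):
            db, hb = soft_feature(img_b, yb, xb), hard_feature(img_b, yb, xb)
            cost = w * abs(ha - hb) + (1 - w) * np.linalg.norm(da - db)
            if cost < best_cost:
                best, best_cost = j, cost
        matches.append((i, best))
    return matches

# Toy usage: matching an image against itself recovers the identity mapping.
rng = np.random.default_rng(0)
img = rng.random((10, 10))
pts = [(3, 3), (6, 6)]
print(match_keypoints(img, pts, img, pts))  # → [(0, 0), (1, 1)]
```

In a real pipeline the hard and soft cues would come from far more robust detectors and descriptors, and the resulting match set would feed a pose or homography estimator to align the 2D imagery with the 3D scan.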