Congrats to Dr. Chelhwon Kim!

Date: 
Friday, October 14, 2016

Chelhwon, a longtime member of the UCSC Computer Vision Lab, successfully defended his PhD thesis.

He is already hard at work at his new job at FXPAL. Congrats, Chelhwon! We will miss you!

Here is the abstract of Chelhwon's thesis:

Indoor Manhattan Spatial Layout Recovery From Monocular Videos

Traditional Structure-from-Motion (SfM) is difficult in indoor environments with only a few detectable point features. These environments, however, have other useful characteristics: they often contain several visible lines, and their layout typically conforms to a Manhattan world geometry.

In this thesis, I present a novel approach for structure and motion computation in a Manhattan layout from monocular videos. Unlike most SfM algorithms that rely on point feature matching, only line matches are considered in this work. This may be convenient in indoor environments characterized by extended textureless walls, where point features may be scarce. The proposed system relies on the notion of "characteristic lines", which are invariants of two views of the same parallel line pairs on a surface of known orientation. Finding coplanar sets of lines becomes a problem of clustering characteristic lines (CL), which can be accomplished using a modified mean shift procedure. The CL algorithm is fast, robust, and computationally light, and it produces good results in real-world situations.

The CL algorithm is extended to the case of multiple views for the analysis of videos from a monocular camera. I present a novel multi-view CL technique that looks for clusters of vectors formed by characteristic lines over multiple view pairs. This technique requires individual lines to be tracked across multiple views; an algorithm for reliable line matching between two frames leading to the formation of "line chains" across multiple frames is presented here. Cluster centers of multi-view characteristic lines represent estimates of the camera motion between any two views, normalized by the distance from a planar surface of the first camera location in the pair. This information is passed on to a modified "least unsquared deviations" (LUD) algorithm that computes the global camera motion. Finally, I introduce a new technique for planar fitting of the reconstructed lines that makes explicit use of the Manhattan world geometry.