Department of Computer Engineering
S E M I N A R
Improving Visual SLAM by Segmenting Out Moving Object Regions
Simultaneous Localization and Mapping (SLAM) for mobile robots has long been one of the challenging problems for the robotics community. Extensive study of this problem in recent years has largely consolidated its theoretical and practical foundations. As a result, in mobile robotics, maps of kilometer-long paths can be built with satisfactory precision in real time. However, this requires rather expensive sensors such as laser scanners and lidars. Moreover, the interpretation of data gathered from these sensors usually cannot go beyond simple geometric primitives. In contrast, Visual SLAM (VSLAM) uses relatively low-cost cameras to accomplish the same task. VSLAM also allows us to estimate a 3D model of the environment and the 6-DOF pose of the robot. Having been applied to robotics only recently, VSLAM still has a lot of room for improvement. In particular, a common weakness of both standard and visual SLAM algorithms is the assumption of a stationary environment; moving objects such as cars and pedestrians can substantially degrade their performance. In this study, we aim to perform VSLAM in dynamic environments while keeping mapping and localization accuracy high by segmenting out moving-object regions.
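The core idea of rejecting features that fall on moving objects can be illustrated with a minimal sketch. The snippet below is not the speaker's actual method: it assumes a toy grayscale image pair, uses simple frame differencing as a stand-in for the moving-object segmentation, and the `filter_static_keypoints` helper is hypothetical.

```python
import numpy as np

def moving_object_mask(prev_frame, curr_frame, threshold=25):
    """Binary mask of likely-moving pixels via naive frame differencing.
    (Illustrative stand-in; a real system would use a robust segmentation.)"""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def filter_static_keypoints(keypoints, mask):
    """Keep only keypoints lying outside the moving-object regions,
    so the SLAM back end maps only the static scene."""
    return [(r, c) for (r, c) in keypoints if not mask[r, c]]

# Toy example: an 8x8 scene where a bright "object" jumps two pixels right.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = np.zeros((8, 8), dtype=np.uint8)
prev[2:4, 2:4] = 200   # object at its old position
curr[2:4, 4:6] = 200   # object at its new position

mask = moving_object_mask(prev, curr)
keypoints = [(0, 0), (2, 3), (7, 7)]          # hypothetical detected features
static = filter_static_keypoints(keypoints, mask)
print(static)   # (2, 3) lies on the moving object and is discarded
```

Only the surviving static keypoints would then be fed into the pose-estimation and mapping stages, which is what keeps the stationary-world assumption approximately valid.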
DATE: 22 March, 2010, Monday @ 16:40
PLACE: EA 409