Bilkent University
Department of Computer Engineering
MS THESIS PRESENTATION

 

Perceived Disparity Refinement in Virtual Environments

 

Ufuk Çelikcan
MSc Student
Computer Engineering Department
Bilkent University

In recent years, significant progress has been made on controlling the perceived depth range in the post-production pipeline. In contrast to offline production, however, a virtual environment with a mobile camera still requires keeping the perceived depth within a range that is comfortable for the viewer. In a game environment, for instance, where the stereoscopic output changes dynamically based on user input, finding optimized stereoscopic camera parameters poses a significant challenge. To address these challenges of presenting a comfortable viewing setting, this work demonstrates several methods developed toward the goal of providing a better stereo 3D experience in virtual environments.

The first part presents an approach for controlling the two stereo camera parameters, camera convergence distance and interaxial separation, in interactive 3D environments in a way that specifically addresses the interplay between binocular depth perception and the saliency of scene contents. The proposed Dynamic Attention-Aware Disparity Control (DADC) method produces depth-rich stereo rendering that improves viewer comfort through joint optimization of the stereo parameters. The optimization model considers the importance of scene elements as well as their distance to the camera and the locus of attention on the display. The method also optimizes the depth effect of a given scene for the individual user's stereoscopic disparity range and maintains a comfortable viewing experience by controlling the accommodation/convergence conflict. The method is validated in a formal user study that also reveals its advantages, such as superior quality and practical relevance.
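To make the joint optimization concrete, the sketch below (illustrative only, not the thesis implementation) shows how a convergence distance and an interaxial separation might be chosen so that salient scene elements keep their on-screen disparities within a comfort band. The disparity model, scene elements, saliency weights, and comfort limits are all assumed values.

"""
Minimal sketch, assuming a shifted-sensor disparity model and hypothetical
scene data: jointly pick (convergence distance, interaxial separation) so that
salient elements stay inside an assumed on-screen disparity comfort band.
"""
import numpy as np
from itertools import product

# Hypothetical scene elements: distance to camera (m) and saliency weight.
elements = [
    {"depth": 2.0,  "saliency": 1.0},   # object the viewer attends to
    {"depth": 6.0,  "saliency": 0.4},
    {"depth": 15.0, "saliency": 0.1},
]

FOCAL = 0.035          # assumed focal length (m)
SENSOR_W = 0.036       # assumed sensor width (m)
SCREEN_W = 1.0         # assumed display width (m)
COMFORT_BAND = 0.02    # assumed comfortable on-screen disparity half-width (m)

def screen_disparity(depth, convergence, interaxial):
    """Shifted-sensor disparity model: zero at the convergence distance."""
    image_disp = FOCAL * interaxial * (1.0 / convergence - 1.0 / depth)
    return image_disp * SCREEN_W / SENSOR_W

def cost(convergence, interaxial):
    """Penalize salient elements whose disparity leaves the comfort band,
    and mildly reward overall depth richness (spread of disparities)."""
    disps = np.array([screen_disparity(e["depth"], convergence, interaxial)
                      for e in elements])
    sal = np.array([e["saliency"] for e in elements])
    excess = np.maximum(0.0, np.abs(disps) - COMFORT_BAND)
    discomfort = np.sum(sal * excess ** 2)
    richness = disps.max() - disps.min()
    return discomfort - 0.01 * richness

# Coarse grid search over the two stereo parameters (a continuous optimizer
# could replace this in practice).
best = min(product(np.linspace(1.0, 10.0, 50),     # convergence distance (m)
                   np.linspace(0.01, 0.08, 50)),   # interaxial separation (m)
           key=lambda p: cost(*p))
print("convergence = %.2f m, interaxial = %.3f m" % best)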

In the second part, a novel method is introduced for automatically adjusting the stereo camera parameters, now including the focal length in addition to the previous two, in a given 3D virtual scene, targeting the scenario in which the content developer and/or editor has already set the camera parameters of certain scene elements for a given perimeter and viewing-angle range. In a nutshell, the method computes the stereo camera parameters online by continuously scanning the scene, as the virtual camera moves about it, for changes in the number and relative distribution of scene elements as well as in the preset parameters. Taking these variables into account, the method produces camera parameters for the rest of the scene mainly through a radial basis function (RBF) interpolation-based approach. Since it works online, the framework also allows on-demand adjustment of camera parameters per scene element through an intuitively designed interface, so that the user can fine-tune the overall depth feel of the scene.
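As an illustration of the interpolation step, the following sketch (again illustrative rather than the thesis code) fits radial basis functions to a handful of hypothetical presets, each mapping a camera pose to (convergence distance, interaxial separation, focal length), and evaluates them at a new pose. The presets, the Gaussian kernel, and its width are assumed values.

"""
Minimal sketch, assuming Gaussian RBFs and hypothetical presets: interpolate
preset stereo parameters over camera pose so that any other pose receives
smoothly varying (convergence, interaxial, focal length) values.
"""
import numpy as np

# Hypothetical presets: camera position (x, z) and viewing angle (deg)
# mapped to (convergence distance, interaxial separation, focal length).
poses = np.array([[0.0,  0.0,   0.0],
                  [5.0,  2.0,  45.0],
                  [10.0, -3.0, 90.0],
                  [3.0,  8.0, 180.0]])
params = np.array([[4.0, 0.060, 0.035],
                   [6.0, 0.050, 0.040],
                   [8.0, 0.040, 0.050],
                   [5.0, 0.055, 0.038]])

def gaussian_kernel(a, b, eps=0.05):
    """Gaussian RBF values between two sets of camera poses."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

# Solve for RBF weights once, or whenever the presets or the distribution
# of scene elements change online.
weights = np.linalg.solve(gaussian_kernel(poses, poses), params)

def interpolate(pose):
    """Return (convergence, interaxial, focal length) for a new camera pose."""
    k = gaussian_kernel(np.atleast_2d(pose), poses)
    return (k @ weights)[0]

print(interpolate([4.0, 1.0, 30.0]))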

 

DATE: 22 January 2015, Thursday @ 14:00
PLACE: EA-409