WORKSHOP

Keynote Speakers


Six Not-So-Easy Pieces of Medical Simulation and Training

Cagatay Basdogan, Ph.D., College of Engineering, Koc University

Today, the operating room and the patient are the most common, and often the only, setting for hands-on medical training. Novice surgeons acquire skills by observing experienced surgeons in action and then progressively performing additional surgical procedures under varying degrees of supervision as their training advances and their skill levels increase. This so-called “see one, do one, teach one” paradigm has been the primary training method in medicine for more than 2,500 years. However, an extensive report prepared by the Institute of Medicine of the National Academy of Sciences in the USA shows that more people die from medical mistakes each year than from highway accidents, breast cancer, or AIDS. Hence, there is an immediate need for improved training methods in medicine. Virtual reality based medical simulation and training can be an alternative solution, but developing a simulator for medical training is a highly challenging task that requires expertise in many disciplines. In this talk, I will discuss the six "Not-So-Easy" pieces of this development and highlight the open research problems in each: (1) anatomical model and training scene generation, (2) measurement and characterization of soft organ tissues, (3) development of real-time soft tissue models, (4) simulation of surgical tool-soft tissue interactions, (5) system integration, and (6) assessment, validation, and training transfer.

Cagatay Basdogan is a faculty member in the Mechanical Engineering and Computational Sciences and Engineering programs of Koc University, Istanbul, Turkey. Before joining Koc University, he was a senior member of technical staff in the Information and Computer Science Division of the NASA Jet Propulsion Laboratory at the California Institute of Technology from 1999 to 2002. He moved to JPL from the Massachusetts Institute of Technology (MIT), where he was a research scientist and principal investigator at the MIT Research Laboratory of Electronics from 1996 to 1999. He received his Ph.D. from Southern Methodist University in 1994 and worked as a research scientist with MusculoGraphics Inc. at Northwestern University Research Park for two years before moving to Boston. Dr. Basdogan conducts interdisciplinary research in the areas of man-machine interfaces (in particular haptics), control systems, mechatronics, robotics, biomechanics, physics-based modeling and simulation, 3D computer graphics, and virtual reality technology.

Video-based Human Motion Capture: Challenges, Progress and Future Directions

Jinxiang Chai, Associate Professor, Department of Computer Science and Engineering, Texas A&M University

Motion capture technologies have enabled revolutionary progress in computer animation over the past decade. With detailed motion data and editing algorithms, we can directly transfer the expressive performance of a real person to a virtual character, interpolate existing data to produce new sequences, or compose simple motion clips into a rich repertoire of motor skills. In addition to computer graphics applications, motion capture technologies have enabled tremendous advances in computer vision, robotics, biomechanics, virtual reality, and natural user interaction.

However, current motion capture technologies are often restrictive, cumbersome, and expensive, and are therefore not accessible to most users. Video-based motion capture offers an appealing alternative because it requires no markers, sensors, or special suits. Researchers have been actively exploring video-based motion capture for many years and have made great advances. However, the results are often vulnerable to ambiguities in video data (e.g., significant occlusions), lighting changes, degeneracies in camera motion, and a lack of discernible features on the body and hands.

In this talk, I will describe our recent efforts on acquiring human motion with inexpensive cameras. First, I will show how to capture physically realistic 3D full-body performances (e.g., gymnastics) from a monocular video sequence taken by an ordinary video camera. In the second part of the talk, I will describe a fast, robust, automated method that accurately captures full-body motion data using a single depth camera. Lastly, I will present a novel motion capture method for acquiring physically realistic hand manipulation data from video. I will conclude by outlining possible future directions for human motion capture.

Jinxiang Chai is currently an associate professor in the Department of Computer Science and Engineering at Texas A&M University. He received his Ph.D. in robotics from the School of Computer Science at Carnegie Mellon University in 2006. His primary research is in the areas of graphics, animation, and vision, with broad applications in other disciplines such as robotics, human-computer interaction, biomechanics, and virtual and augmented reality. He is particularly interested in developing representations and efficient computational models that allow acquisition, analysis, understanding, simulation, and control of natural human movement. He draws on ideas from graphics, vision, machine learning, robotics, biomechanics, neuroscience, and applied mathematics. He received an NSF CAREER award for his work on the theory and practice of Bayesian human motion synthesis.