An Augmented Reality Environment for Interactive Crowd Simulation

Project Description

We propose a general-purpose system for interactive, real-time crowd simulation within an Augmented Reality (AR) environment. Users will be able to interact with virtual objects and autonomous characters modeled with computer-aided design tools. The behaviors of these objects and characters will be modeled and simulated in the virtual domain according to a specified scenario. Information on the positions, directions, and actions of the real users will be transferred to this virtual domain, where it will alter the behavior of the virtual models. Similarly, the states and actions of the computer-controlled autonomous characters and virtual objects will be streamed to the users' Augmented Reality visualization devices in real time.

An example of an AR environment that can be achieved with this system is given in Figures 1 and 2. In this example, two synthetic agents are simulated by the server. At each instant, the server sends the updated positions and behaviors of these two synthetic agents to the connected AR client. Having received the updates, the AR client software registers and renders these agents on the HMD device worn by the real user.
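The per-tick state update streamed from the server to each AR client can be sketched as follows. This is a minimal illustration, assuming a simple JSON message format; the record fields and names are hypothetical, not a finalized protocol.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentUpdate:
    """One synthetic agent's state for a single simulation tick.
    Field names are illustrative assumptions."""
    agent_id: int
    position: tuple   # (x, y, z) in the shared virtual-space frame
    heading: float    # yaw angle in radians
    action: str       # current behavior label, e.g. "walk"

def encode_frame(agents):
    """Serialize one simulation tick into a JSON message for AR clients."""
    return json.dumps({"agents": [asdict(a) for a in agents]})

# The two synthetic agents of the example system, encoded for one tick.
frame = encode_frame([
    AgentUpdate(1, (12.0, 0.0, 3.5), 1.57, "walk"),
    AgentUpdate(2, (14.2, 0.0, 4.1), 1.60, "walk"),
])
decoded = json.loads(frame)
```

The AR client would decode such a message each tick and re-register the listed agents before rendering them on the HMD.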


Figure 1: Example AR environment. Left: The real view of a user taking a walk on İstiklal Street in İstanbul. Right: The view the user sees through the AR goggles from the same position. In this second view, two synthetic agents belonging to a crowd are rendered; they navigate the same street naturally.


Figure 2: Example of the integration of client avatars in virtual space. The red sphere corresponds to the avatar of the client shown in Figure 1. The white pyramid represents the view frustum of this client's HMD device. The position of the avatar and the orientation of its view frustum are updated by the AR client regularly and transformed into the virtual-space frame. The arrows on the bottom left represent the coordinate frame of the virtual space. The two synthetic agents in the scenario take the avatar into consideration during path and behavior planning.
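The transformation of an avatar's reported pose into the virtual-space frame can be sketched as below. For illustration this is reduced to a planar rotation about the vertical axis plus a translation; a full system would use a 4x4 rigid transform, and the specific poses are made-up examples.

```python
import math

def to_virtual_frame(p_client, frame_origin, frame_yaw):
    """Map a 2D point from the AR client's local frame into the shared
    virtual-space frame: rotate by the frame's yaw, then translate to
    the frame's origin (a simplified planar sketch)."""
    x, y = p_client
    c, s = math.cos(frame_yaw), math.sin(frame_yaw)
    return (frame_origin[0] + c * x - s * y,
            frame_origin[1] + s * x + c * y)

# Avatar reported at (1, 0) in its local frame; the client frame sits at
# (10, 5) in virtual space, rotated 90 degrees.
vx, vy = to_virtual_frame((1.0, 0.0), (10.0, 5.0), math.pi / 2)
```

The same transform would be applied to the corners of the HMD view frustum so that the synthetic agents can test whether they are visible to the client.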

A framework will be developed to define simulation scenarios, specifically for crowd simulation. Example scenarios include urban warfare simulation for armed forces; training simulations for police and gendarme forces managing crowds in mass demonstrations; and training applications for emergency situations such as fire. Scenarios can also be defined to meet training needs in many fields of formal education. An example scenario is provided in Figure 3.


Figure 3: An emergency scenario for the efficient intervention of medical personnel to assist a civilian with a health condition. Autonomous characters are rendered in red and grey. Green characters represent real users (medical personnel) wearing AR goggles and equipment. The red character represents the autonomous character experiencing a serious health problem. The scenario takes place at a stadium exit at the end of a sports event. The autonomous crowd is rendered synthetically on the AR devices.
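A scenario such as the stadium emergency could be described to the framework as structured data. The schema below is purely hypothetical, a sketch of the kind of declarative scenario definition the framework might accept; none of the keys or file names are finalized.

```python
# Hypothetical scenario description; every key and value here is an
# illustrative assumption, not the framework's actual schema.
scenario = {
    "name": "stadium_emergency",
    "environment": "stadium_exit.obj",   # static geometry of the real space
    "agents": [
        {"role": "casualty",  "count": 1,   "behavior": "collapse"},
        {"role": "bystander", "count": 200, "behavior": "evacuate"},
    ],
    "real_users": [
        {"role": "medic", "equipment": "AR_HMD"},
    ],
    "goal": "medics reach the casualty within a time limit",
}

def total_synthetic_agents(s):
    """Number of autonomous characters the server must simulate."""
    return sum(a["count"] for a in s["agents"])
```

Declaring scenarios as data rather than code would let instructors author new training exercises without modifying the simulation engine.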

Compared to existing approaches, Augmented Reality has many advantages for presenting and reinforcing knowledge within a digital environment. Users are not restricted to stationary computers: current Augmented Reality hardware is portable and can be carried with ease. It enables users to communicate face to face and collaborate to achieve the goals of a scenario during a simulation. In Augmented Reality applications, synthetic images are merged with real images; thus, Augmented Reality reinforces learning through physical experience in the users' natural environment. Users interact with the environment through ordinary body motion, so prolonged use of Augmented Reality does not carry the health risks associated with inactivity.

The project will make use of stereoscopic head-mounted display (HMD) devices with integrated cameras and Augmented Reality support. Three-dimensional virtual objects will be rendered and laid on top of the real image sequences obtained by the HMD devices. The system will enable users to interact with other users, virtual objects, and autonomous characters. The positions and velocities of the real users, and the view frustums of their HMDs, will be tracked using the Global Positioning System (GPS), gyroscopes, and accelerometers. This information will then be transferred to the simulation domain and processed. The physical components of the system, involving the HMD and other AR devices (and device combinations), are presented in Figure 4.
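Fusing the gyroscope and an absolute heading reference (for example, the GPS track) into one orientation estimate can be sketched with a complementary filter. This is a minimal illustration of the sensor-fusion step, not the project's actual tracking pipeline; the blend factor is an assumed value.

```python
def complementary_filter(yaw, gyro_rate, heading_ref, dt, alpha=0.98):
    """One step of a complementary filter: integrate the gyroscope yaw
    rate over dt, then pull the estimate toward an absolute heading
    reference. alpha close to 1 trusts the smooth gyroscope short-term
    while the reference corrects long-term drift (assumed value)."""
    predicted = yaw + gyro_rate * dt
    return alpha * predicted + (1 - alpha) * heading_ref

# Starting at yaw 0, turning at 1 rad/s for one 10 ms step, with the
# absolute reference reading 0.1 rad.
yaw_est = complementary_filter(0.0, 1.0, 0.1, 0.01)
```

Each filtered pose would then be transformed into the virtual-space frame and broadcast to the simulation, as described for the avatar in Figure 2.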


Figure 4: Physical components of the proposed system.

Static entities within the real space (such as walls, buildings, etc.) will be modeled and simulated as well. Therefore, the actions of the autonomous agents and other virtual objects will be affected by these static entities.
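One way agents can take static entities into account during path planning is a visibility test against the modeled geometry. The sketch below samples a straight path segment against obstacles modeled as axis-aligned boxes, a deliberate simplification for illustration; a real planner would query the full static scene model.

```python
def path_blocked(start, goal, obstacles, steps=100):
    """Sample points along the straight segment from start to goal and
    report whether any point falls inside a static obstacle. Obstacles
    are axis-aligned boxes (xmin, ymin, xmax, ymax), a simplifying
    assumption for this sketch."""
    for i in range(steps + 1):
        t = i / steps
        x = start[0] + t * (goal[0] - start[0])
        y = start[1] + t * (goal[1] - start[1])
        if any(xmin <= x <= xmax and ymin <= y <= ymax
               for (xmin, ymin, xmax, ymax) in obstacles):
            return True
    return False

# A wall spanning x in [4, 5], y in [-1, 10].
wall = [(4.0, -1.0, 5.0, 10.0)]
blocked = path_blocked((0.0, 0.0), (10.0, 0.0), wall)   # crosses the wall
clear = path_blocked((0.0, 12.0), (10.0, 12.0), wall)   # passes above it
```

When such a test fails, the agent's planner would route around the entity, which is how the modeled walls and buildings shape the behavior of the synthetic crowd.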