On the Convergence of a Class of Multilevel Methods for Large, Sparse Markov Chains

This paper investigates the theory behind the steady-state analysis of large, sparse Markov chains (MCs) with a recently proposed class of multilevel (ML) methods using concepts from algebraic multigrid and iterative aggregation-disaggregation. The motivation is to better understand the convergence characteristics of this class of ML methods and to arrive at a clearer formulation that will aid their implementation. To this end, the restriction (or aggregation) and prolongation (or disaggregation) operators of multigrid are used, and the Kronecker-based approach for hierarchical Markovian models (HMMs) is employed, since it suggests a natural and compact definition of grids (or levels). However, the HMM formalism used to describe the class of ML methods for large, sparse MCs has no influence on the theoretical results derived.
Keywords: Markov chains, multilevel methods, multigrid, aggregation-disaggregation, Kronecker based numerical techniques.
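As a toy illustration of the aggregation-disaggregation idea underlying such ML methods, the sketch below runs two-level iterative aggregation-disaggregation sweeps on a small discrete-time chain. The 4-state chain, the 2-block partition, and the iteration count are illustrative assumptions, not material from the paper.

```python
import numpy as np

# Row-stochastic transition matrix of a hypothetical 4-state Markov chain.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.4, 0.5, 0.1, 0.0],
              [0.0, 0.1, 0.5, 0.4],
              [0.0, 0.0, 0.5, 0.5]])
blocks = [[0, 1], [2, 3]]          # aggregation of fine states into 2 coarse states

pi = np.full(4, 0.25)              # initial guess for the stationary vector
for _ in range(100):
    # Restriction: form the coarse (aggregated) transition matrix using
    # the current conditional probabilities within each block.
    A = np.zeros((2, 2))
    for I, bi in enumerate(blocks):
        w = pi[bi] / pi[bi].sum()
        for J, bj in enumerate(blocks):
            A[I, J] = w @ P[np.ix_(bi, bj)].sum(axis=1)
    # Solve the coarse stationary equations xi A = xi, sum(xi) = 1.
    vals, vecs = np.linalg.eig(A.T)
    xi = np.real(vecs[:, np.argmax(np.real(vals))])
    xi /= xi.sum()
    # Prolongation: rescale the within-block probabilities by the coarse solution.
    for I, bi in enumerate(blocks):
        pi[bi] = xi[I] * pi[bi] / pi[bi].sum()
    # One power-iteration smoothing step on the fine level.
    pi = pi @ P
    pi /= pi.sum()

# pi now approximates the stationary distribution of P.
```

The restriction step here plays the role of the aggregation operator and the rescaling step that of disaggregation; a full ML method would apply this recursively over a hierarchy of levels rather than just two.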
Conservative Occlusion Culling for Urban Visualization using a Slice-wise Data Structure

In this paper, we propose a new shrinking process for conservative from-point occlusion-culling algorithms and a data structure for the visualization of urban environments. The visible geometry in a typical urban walkthrough mainly consists of partially visible buildings. Occlusion-culling algorithms in which the granularity is based on buildings render these partially visible buildings completely. We observe that visibility in urban walkthroughs shows certain characteristics. The proposed slice-wise data structure exploits these characteristics by representing the buildings in terms of slices parallel to the coordinate axes. This forms the basis for occlusion culling where the occlusion granularity is at the slice level. The slice-wise data structure has minimal storage requirements, and with the help of a preprocessing stage, the visible parts can be accessed in constant time during navigation. We also propose to shrink the occluders in a scene, which is necessary for a conservative from-point occlusion-culling algorithm and can also be applied to general, nonconvex 3D occluders. Empirical results show that a 54% decrease in the number of processed polygons and a 46% speed-up in frame rate can be achieved by using the proposed conservative occlusion-culling algorithm rather than an occlusion-culling method whose granularity is based on individual buildings.
Keywords: octree, occluder shrinking, from-point occlusion culling, urban visualization, visibility preprocessing.
NOTE: This is a revised version of the Technical Report BU-CE-0415.
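The gain from slice-level granularity can be sketched with a toy example; the slice layout, polygon counts, and function names below are hypothetical stand-ins, not the paper's actual data structure.

```python
from dataclasses import dataclass

# A building is stored as slices along one coordinate axis; culling keeps
# only the slices intersecting the unoccluded region, instead of rendering
# the whole building because one slice of it is visible.

@dataclass
class Slice:
    xmin: float
    xmax: float
    polygons: int          # polygon count of this slice (stand-in for geometry)

def visible_slices(slices, occluded):
    """Return the slices not fully contained in the occluded x-interval [a, b]."""
    a, b = occluded
    return [s for s in slices if not (a <= s.xmin and s.xmax <= b)]

building = [Slice(0, 2, 120), Slice(2, 4, 80), Slice(4, 6, 95)]
kept = visible_slices(building, occluded=(1.5, 6.5))
# Only the first slice survives; a building-granularity culler would have
# rendered all 295 polygons because part of the building is visible.
```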
Motion Capture from Single Video Sequence (M. Sc. Thesis)

3D human pose reconstruction is a popular research area, since it can be used in various applications. Most current methods work only in constrained environments, where multiple camera views are available and the camera calibration is known, or where a single camera view is available but intensive user effort is required. However, most of the currently available data do not satisfy these constraints and thus cannot be processed by these algorithms. In this thesis, a framework is proposed to reconstruct the 3D pose of a human for animation from a sequence of single-view video frames. The framework starts with background estimation. Once the image background is estimated, the body silhouette is extracted in each frame by image subtraction. The body silhouettes are then automatically labeled using a model-based approach. Finally, the 3D pose is constructed from the labeled human silhouette by assuming orthographic projection. The proposed approach does not require camera calibration. The framework assumes that the input video has a static background, exhibits no significant perspective effects, and shows the performer in an upright position.
Keywords: motion capture, framework, single camera, uncalibrated camera, vision-based, animation.
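The silhouette-extraction step of such a pipeline (background subtraction by image differencing) can be sketched as follows; the threshold value and the tiny synthetic frames are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def extract_silhouette(frame, background, threshold=30):
    """Binary silhouette: pixels that differ from the estimated static
    background by more than the threshold."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

background = np.zeros((4, 4), dtype=np.uint8)  # estimated static background
frame = background.copy()
frame[1:3, 1:3] = 200                          # the "performer" enters the frame
mask = extract_silhouette(frame, background)
# mask is 1 exactly on the 2x2 patch where the frame differs from the background
```

In a real pipeline the mask would then be cleaned up (e.g. with morphological operations) before the model-based labeling step.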
Ghostware and Rootkit Detection Techniques for Windows (M. Sc. Thesis)

Spyware is a significant problem for most computer users. In public usage, the term spyware carries the same meaning as adware, a kind of malicious software used for showing advertisements to the user against his or her will. Spyware programs are also known for their tendency to hide their presence, but their stealth techniques used to be either nonexistent or relatively primitive in terms of effectiveness, which made spyware easily detectable with simple file-scanning and registry-scanning techniques. Newer spyware programs have merged with rootkits and gained advanced stealth abilities. In this work, we focus on this important subclass of spyware, namely ghostware. Ghostware programs hide their resources from the operating system Application Programming Interfaces (APIs) that were designed to query and enumerate them. We enumerated these hiding techniques, studied stealth-detection methodologies, and investigated the effectiveness of the hiding techniques against popular anti-virus and anti-spyware programs, together with publicly available ghostware-detection and rootkit-detection tools. The results show that anti-virus and anti-spyware programs are not effective at detecting or removing ghostware applications. Hidden-object and rootkit detection tools can be useful; however, these tools only work after the computer is infected, and they provide no means of removing the ghostware. As a result, our work shows the need for understanding the potential dangers and applications of ghostware and for implementing new detection and prevention tools.
Keywords: spyware, ghostware, rootkit, stealth, detection.
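Many hidden-object detection tools rely on a cross-view comparison: enumerate a resource through the high-level API and through a lower-level view, and flag entries the high-level view omits. The toy sketch below hardcodes the two listings; the process names are made up for illustration.

```python
# High-level view: what the standard enumeration API reports.
api_view = {"explorer.exe", "svchost.exe"}

# Low-level view: what a raw scan of kernel structures or the disk reports.
low_level_view = {"explorer.exe", "svchost.exe", "ghost.exe"}

# Entries present only in the low-level view are candidates for hidden objects.
hidden = low_level_view - api_view
```

This is only the detection half of the problem; as noted above, such tools work after infection and do not by themselves remove the hidden software.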
Models and Algorithms for Parallel Text Retrieval (Ph. D. Thesis)

In the last decade, search engines have become an integral part of our lives. The current state of the art in search-engine technology relies on parallel text retrieval. Basically, a parallel text retrieval system is composed of three components: a crawler, an indexer, and a query processor. The crawler component locates, fetches, and stores Web pages in a local document repository. The indexer component converts the stored, unstructured text into a queryable form, most often an inverted index. Finally, the query-processing component performs the search over the indexed content. In this thesis, we present models and algorithms for efficient Web crawling and query processing. First, for parallel Web crawling, we propose a hybrid model that aims to minimize the communication overhead among the processors while balancing the number of page-download requests and the storage loads of the processors. Second, we propose models for document- and term-based inverted index partitioning. In the document-based partitioning model, the number of disk accesses incurred during query processing is minimized while the posting storage is balanced. In the term-based partitioning model, the total amount of communication is minimized while, again, the posting storage is balanced. Finally, we develop and evaluate a large number of algorithms for query processing in ranking-based text retrieval systems. We test the proposed algorithms on our experimental parallel text retrieval system, Skynet, currently running on a 48-node PC cluster. In the thesis, we also discuss the design and implementation details of another, somewhat untraditional, grid-enabled search engine, SE4SEE. Among our practical work, we present the Harbinger text classification system, used in SE4SEE for Web page classification, and the K-PaToH hypergraph partitioning toolkit, to be used in the proposed models.
Keywords: search engine, parallel text retrieval, Web crawling, inverted index partitioning, query processing, text classification, hypergraph partitioning.
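The difference between the two partitioning views can be sketched on toy data; the four documents and the greedy balancing heuristic below are illustrative assumptions, not the thesis's hypergraph-based models.

```python
from collections import defaultdict

docs = {0: "sparse markov chains", 1: "parallel text retrieval",
        2: "markov chains again", 3: "parallel web crawling"}

# Inverted index: each term maps to the posting list of documents containing it.
# Term-based partitioning would assign whole posting lists to processors.
index = defaultdict(list)
for doc_id, text in docs.items():
    for term in set(text.split()):
        index[term].append(doc_id)

# Document-based partitioning for 2 processors: greedily assign whole
# documents so that the number of stored postings stays balanced.
loads = [0, 0]
assignment = {}
for doc_id, text in sorted(docs.items(), key=lambda d: -len(set(d[1].split()))):
    p = loads.index(min(loads))     # processor with the lightest posting load
    assignment[doc_id] = p
    loads[p] += len(set(text.split()))
# Each processor now holds half of the postings.
```

A greedy heuristic like this balances storage but ignores the query-time disk-access and communication costs that the thesis's models minimize explicitly.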