Recently Completed Ph.D. Theses

In Reverse Chronological Order

BU-CE-1204: PDF

Visual Attention Models and Applications to 3D Computer Graphics

Author: Muhammed Abdullah Bülbül
Supervisor: Tolga Çapın

With increasing technological and computational opportunities, 3D computer graphics has advanced to the point where very realistic computer-generated scenes can be rendered in real time for games and other interactive environments. However, we cannot claim that computer graphics research has reached its limits. Rendering photo-realistic scenes still cannot be achieved in real time, and improving visual quality while decreasing computational costs remains a research area of great interest.

Recent efforts in computer graphics have been directed towards exploiting principles of human visual perception to increase the visual quality of rendering. This is natural, since in computer graphics the main source of evaluation is the judgment of people, which is based on their perception. In this thesis, our aim is to extend the use of perceptual principles in computer graphics. Our contribution is two-fold: first, we present several models to determine the visually important (salient) regions in a 3D scene; second, we contribute to the definition and use of saliency metrics in computer graphics.

Human visual attention is composed of two components: the first is stimulus-driven, bottom-up visual attention, and the second is task-oriented, top-down visual attention. The main difference between these components is the role of the viewer: in the top-down component, the viewer's intention and task affect perception of the visual scene, as opposed to the bottom-up component. We mostly investigate the bottom-up component, where saliency resides.

We define saliency computation metrics for two types of graphical content. Our first metric is applicable to 3D mesh models, possibly animating, and extracts a saliency value for each vertex of the mesh. The second metric is applicable to animating objects and identifies objects that are visually important due to their motion behaviours. In a third model, we show how to adapt the second metric to animated 3D meshes.
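
The thesis's own metrics are not reproduced here, but the flavor of a per-vertex saliency computation can be conveyed by a minimal center-surround sketch (assuming per-vertex mean curvature and a vertex adjacency list are already available; the function and its parameters are illustrative, not the thesis's actual formulation):

```python
import numpy as np

def vertex_saliency(curvature, neighbors, fine=1, coarse=3):
    """Center-surround saliency per vertex: |curvature smoothed over a small
    neighborhood - curvature smoothed over a larger one|, so vertices that
    differ from their surround stand out.

    curvature: (V,) per-vertex mean curvature, assumed precomputed.
    neighbors: list of neighbor-index lists (mesh adjacency)."""
    def smooth(values, rings):
        out = np.asarray(values, dtype=float)
        for _ in range(rings):  # repeated 1-ring averaging widens the support
            out = np.array([(out[i] + out[nbrs].sum()) / (1 + len(nbrs))
                            for i, nbrs in enumerate(neighbors)])
        return out
    return np.abs(smooth(curvature, fine) - smooth(curvature, coarse))
```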

Along with the saliency metrics, we also present possible application areas and a perceptual method to accelerate stereoscopic rendering, which is based on binocular vision principles and makes use of saliency information in a stereoscopically rendered scene.

Each of the proposed models is evaluated with formal experiments. The saliency metrics are evaluated via eye-tracker-based experiments, and the computationally salient regions are found to attract more attention in practice as well. For the stereoscopic optimization part, we have performed a detailed experiment and verified our optimization model.

In conclusion, this thesis extends the use of human visual system principles in 3D computer graphics, especially in terms of saliency.

Keywords: Computer Graphics, Visual Perception, Saliency, Visual Attention, Binocular Vision, Stereoscopy, Motion Perception.

BU-CE-1106: PDF

Caching Techniques for Large Scale Web Search Engines

Author: Rıfat Özcan
Supervisor: Özgür Ulusoy

Large scale search engines have to cope with an increasing volume of web content and an increasing number of query requests each day. Caching of query results is one of the crucial methods that can increase the throughput of the system. In this thesis, we propose a variety of methods to increase the efficiency of caching for search engines.

We first provide cost-aware policies for both static and dynamic query result caches. We show that queries have significantly varying costs, and that the processing cost of a query is not proportional to its frequency. Based on this observation, we develop caching policies that take the query cost into consideration in addition to frequency when deciding which items to cache. Second, we propose a query-intent-aware caching scheme in which navigational queries are identified and cached differently from other queries. Query results are cached and presented in terms of pages, each of which typically contains 10 results. In navigational queries, the aim is to reach a particular web site, which would typically be listed at the top ranks by the search engine, if found. We argue that caching and presenting the results of navigational queries in this 10-per-page manner is not cost effective, and thus we propose alternative result presentation models and investigate the effect of these models on caching performance. Third, we propose a cluster-based storage model for query results in a static cache. Queries with common result documents are clustered using a single-link clustering algorithm, and we provide a compact storage model for those clusters by exploiting the overlap in query results. Finally, a five-level static cache that consists of all cacheable data items (query results, parts of the index, and document contents) in a search engine setting is presented. A greedy method is developed to determine which items to cache. This method prioritizes items for caching based on gains computed from the items' past frequencies, estimated costs, and storage overheads. The approach also considers the inter-dependency between items, in that caching an item may affect the gain of items that are not cached yet.
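
As a toy rendering of the first contribution, a cost-aware eviction policy can weigh recomputation cost alongside frequency when choosing victims. The sketch below is an illustrative assumption (the class name and the frequency-times-cost priority are ours, not the thesis's exact policy): it evicts the cached query with the smallest frequency * cost product, so cheap-to-recompute results leave the cache before expensive ones.

```python
import heapq

class CostAwareCache:
    """LFU-style cache whose eviction priority weighs query cost, not just
    frequency: victim = cached query with the smallest freq * cost product."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}      # query -> (result, freq, cost)
        self.heap = []       # (freq * cost, version, query), lazy deletion
        self.version = {}

    def get(self, query):
        if query not in self.store:
            return None
        result, freq, cost = self.store[query]
        self.store[query] = (result, freq + 1, cost)
        self._push(query)
        return result

    def put(self, query, result, cost):
        if query not in self.store and len(self.store) >= self.capacity:
            self._evict()
        freq = self.store[query][1] + 1 if query in self.store else 1
        self.store[query] = (result, freq, cost)
        self._push(query)

    def _push(self, query):
        _, freq, cost = self.store[query]
        v = self.version.get(query, 0) + 1
        self.version[query] = v
        heapq.heappush(self.heap, (freq * cost, v, query))

    def _evict(self):
        while self.heap:  # skip stale entries until a live victim is found
            _, v, query = heapq.heappop(self.heap)
            if query in self.store and self.version.get(query) == v:
                del self.store[query]
                return
```

Under such a policy, a rare but expensive query survives longer than a frequent but cheap one, which is the effect cost-aware caching aims for.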

We experimentally evaluate all our methods using a real query log and document collections. We provide comparisons to corresponding baseline methods in the literature and we present improvements in terms of throughput, number of cache misses, and storage overhead of query results.

Keywords: Search engine, caching techniques, cost-aware caching, navigational queries.

BU-CE-1013: PDF

Particle Based Modeling and Simulation of Natural Phenomena

Author: Serkan Bayraktar
Supervisors: Bülent Özgüç and Uğur Güdükbay

This thesis is about modeling and simulation of fluids and cloth-like deformable objects under the physically-based simulation paradigm. Simulated objects are modeled with particles, and their interactions with each other and the environment are defined by particle-to-particle forces. We propose several improvements over existing particle simulation techniques. Neighbor search algorithms are crucial for the efficiency and robustness of a particle system. We present a sorting-based neighbor search method which operates on a uniform grid and is parallelizable. We improve upon existing fluid surface generation methods so that surface details are captured better, since we consider the position of fluid particles relative to the fluid surface. We investigate several alternative particle interaction schemes (i.e., Smoothed Particle Hydrodynamics, the Discrete Element Method, and the Lennard-Jones potential) for the purpose of defining fluid-fluid, fluid-cloth, and fluid-boundary interaction forces. We also propose a practical way to simulate knitwear and its interaction with fluids, employing capillary pressure-based forces to simulate the absorption of fluid particles by knitwear. We also propose a method to simulate the flow of miscible fluids. Our particle simulation system is implemented to exploit the parallel computing capabilities of commodity computers; specifically, we implemented the proposed methods on multicore CPUs and programmable graphics boards. The experiments show that our method is computationally efficient and produces realistic results.
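
A minimal sketch of the sorting-based grid idea follows (illustrative Python only; the thesis's implementation targets multicore CPUs and GPUs, and the key packing below assumes shifted cell coordinates fit in 21 bits per axis):

```python
import numpy as np

def build_grid(positions, h):
    """Sorting-based neighbor search on a uniform grid (cell edge = support
    radius h): particles are sorted by a packed cell key, so each cell's
    members are contiguous -- both the sort and the lookups parallelize well."""
    cells = np.floor(positions / h).astype(np.int64) + (1 << 20)  # keep keys positive
    keys = (cells[:, 0] << 42) | (cells[:, 1] << 21) | cells[:, 2]
    order = np.argsort(keys)
    uniq, starts = np.unique(keys[order], return_index=True)
    ends = np.append(starts[1:], len(keys))
    ranges = {int(k): (int(s), int(e)) for k, s, e in zip(uniq, starts, ends)}
    return order, ranges

def neighbors(i, positions, h, order, ranges):
    """Candidates for particle i come from the 27 cells around it; a distance
    test then keeps the true neighbors within radius h."""
    base = np.floor(positions[i] / h).astype(np.int64) + (1 << 20)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                key = int(((base[0] + dx) << 42) | ((base[1] + dy) << 21) | (base[2] + dz))
                s, e = ranges.get(key, (0, 0))
                for j in order[s:e]:
                    if j != i and np.linalg.norm(positions[j] - positions[i]) < h:
                        found.append(int(j))
    return found
```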

Keywords: Physically-based simulation, particle system, fluid simulation, neighbor search algorithms, cloth animation, free fluid surface rendering, boundary conditions, mass-spring systems, shared memory parallel computing, GPU, CUDA.

BU-CE-1011: PDF

BilVideo-7: Video Parsing, Indexing and Retrieval

Author: Muhammet Baştan
Supervisors: Uğur Güdükbay and Özgür Ulusoy

Video indexing and retrieval aims to provide fast, natural, and intuitive access to large video collections. This is becoming ever more important as the amount of video data increases at a staggering rate. This thesis introduces the BilVideo-7 system to address the issues related to video parsing, indexing, and retrieval.

BilVideo-7 is a distributed, MPEG-7 compatible video indexing and retrieval system that supports complex multimodal queries in a unified framework. The video data model is based on an MPEG-7 profile designed to represent videos by decomposing them into Shots, Keyframes, Still Regions and Moving Regions. MPEG-7 compatible XML representations of videos according to this profile are obtained by BilVideo-7's MPEG-7 compatible video feature extraction and annotation tool, and stored in a native XML database. Users can formulate text, color, texture, shape, location, motion and spatio-temporal queries on an intuitive, easy-to-use visual query interface. Its composite query interface can be used to formulate very complex queries that contain any type and number of video segments with their descriptors, and that specify the spatio-temporal relations between them. The multi-threaded query processing server parses incoming queries into subqueries and executes each subquery in a separate thread. It then fuses subquery results in a bottom-up manner to obtain the final query result and sends the result to the originating client. The system as a whole is unique in that it provides very powerful querying capabilities, with a wide range of descriptors and multimodal query processing, in an MPEG-7 compatible, interoperable environment.
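
The subquery-per-thread pipeline can be sketched as follows (a hedged illustration: run_subquery and the weighted-sum fusion are stand-ins for BilVideo-7's actual descriptor matching and bottom-up fusion):

```python
from concurrent.futures import ThreadPoolExecutor

def fuse(partials, weights):
    """Bottom-up fusion: weighted sum of subquery similarities per segment."""
    scores = {}
    for partial, w in zip(partials, weights):
        for segment, s in partial.items():
            scores[segment] = scores.get(segment, 0.0) + w * s
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def process_query(subqueries, run_subquery, weights):
    """Execute each subquery in its own thread, then fuse the partial results.

    run_subquery is assumed to map one subquery (say, a color or motion
    descriptor match) to a {segment_id: similarity} dict."""
    with ThreadPoolExecutor(max_workers=len(subqueries)) as pool:
        partials = list(pool.map(run_subquery, subqueries))
    return fuse(partials, weights)
```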

Keywords: MPEG-7, video processing, video indexing, video retrieval, multimodal query processing

BU-CE-1005: PDF

From Audiences to Mobs: Crowd Simulation with Psychological Factors

Author: Funda Durupınar
Supervisor: Uğur Güdükbay

Crowd simulation has a wide range of application areas such as biological and social modeling, military simulations, computer games, and movies. Simulating the behavior of animated virtual crowds has been a challenging task for the computer graphics community. As well as the physical and geometrical aspects, the semantics underlying the motion of real crowds inspire the design and implementation of virtual crowds. Psychology helps us understand the motivations of the individuals constituting a crowd. There has been extensive research on incorporating psychological models into the simulation of autonomous agents. However, in our study, instead of the psychological state of an individual agent as such, we are interested in the overall behavior of a crowd that consists of virtual humans with various psychological states. For this purpose, we incorporate the three basic constituents of affect: personality, emotion, and mood. Each of these elements contributes variably to the emergence of different aspects of behavior. We thus examine, by changing the parameters, how groups of people with different characteristics interact with each other and, accordingly, how the global crowd behavior is influenced.

In the social psychology literature, crowds are classified as mobs and audiences. Audiences are passive crowds, whereas mobs are active crowds with emotional, irrational, and seemingly homogeneous behavior. In this thesis, we examine how audiences turn into mobs and simulate the common properties of mobs to create collective misbehavior. Among all types of mobs, crowd simulation research has so far focused on panicking crowds. We extend the state of the art by simulating different types of mobs based on this taxonomy, and we demonstrate various scenarios that realize the behavior of distinct mob types.

Our model is built on top of an existing crowd simulation system, HiDAC (High-Density Autonomous Crowds). HiDAC provides us with the physical and low-level psychological features of crowds. The user normally sets these parameters to model the non-uniformity and diversity of the crowd. In our work, we free the user from the tedious task of low-level parameter tuning and combine all these behaviors in distinct psychological factors. We present the results of our experiments on whether the incorporation of a personality model into HiDAC was perceived as intended.

Keywords: Crowd simulation, autonomous agents, simulation of affect, crowd taxonomy, mob behavior.

BU-CE-1002: PDF

Causality Analysis in Biological Networks

Author: Özgün Babür
Supervisor: Uğur Doğrusöz

Systems biology is a rapidly emerging field, shaped in the last two decades or so, which promises to help us understand and cure complex diseases such as cancer. In order to gain insight into the system - specifically, the molecular network in the cell - we need to work on the following four fundamental aspects: experimental and computational methods to gather knowledge about the system, mathematical models for representing the knowledge, analysis methods for answering questions on the model, and software tools for working on these. In this thesis, we propose new approaches related to all of these aspects.

We define new terms and concepts that help us analyze cellular processes, such as positive and negative paths, upstream and downstream relations, and distance in process graphs. We propose algorithms that search for functional relations between molecules and answer several biologically interesting questions related to the network, such as neighborhoods, paths of interest, and common targets or regulators of molecules.

In addition, we introduce ChiBE, a pathway editor for visualizing and analyzing BioPAX networks. The tool converts BioPAX graphs to drawable process diagrams and provides the aforementioned novel analysis algorithms. Users can query pathways in the Pathway Commons database and create sub-networks that focus on specific relations of interest.

We also describe a microarray data analysis component, PATIKAmad, built into ChiBE and PATIKAweb, which integrates expression experiment data with networks. PATIKAmad enables those tools to represent experiment values on network elements and to search for causal relations in the network that potentially explain dependent expressions. Causative path search depends on the presence of transcriptional relations in the model, which, however, are underrepresented in most databases, mainly due to insufficient knowledge in the literature.

We finally propose a method for identifying and classifying modulators of transcription factors, to help complete the missing transcriptional relations in pathway databases. The method works with large amounts of expression data and looks for evidence of modulation in triplets of genes, i.e., modulator - factor - target. Modulator candidates are chosen among the interacting proteins of transcription factors. We expect to observe that the expression of the target gene depends on the interaction between the factor and the modulator, and according to the observed dependency type, we further classify the modulation. When tested, our method finds modulators of the Androgen Receptor; our top-scoring modulators are supported by other evidence in the literature. We also observe that the modulation event and modulation type depend highly on the specific target gene. This finding contradicts the expectation, common in the molecular biology community, that a modulator has one type of effect regardless of the target gene.

Keywords: Computational biology, bioinformatics, systems biology, pathway informatics, causality analysis.

BU-CE-0911: PDF

Threshold Cryptography with Chinese Remainder Theorem

Author: Kamer Kaya
Supervisor: A. Aydın Selçuk

Information security has become much more important since electronic communication started to be used in our daily lives. What the term information security covers varies according to the type and requirements of the area. However, no matter which algorithms are used, security depends on the secrecy of a key that is supposed to be known only by the intended agents in the first place.

The requirement that the key be secret brings several problems. Storing a secret key with only one person, server, or database reduces the security of the system to the security and credibility of that agent. Besides, not having a backup of the key introduces the problem of losing the key if a software or hardware failure occurs. On the other hand, if the key is held by more than one agent, an adversary seeking the key has more flexibility in choosing a target; hence, security is reduced to that of the least secure or least credible of these agents.

Secret sharing schemes were introduced to solve the problems above. The main idea of these schemes is to share the secret among the agents such that only predefined coalitions can come together and reveal the secret, while no other coalition can obtain any information about it. Thus, keys used in areas requiring vital secrecy, such as large-scale finance applications and command-control mechanisms of nuclear systems, can be stored by using secret sharing schemes.

Threshold cryptography deals with a particular type of secret sharing scheme. In the secret sharing schemes related to threshold cryptography, a coalition whose size exceeds a bound t can reveal the key, while smaller coalitions can obtain no information about it. Indeed, the first secret sharing scheme in the literature is the threshold scheme of Shamir, who considered the secret as the constant term of a polynomial of degree t - 1 and distributed points on the polynomial to the group of users. Thus, a coalition of size t can recover the polynomial and reveal the key, but a smaller coalition cannot. This scheme is widely accepted by researchers and used in several applications. Shamir's secret sharing scheme is not the only one in the literature: almost concurrently, Blakley proposed another secret sharing scheme depending on planar geometry, and Asmuth and Bloom proposed a scheme depending on the Chinese Remainder Theorem. Although these schemes satisfy the necessary and sufficient conditions for security, they have not been considered for applications requiring a secret sharing scheme.
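
Shamir's scheme, as described above, is compact enough to sketch directly (a minimal illustration over a fixed prime field; not code from the thesis):

```python
import random

PRIME = 2**127 - 1  # all arithmetic over GF(p); the secret must be < PRIME

def share(secret, t, n):
    """Shamir (t, n): the secret is the constant term of a random polynomial
    of degree t - 1; each user receives one point (x, f(x)) on it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 from any t points recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = share(42, t=3, n=5)
assert reconstruct(shares[:3]) == 42   # any 3 of the 5 shares suffice
```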

Secret sharing schemes constitute a building block in several applications other than those mentioned above. These applications typically contain a standard problem in the literature, the function sharing problem. In a function sharing scheme, each user has its own secret as an input to a function, and the scheme computes the outcome of the function without revealing the secrets. In the literature, the encryption and signature functions of public key algorithms like RSA, ElGamal, and Paillier can be given as examples of functions shared by using a secret sharing scheme. Even new-generation applications like electronic voting require a function sharing scheme. As mentioned before, Shamir's secret sharing scheme has attracted much of the attention in the literature, and other schemes have not been considered much. However, as this thesis shows, secret sharing schemes depending on the Chinese Remainder Theorem can be practically used in these applications. Since each application has different needs, Shamir's secret sharing scheme is used in applications with several extensions. Basically, this thesis investigates how to adapt Chinese Remainder Theorem based secret sharing schemes to the applications in the literature. We first propose some modifications to the Asmuth-Bloom secret sharing scheme, and then, using this modified scheme, we design provably secure function sharing schemes and security extensions.
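
For contrast, a toy Asmuth-Bloom-style (t, n) sharing based on the Chinese Remainder Theorem - the family of schemes this thesis adapts - might look like the following sketch, with hand-picked moduli satisfying the Asmuth-Bloom condition (all parameters are illustrative, and the thesis's modifications are not reflected here):

```python
import random

# Toy (t=3, n=5) instance with hand-picked pairwise-coprime moduli.
# Asmuth-Bloom condition: the product of the t smallest share moduli must
# exceed M0 times the product of the (t-1) largest, so any t shares
# determine y uniquely while t-1 shares leave the secret hidden.
M0 = 5                          # secret space: 0 <= s < M0
MODULI = [11, 13, 17, 19, 23]
T = 3
assert 11 * 13 * 17 > M0 * 19 * 23

def share(s):
    bound = 11 * 13 * 17        # y must stay below the smallest t-fold product
    a = random.randrange((bound - s) // M0)
    y = s + a * M0
    return [(m, y % m) for m in MODULI]

def reconstruct(shares):
    """Incremental CRT over t shares recovers y; the secret is y mod M0."""
    y, m_all = 0, 1
    for m, r in shares[:T]:
        g = pow(m_all, -1, m)             # inverse of the running modulus mod m
        y += m_all * ((r - y) * g % m)    # lift y to satisfy y = r (mod m)
        m_all *= m
    return y % M0

s = 3
assert reconstruct(share(s)[1:4]) == s    # any 3 of the 5 shares suffice
```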

Keywords: Threshold cryptography, secret sharing, function sharing, Asmuth-Bloom, Chinese Remainder Theorem, provable security.

BU-CE-0910: PDF

A Scenario-based Query Processing Framework for Video Surveillance

Author: Ediz Şaykol
Supervisors: Uğur Güdükbay and Özgür Ulusoy

Video surveillance has become one of the most interesting and challenging application areas in the video processing domain. Automated access to the semantic content of surveillance videos to detect anomalies is among the basic tasks; however, due to the high variability of the visual features and the large size of the video input, it remains a challenging issue. A considerable amount of research dealing with automated access to video surveillance has appeared in the literature; however, significant semantic gaps in event models and content-based access remain. In this thesis, we propose a scenario-based query processing framework for video surveillance archives, especially for indoor environments. A scenario is specified as a sequence of event predicates that can be enriched with object-based low-level features and directional predicates. We also propose a keyframe labeling technique, which assigns labels to keyframes extracted by a keyframe detection algorithm, and hence transforms the video input into an event-sequence-based representation. The keyframe detection scheme relies on an inverted tracking scheme, which is a view-based representation of the actual content by an inverted index. Using this event sequence representation, we devise mechanisms based on finite state automata to detect a typical set of anomalous events in the scene, which are also used for metadata extraction. Our query processing framework also supports inverse querying and view-based querying for after-the-fact activity analysis, since the inverted tracking scheme effectively tracks the moving objects and enables view-based addressing of the scene. We propose a specific surveillance query language to express the supported query types in a scenario-based manner, and we present a visual query specification interface devised to ease the query specification process. Performance experiments show that the keyframe labeling algorithm significantly reduces the storage space and the input size while yielding reasonable anomaly detection performance. We have also conducted performance experiments showing that our query processing technique has effective expressive power and satisfactory retrieval accuracy in video surveillance.
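
A finite state automaton over keyframe labels, as used for anomaly detection here, can be illustrated with a toy example (the labels and the "unattended object" pattern are hypothetical, not the thesis's event vocabulary):

```python
def make_fsa(transitions, start, accepting):
    """Tiny deterministic FSA over keyframe labels; reaching an accepting
    state flags the corresponding anomalous event."""
    def run(labels):
        state = start
        for lab in labels:
            state = transitions.get((state, lab), state)  # ignore other labels
            if state in accepting:
                return True
        return False
    return run

# Hypothetical "unattended object" pattern: a person enters carrying an
# object, drops it, and exits without picking it back up.
detect_unattended = make_fsa(
    transitions={
        ("idle", "enter_with_object"): "carrying",
        ("carrying", "drop_object"): "dropped",
        ("dropped", "pick_object"): "carrying",
        ("dropped", "exit_without_object"): "alarm",
    },
    start="idle",
    accepting={"alarm"},
)

assert detect_unattended(["enter_with_object", "drop_object", "exit_without_object"])
```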

Keywords: video surveillance, scenario-based querying, keyframe labeling, inverse querying, view-based querying, anomaly detection, after-the-fact analysis.

BU-CE-0908: PDF

Improving the Efficiency of Search Engines: Strategies for Focused Crawling, Searching, and Index Pruning

Author: İsmail Sengör Altıngövde
Supervisor: Özgür Ulusoy

Search engines are the primary means of retrieval for the text data that is abundantly available on the Web. A standard search engine must carry out three fundamental tasks, namely crawling the Web, indexing the crawled content, and finally processing the queries using the index. Devising efficient methods for these tasks is an important research topic. In this thesis, we introduce efficient strategies related to all three tasks involved in a search engine. Most of the proposed strategies are applicable when a grouping of documents, in its broadest sense (i.e., in terms of automatically obtained classes/clusters, or manually edited categories), is readily available or can be constructed in a feasible manner. Additionally, we introduce static index pruning strategies that are based on query views.

For the crawling task, we propose a rule-based focused crawling strategy that exploits interclass rules among the document classes in a topic taxonomy. These rules capture the probability of having hyperlinks between two classes. The rule-based crawler can tunnel toward on-topic pages by following a path of off-topic pages, and thus yields a higher harvest rate for crawling on-topic pages.

In the context of the indexing and query processing tasks, we concentrate on conducting efficient search, again using document groups, i.e., clusters or categories. In typical cluster-based retrieval (CBR), first, the clusters that are most similar to a given free-text query are determined, and then documents from these clusters are selected to form the final ranked output. For efficient CBR, we first identify and evaluate some alternative query processing strategies. Next, we introduce a new index organization, the so-called cluster-skipping inverted index structure (CS-IIS). It is shown that typical CBR with CS-IIS outperforms previous CBR strategies (with an ordinary index) for a number of datasets and under varying search parameters. In this thesis, an enhanced version of CS-IIS is further proposed, in which all information needed to compute query-cluster similarities during query evaluation is stored. We introduce an incremental-CBR strategy that operates on top of this latter index structure and demonstrate its search efficiency for different scenarios.
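
The cluster-skipping idea can be conveyed with a toy in-memory layout (an illustrative stand-in for CS-IIS, not its on-disk format): postings are grouped by cluster behind a small header, so evaluation can jump over whole groups belonging to non-selected clusters.

```python
def build_cs_postings(term_postings, cluster_of):
    """Group each term's postings by cluster, prepending a (cluster, length)
    header per group. term_postings: {term: [(doc, weight), ...]}."""
    index = {}
    for term, postings in term_postings.items():
        groups = {}
        for doc, w in postings:
            groups.setdefault(cluster_of[doc], []).append((doc, w))
        index[term] = [(c, len(p), p) for c, p in sorted(groups.items())]
    return index

def evaluate(query_terms, index, best_clusters):
    """Score only postings whose cluster was selected for this query."""
    scores = {}
    for term in query_terms:
        for cluster, _, postings in index.get(term, []):
            if cluster not in best_clusters:
                continue                   # one header test skips the group
            for doc, w in postings:
                scores[doc] = scores.get(doc, 0.0) + w
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```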

Finally, we exploit query views that are obtained from search engine query logs to tailor more effective static pruning techniques. This is also related to the indexing task involved in a search engine. In particular, the query view approach is incorporated into a set of existing pruning strategies, as well as into some new variants proposed by us. We show that query-view-based strategies significantly outperform the existing approaches in terms of query output quality, for both disjunctive and conjunctive evaluation of queries.

Keywords: Search engine, focused crawling, cluster-based retrieval, static index pruning.

BU-CE-0903: PDF

Storage and Access Schemes for Aggregate Query Processing on Road Networks

Author: Engin Demir
Supervisor: Cevdet Aykanat

Abstract: Road networks are a well-known example of spatial networks, forming an integral part of many GIS applications such as location-based services, intelligent traveling systems, vehicle telematics, and location-aware advertising. In practice, road network data is too large to fit into volatile memory, and a considerable portion of the data must be stored on secondary storage, since several spatial and non-spatial attributes as well as points-of-interest data are associated with junctions and links. In network query processing, the spatial coherency that exists in accessing data leads to a temporal coherency; that is, connected junctions are accessed almost concurrently. Taking this fact into consideration, it is reasonable to place the data associated with connected junctions in the same disk pages so that the data can be fetched into memory with fewer disk accesses. We show that the state-of-the-art clustering graph model for allocating data to disk pages is not able to correctly capture the disk access cost of successor retrieval operations. We propose clustering models based on hypergraph partitioning, which correctly encapsulate the spatial and temporal coherency in query processing via the utilization of query logs, in order to minimize the number of disk accesses during aggregate query processing. We introduce the link-based storage scheme for road networks as an alternative to the widely used junction-based storage scheme. We present Get-Undone-Successors (GUS) as a new successor retrieval operation for network queries, where the candidate successors to be retrieved are pruned while processing a query. We investigate two different instances of the GUS operation: the Get-Unprocessed-Successors operation, which typically arises in Dijkstra's single-source shortest path algorithm, and the Get-Unvisited-Successors operation, which typically arises in the incremental network expansion framework. The simulation results show that our storage and access schemes utilizing the proposed clustering hypergraph models are quite effective in reducing the number of disk accesses during aggregate query processing.
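
In-memory pseudocode makes the GUS idea concrete (a sketch, abstracting away the disk layer that the thesis actually optimizes):

```python
import heapq

def get_unprocessed_successors(adj, u, settled):
    """The GUS-style pruning predicate: return only links to junctions not
    yet settled; in the disk-based setting, this is where fewer successor
    records would actually be fetched from disk pages."""
    return [(v, w) for v, w in adj.get(u, []) if v not in settled]

def dijkstra_gus(adj, source):
    """Dijkstra's algorithm with successor retrieval delegated to a
    Get-Unprocessed-Successors operation. adj: {junction: [(succ, weight)]}."""
    dist, settled = {source: 0.0}, set()
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)
        for v, w in get_unprocessed_successors(adj, u, settled):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```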

Keywords: Spatial network databases, road networks, storage management, query processing, clustering, hypergraph partitioning.

BU-CE-0902: PDF

Modeling Interestingness of Streaming Association Rules as a Benefit Maximizing Classification Problem

Author: Tolga Aydın
Supervisor: H. Altay Güvenir

Abstract: In a typical application of association rule learning from market basket data, a set of transactions for a fixed period of time is used as input to rule learning algorithms. For example, the well-known Apriori algorithm can be applied to learn a set of association rules from such a transaction set. However, learning association rules from a set of transactions is not a one-time-only process. For example, a market manager may perform the association rule learning process once every month over the set of transactions collected through the previous month. For this reason, we consider the problem where transaction sets are input to the system as a stream of packages. The sets of transactions may come in varying sizes and at varying periods. Once a set of transactions arrives, the association rule learning algorithm is run on the latest set of transactions, resulting in a new set of association rules. Therefore, the set of association rules learned will accumulate and increase in number over time, making the mining of interesting ones out of this enlarging set of association rules impractical for human experts. We refer to this sequence of rules as an "association rule set stream" or "streaming association rules", and the main motivation behind this research is to develop a technique to overcome the interesting rule selection problem. A successful association rule mining system should select and present only the interesting rules to the domain experts. However, the definition of interestingness of association rules in a given domain usually differs from one expert to another, and also over time for a given expert. In this thesis, we propose a post-processing method to learn a subjective model for the interestingness concept description of streaming association rules. The uniqueness of the proposed method is its ability to formulate the interestingness of association rules as a benefit-maximizing classification problem and to obtain a different interestingness model for each user. In this new classification scheme, the determining features are selected objective interestingness factors, including the rule's content itself, related to the interestingness of the association rules, and the target feature is the interestingness label of those rules. The proposed method works incrementally and employs user interactivity at a certain level. It is evaluated on a real market dataset, and the results show that the model can successfully select the interesting rules.

Keywords: Interestingness learning, incremental learning, classification learning, association rules, data mining.

BU-CE-0802: PDF

Understanding Human Motion: Recognition and Retrieval of Human Activities

Author: Nazlı İkizler
Supervisor: Pınar Duygulu-Şahin

Abstract: Within ever-growing video archives lies a vast amount of interesting information regarding human actions and activities. In this thesis, we approach the problem of extracting this information and understanding human motion from a computer vision perspective. We propose solutions for two distinct scenarios, ordered from simple to complex. In the first scenario, we deal with the problem of single action recognition in relatively simple settings. We believe that human pose encapsulates many useful clues for recognizing the ongoing action, and that this shape information for 2D single actions can be represented in very compact forms, before going into the details of complex modeling. We show that high-accuracy single human action recognition is possible (1) using spatially oriented histograms of rectangular regions when the silhouette is extractable, and (2) using the distribution of boundary-fitted lines when silhouette information is missing. We demonstrate that, for videos, we can further improve recognition accuracy by adding local and global motion information. We also show that, within a discriminative framework, shape information is quite useful even for human action recognition in still images.

Our second scenario involves the recognition and retrieval of complex human activities in more complicated settings, such as the presence of changing backgrounds and viewpoints. We describe a method of representing human activities in 3D that allows a collection of motions to be queried without examples, using a simple and effective query language. Our approach is based on units of activity at segments of the body, which can be composed across time and across the body to produce complex queries. The presence of search units is inferred automatically by tracking the body, lifting the tracks to 3D, and comparing them to models trained using motion capture data. Our models of short-time-scale limb behaviour are built using a labelled motion capture set. Our query language makes use of finite state automata and requires simple text encoding and no visual examples. We show results for a large range of queries applied to a collection of complex motions and activities. We compare with discriminative methods applied to tracker data; our method offers significantly improved performance. We show experimental evidence that our method is robust to view direction and is unaffected by some important changes of clothing.

Keywords: Human motion, action recognition, activity recognition, activity retrieval, image and video processing, classification.

BU-CE-0801: PDF

Counteracting Free Riding in Pure Peer-to-Peer Networks

Author: K. Murat Karakaya
Supervisors: Özgür Ulusoy and İbrahim Körpeoğlu

Abstract: The peer-to-peer (P2P) network paradigm has attracted a significant amount of interest as a popular and successful alternative to the traditional client-server model for resource sharing and content distribution. However, researchers have observed high degrees of free riding in P2P networks, which poses a serious threat to the effectiveness and efficient operation of these networks, and hence to their future. Therefore, eliminating or reducing the impact of free riding on P2P networks has become an important issue to investigate, and a considerable amount of research has been conducted on it.

In this thesis, we propose two novel solutions to reduce the adverse effects of free riding on P2P networks and to motivate peers to contribute to P2P networks. These solutions are also intended to lead to performance gains for contributing peers and to penalize free riders. As the first solution, we propose a distributed and localized scheme, called Detect and Punish Method (DPM), which depends on detection and punishment of free riders. Our second solution to the free riding problem is a connection-time protocol, called P2P Connection Management Protocol (PCMP), which is based on controlling and managing link establishments among peers according to their contributions.

To evaluate the proposed solutions and compare them with other alternatives, we developed a new P2P network simulator and conducted extensive simulation experiments. Our simulation results show that employing our solutions in a P2P network considerably reduces the adverse effects of free riding and improves the overall performance of the network. Furthermore, we observed that P2P networks utilizing the proposed solutions become more robust and scalable.

Keywords: Free riding, Peer-to-Peer networks, distributed computing, performance evaluation.

BU-CE-0706: PDF

Visualization of Urban Environments

Author: Türker Yılmaz
Supervisor: Uğur Güdükbay

Abstract: Modeling and visualization of large geometric environments is a popular research area in computer graphics. In this dissertation, a framework for modeling and stereoscopic visualization of large and complex urban environments is presented. Occlusion culling and view-frustum culling are performed to eliminate most of the geometry that does not contribute to the user's final view. For the occlusion culling process, the shrinking method is employed, but it is performed using a novel Minkowski-difference-based approach. In order to represent partial visibility, a novel building representation method, called the slice-wise representation, is developed. This method is able to represent the preprocessed partial visibility with huge reductions in the storage requirement. The resulting visibility list is rendered using a graphics-processing-unit-based algorithm, which fits perfectly into the proposed slice-wise representation. The stereoscopic visualization depends on the eye positions calculated during the walkthrough, and the visibility lists for both eyes are determined using the preprocessed occlusion information. The view-frustum culling operation is performed once for both eyes instead of twice. The proposed algorithms were implemented on personal computers. Performance experiments show that the proposed occlusion culling method and the usage of the slice-wise representation increase the frame rate performance by 81%, and the graphics-processing-unit-based display algorithm increases it by an additional 315% and decreases the storage requirement by 97%, as compared to occlusion culling with building-level granularity that does not use the graphics hardware. We show that smooth, real-time visualization of large and complex urban environments can be achieved by using the proposed framework.
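
View-frustum culling itself is standard; a minimal AABB-against-frustum sketch (illustrative, using the usual "positive vertex" plane test rather than anything specific to this dissertation) is:

```python
import numpy as np

def outside_plane(bmin, bmax, plane):
    """plane = (a, b, c, d) with the inside half-space being ax+by+cz+d >= 0;
    testing the AABB corner farthest along the normal suffices."""
    plane = np.asarray(plane, dtype=float)
    n = plane[:3]
    p = np.where(n >= 0.0, bmax, bmin)   # the box's "positive vertex"
    return float(np.dot(n, p) + plane[3]) < 0.0

def frustum_cull(boxes, planes):
    """Keep indices of buildings whose AABB is not entirely outside any of
    the six inward-facing frustum planes. For stereo walkthroughs, a single
    frustum enclosing both eye positions lets this pass run once, not twice."""
    return [i for i, (bmin, bmax) in enumerate(boxes)
            if not any(outside_plane(np.asarray(bmin), np.asarray(bmax), pl)
                       for pl in planes)]
```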

Keywords: Stereoscopic visualization, slice-wise representation, space subdivision, octree, occlusion culling, occluder shrinking, Minkowski difference, from-region visibility, urban visualization, visibility processing.

BU-CE-0605: PDF

Models and Algorithms for Parallel Text Retrieval

Author: Berkant Barla Cambazoğlu
Supervisor: Cevdet Aykanat

Abstract: In the last decade, search engines have become an integral part of our lives. The current state of the art in search engine technology relies on parallel text retrieval. Basically, a parallel text retrieval system is composed of three components: a crawler, an indexer, and a query processor. The crawler component aims to locate, fetch, and store Web pages in a local document repository. The indexer component converts the stored, unstructured text into a queryable form, most often an inverted index. Finally, the query processing component performs the search over the indexed content. In this thesis, we present models and algorithms for efficient Web crawling and query processing. First, for parallel Web crawling, we propose a hybrid model that aims to minimize the communication overhead among the processors while balancing the numbers of page download requests and storage loads of processors. Second, we propose models for document- and term-based inverted index partitioning. In the document-based partitioning model, the number of disk accesses incurred during query processing is minimized while the posting storage is balanced. In the term-based partitioning model, the total amount of communication is minimized while, again, the posting storage is balanced. Finally, we develop and evaluate a large number of algorithms for query processing in ranking-based text retrieval systems. We test the proposed algorithms over our experimental parallel text retrieval system, Skynet, currently running on a 48-node PC cluster. In the thesis, we also discuss the design and implementation details of another, somewhat untraditional, grid-enabled search engine, SE4SEE. Among our practical work, we present the Harbinger text classification system, used in SE4SEE for Web page classification, and the K-PaToH hypergraph partitioning toolkit, to be used in the proposed models.
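
As a hedged illustration of the balance constraint in these partitioning models (only the constraint - the actual models are hypergraph-partitioning formulations that also minimize disk accesses or communication), a greedy longest-processing-time assignment balances posting storage across k index servers:

```python
import heapq

def balance_postings(doc_sizes, k):
    """Greedy longest-processing-time assignment of documents to k index
    servers, balancing posting storage -- a baseline stand-in for the balance
    constraint of the document-based partitioning model (the hypergraph model
    additionally minimizes disk accesses, which this sketch ignores).

    doc_sizes: {doc_id: posting storage size}."""
    heap = [(0, p) for p in range(k)]      # (current load, partition)
    heapq.heapify(heap)
    assignment = {}
    for doc, size in sorted(doc_sizes.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)      # least-loaded server gets next doc
        assignment[doc] = p
        heapq.heappush(heap, (load + size, p))
    return assignment
```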

Keywords: Search engine, parallel text retrieval, Web crawling, inverted index partitioning, query processing, text classification, hypergraph partitioning.

BU-CE-0508: PDF

An Ontology for Computer-Aided Modeling of Cellular Processes

Author: Emek Demir
Supervisor: Uğur Doğrusöz

Abstract: Cellular processes form the hardware layer of living organisms. Malfunctions in cellular processes are responsible for most of the currently incurable diseases. Not surprisingly, knowledge about cellular processes is growing at an enormous rate. However, today's molecular biology suffers from the lack of a formal representation system for cellular processes. Most of this knowledge is locked in the literature, where it is not accessible to computational analysis and modeling. Given the complexity of the system we are attacking, the need for a representation system and modeling tools for cellular processes is clear.

In this dissertation, we describe an ontology for modeling cellular processes. Our ontology possesses several unique features, including the ability to represent abstractions, multiple levels of detail, cellular compartments, and molecular states. Furthermore, it was designed to meet several user and system requirements, including ease of integration, querying, analysis, and visualization.

Based on this ontology, we also implemented a set of software tools within the PATIKA project. The primary use cases of PATIKA are integration, querying, and visualization, and we have obtained satisfactory results demonstrating the feasibility of our ontology.

Compared with existing alternative methods of representing and querying information about cellular processes, PATIKA provides several advantages, including a regular representation system, powerful querying options, and advanced visualization. Moreover, PATIKA models can be analyzed by computational methods such as flux analysis or pathway activity inference. Although it has a steeper learning curve than existing ad hoc representation systems, we believe that tools like PATIKA will be essential for molecular biology research in the future.

Keywords: Bioinformatics, ontology, pathway.

BU-CE-0509: PDF

Parallel Sparse Matrix-Vector Multiplies and Iterative Solvers

Author: Bora Uçar
Supervisor: Cevdet Aykanat

Abstract: Sparse matrix-vector multiply (SpMxV) operations are in the kernel of many scientific computing applications. Therefore, efficient parallelization of SpMxV operations is of prime importance to the scientific computing community. Previous works on parallelizing SpMxV operations consider maintaining the load balance among processors and minimizing the total message volume. We show that the total message latency (start-up time) may be more important than the total message volume. We also stress that the maximum message volume and latency handled by a single processor are important communication cost metrics that should be minimized. We propose hypergraph models and hypergraph partitioning methods to minimize these four communication cost metrics in one-dimensional and two-dimensional partitionings of sparse matrices. Iterative methods used for solving linear systems appear to be the most common context in which SpMxV operations arise. Usually, these iterative methods apply a technique called preconditioning. Approximate inverse preconditioning - which can be applied to a large class of unsymmetric and symmetric matrices - replaces a single SpMxV operation by a series of SpMxV operations. That is, a single SpMxV operation is only a piece of a larger computation in iterative methods that use approximate inverse preconditioning. In these methods, there are interactions in the form of dependencies between successive SpMxV operations. These interactions necessitate partitioning the matrices simultaneously in order to parallelize a full step of the subject class of iterative methods efficiently. We show that this simultaneous partitioning requirement gives rise to various matrix partitioning models depending on the iterative method used, and we list the partitioning models for a number of widely used iterative methods. We propose operations to build a composite hypergraph by combining the previously proposed hypergraph models, and show that partitioning the composite hypergraph models addresses the simultaneous matrix partitioning problem. To demonstrate how the proposed partitioning methods - both the one that addresses multiple communication cost metrics and the one that addresses the simultaneous partitioning problem - help in practice, we implemented a library and investigated the performance of the partitioning methods. These practical investigations revealed a problem that we call the message ordering problem, which asks how to organize the send operations to minimize the completion time of a certain class of parallel programs. We show how to solve the message ordering problem optimally under reasonable assumptions.
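
The four communication cost metrics named above can be computed exactly for a given 1D rowwise partition; the sketch below is illustrative (part[i] is assumed to give the owner of both row i and vector entry x[i]) and reports total volume, total message count, and the per-processor send maxima:

```python
def spmxv_comm_metrics(nonzeros, part, k):
    """For a 1D rowwise partition of y = A x, compute the four cost metrics:
    total send volume, total message count (latency), and the per-processor
    maxima of send volume and message count.

    nonzeros: iterable of (i, j) coordinates of the nonzeros of A."""
    sends = {}                              # (src, dst) -> set of x indices
    for i, j in nonzeros:
        src, dst = part[j], part[i]         # owner of x[j] sends to owner of row i
        if src != dst:
            sends.setdefault((src, dst), set()).add(j)
    send_vol, send_msg = [0] * k, [0] * k
    for (src, _), xs in sends.items():
        send_vol[src] += len(xs)
        send_msg[src] += 1
    return sum(send_vol), len(sends), max(send_vol), max(send_msg)
```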

Keywords: Sparse matrices, parallel matrix-vector multiplication, iterative methods, preconditioning, approximate inverse preconditioner, hypergraph partitioning.

BU-CE-0410: PDF

Metadata-Based and Personalized Web Querying

Author: Selma Ayşe Özel
Supervisor: Özgür Ulusoy

Abstract: The advent of the Web has raised new searching and querying problems. Keyword-matching-based querying techniques, widely used by search engines, return thousands of Web documents for a single query, and most of these documents are generally unrelated to the users' information needs. Towards the goal of better meeting the information search needs of Web users, a recent promising approach is to index the Web by using metadata and annotations.

In this thesis, we model and query Web-based information resources using metadata for improved Web searching capabilities. Employing metadata for querying the Web increases the precision of the query outputs by returning semantically more meaningful results. Our Web data model, named the "Web information space model", consists of Web-based information resources (HTML/XML documents on the Web), expert advice repositories (domain-expert-specified metadata for information resources), and personalized information about users (captured as user profiles that indicate users' preferences about experts as well as users' knowledge about topics). Expert advice is specified using topics and relationships among topics (i.e., metalinks), along the lines of the recently proposed topic maps standard. Topics and metalinks constitute metadata that describe the contents of the underlying Web information resources. Experts assign scores to topics, metalinks, and information resources to represent their "importance". User profiles store users' preferences and navigational history information about the information resources that users visit. User preferences, knowledge levels on topics, and history information are used for personalizing the Web search and improving the precision of the results returned to the user.

We store expert advice and user profiles in an object-relational database management system, and we extend SQL for efficient querying of Web-based information resources through the Web information space model. The SQL extensions include clauses for propagating input importance scores to output tuples, a clause that specifies the query stopping condition, and new operators (i.e., text-similarity-based selection, text-similarity-based join, and topic closure). Importance score propagation and the query stopping condition allow the ranking of query outputs and limit the output size. The text-similarity-based operators and the topic closure operator support sophisticated querying facilities. We develop a new algebra called the Sideway Value generating Algebra (SVA) to process these SQL extensions.
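
One way to picture score propagation with a stopping condition is a best-first expansion over metalinks (a toy rendering of the topic closure idea; the multiplicative propagation and the threshold are illustrative assumptions, not the SVA definition):

```python
import heapq

def topic_closure(seeds, metalinks, stop_threshold):
    """Best-first expansion over metalinks with multiplicative score propagation.

    seeds: {topic: importance in (0, 1]}.
    metalinks: {topic: [(related_topic, weight in (0, 1]), ...]}.
    Expansion halts once propagated scores fall below stop_threshold, which
    plays the role of the query stopping condition and bounds the output."""
    best = dict(seeds)
    heap = [(-s, t) for t, s in seeds.items()]
    heapq.heapify(heap)
    while heap:
        neg, t = heapq.heappop(heap)
        score = -neg
        if score < best.get(t, 0.0):     # stale heap entry
            continue
        for t2, w in metalinks.get(t, []):
            s2 = score * w
            if s2 >= stop_threshold and s2 > best.get(t2, 0.0):
                best[t2] = s2
                heapq.heappush(heap, (-s2, t2))
    return best                          # topics with their propagated scores
```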

We also propose evaluation algorithms for the text similarity based SVA directional join operator, and report experimental results on the performance of the operator. We demonstrate experimentally the effectiveness of metadata-based personalized Web search through SQL extensions over the Web information space model against keyword matching based Web search techniques.

Keywords: Metadata based Web querying, topic maps, user profile, personalized Web querying, Sideway Value generating Algebra, score management, text similarity based join.

BU-CE-0211: PDF

Data Modeling and Querying for Video Databases

Author: Mehmet Emin Dönderler
Supervisors: Özgür Ulusoy and Uğur Güdükbay

Abstract: With the advances in information technology, the amount of multimedia data captured, produced, and stored is increasing rapidly. As a consequence, multimedia content is widely used for many applications in today's world, and hence the need for organizing this data and accessing it from repositories with vast amounts of information has been a driving stimulus, both commercially and academically. In compliance with this inevitable trend, first image and, later, especially video database management systems have attracted a great deal of attention, since traditional database systems are not suitable for multimedia data.

In this thesis, a novel architecture for a video database system is proposed. The architecture is original in that it provides full support for spatio-temporal queries that contain any combination of spatial, temporal, object-appearance, external-predicate, trajectory-projection and similarity-based object-trajectory conditions by a rule-based system built on a knowledge-base, while utilizing an object-relational database to respond to semantic (keyword, event/activity and category-based) and low-level (color, shape and texture) video queries. Research results obtained from this thesis work have been realized by a prototype video database management system, which we call BilVideo. Its tools, Fact-Extractor and Video-Annotator, its Web-based visual query interface and its SQL-like textual query language are presented. Moreover, the query processor of BilVideo and our spatio-temporal query processing strategy are also discussed.

Keywords: Video databases, multimedia databases, information systems, video data modeling, content-based retrieval, spatio-temporal relations, spatio-temporal query processing, video query languages.

BU-CE-0113: PDF

Active and Mobile Data Management Through Event History Mining

Author: Yücel Saygın
Supervisor: Özgür Ulusoy

Abstract: An event history is a collection of events that have occurred in an event-based system over a period of time. There can be various types of events, among which are temperature changes and power demands in a power management system, client requests for data items in a broadcast system, the price increase of a stock in the stock market, and so on. A lot of interesting information can be extracted from an event history via data mining techniques. Our purpose in this thesis is to propose methods for extracting this useful information, in the form of event sequences and event associations, from a single event history or two correlated event histories. We also aim to show how the results of the mining process can be used for active and mobile data management. The results of the mining process demonstrate the relationships among the events, which are generally captured as associations or sequences. The relationships among the events are shown to be a useful tool for enhancing an event-based system via event organization, predictive event detection, and proactive rule execution.

We consider the mining of both a single event history and two correlated event histories. We first propose a method for mining binary relationships from a single event history. The binary relationships among events are used to organize the events into related groups of events. This organization is important because the number of events in an event-driven system may become very high and unmanageable. The groups of related events and the relationships among the events are exploited for predictive event detection and proactive rule execution in active database systems. We also consider the mining of two correlated event histories that are disjoint, where the events in one history are related to the events in the other. We describe how we can efficiently extract associations among events spanning different event histories, which we call cross associations.

We have chosen data broadcast in mobile computing environments as a case study for active data management. One of the important facts in mobile computing environments with a wireless communication medium is that the server-to-client (downlink) communication bandwidth is much higher than the client-to-server (uplink) communication bandwidth. This asymmetry makes the dissemination of data to client machines a desirable approach. However, dissemination of data by broadcasting may induce high access latency when the number of broadcast data items is large. Our purpose is to show how active data management can be used to improve mobile data management through broadcast data organization and prefetching from the broadcast medium. In order to achieve this, the client requests for data items at the server are considered events, and the chronological sequence of items that have been requested by clients is considered an event history, which we call a broadcast history. The first step in our work is to analyze the broadcast history to discover sequential patterns describing the client access behavior. The sequential patterns are used to organize the data items on the broadcast disk in such a way that items requested subsequently are placed close to each other. We then utilize the predictive event detection techniques to improve the cache hit ratio and thereby decrease the access latency: prediction of future client access behavior enables clients to prefetch data from the broadcast disk based on the rules extracted from the broadcast history.
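
The mining-then-prefetching loop can be illustrated with the simplest possible sequential pattern - item-to-next-item counts over the broadcast history (a toy sketch; the thesis mines richer sequential patterns):

```python
from collections import Counter, defaultdict

def mine_next_item(history, min_support=2):
    """Count item -> next-item occurrences in the broadcast request history and
    keep, for each item, its most frequent successor above a support threshold."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        counts[prev][nxt] += 1
    return {prev: c.most_common(1)[0][0]
            for prev, c in counts.items()
            if c.most_common(1)[0][1] >= min_support}

rules = mine_next_item(["a", "b", "a", "b", "c", "a", "b"])
assert rules["a"] == "b"   # clients that just fetched "a" prefetch "b"
```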

BU-CEIS-9915: PDF

Hypergraph Models for Sparse Matrix Partitioning and Reordering

Author: Ümit V. Çatalyürek
Supervisor: Cevdet Aykanat

Abstract: Graphs have been widely used to represent sparse matrices for various scientific applications, including one-dimensional (1D) decomposition of sparse matrices for parallel sparse matrix-vector multiplication (SpMxV) and sparse matrix reordering for low-fill factorization. The standard graph-partitioning-based 1D decomposition of sparse matrices does not reflect the actual communication volume requirement for parallel SpMxV. We propose two computational hypergraph models which avoid this crucial deficiency of the graph model in 1D decomposition. The proposed models reduce the 1D decomposition problem to the well-known hypergraph partitioning problem. In the literature, there is a lack of 2D decomposition heuristics that directly minimize the communication requirements of parallel SpMxV computations. Three novel hypergraph models are introduced for 2D decomposition of sparse matrices to minimize the communication volume requirement. The first hypergraph model is proposed for fine-grain 2D decomposition of sparse matrices for parallel SpMxV. The second hypergraph model for 2D decomposition is proposed to produce jagged-like decompositions of the sparse matrix. Checkerboard-decomposition-based parallel matrix-vector multiplication algorithms are widely encountered in the literature; however, only the load balancing problem is addressed in those works. Here, as the third model for 2D decomposition, we propose a new hypergraph model which aims at minimizing the communication volume while maintaining the load balance among the processors for checkerboard decomposition. The proposed model reduces the decomposition problem to the multi-constraint hypergraph partitioning problem. The notion of multi-constraint partitioning has recently become popular in graph partitioning; we apply multi-constraint partitioning to the hypergraph partitioning problem to solve checkerboard partitioning. Graph partitioning by vertex separator (GPVS) is widely used for nested-dissection-based low-fill ordering of sparse matrices for the direct solution of linear systems. In this work, we also show that the GPVS problem can be formulated as hypergraph partitioning, and we exploit this finding to develop a novel hypergraph-partitioning-based nested dissection ordering. The recently proposed, successful multilevel framework is exploited to develop a multilevel hypergraph partitioning tool, PaToH, for the experimental verification of our proposed hypergraph models. Experimental results on a wide range of realistic sparse test matrices confirm the validity of the proposed hypergraph models. In terms of communication volume, the proposed hypergraph models produce 30% and 59% better decompositions than the graph model in 1D and 2D decompositions of sparse matrices for parallel SpMxV computations, respectively. The proposed hypergraph-partitioning-based nested dissection produces 25% to 45% better orderings than the widely used multiple minimum degree ordering on various test matrices arising from different applications.
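
The column-net model's key property - measuring communication exactly - fits in a few lines (an illustrative computation of the connectivity-minus-one cost for a 1D rowwise partition, not the partitioner itself):

```python
from collections import defaultdict

def connectivity_minus_one(nonzeros, part):
    """Exact communication volume of 1D rowwise SpMxV in the column-net model:
    column j is a net connecting the rows with a nonzero in j, and it costs
    (number of distinct parts it touches) - 1 -- the objective that hypergraph
    partitioning minimizes and that the standard graph model only approximates.

    nonzeros: iterable of (i, j) coordinates; part: {row: part id}."""
    parts_of_net = defaultdict(set)
    for i, j in nonzeros:
        parts_of_net[j].add(part[i])
    return sum(len(p) - 1 for p in parts_of_net.values())
```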

Keywords: Sparse matrices, parallel matrix-vector multiplication, parallel processing, matrix decomposition, computational graph model, graph partitioning, computational hypergraph model, hypergraph partitioning, fill-reducing ordering, nested dissection.

BU-CEIS-9812: PDF

An Information-Based Approach to Punctuation

Author: Bilge Say
Supervisor: Varol Akman

Abstract: Punctuation marks have special importance in bringing out the meaning of a text. Geoffrey Nunberg's 1990 monograph bridged the gap between descriptive treatments of punctuation and prescriptive accounts, by spelling out the features of a text-grammar for the orthographic sentence. His research inspired most of the recent work concentrating on punctuation marks in Natural Language Processing (NLP). Several grammars incorporating punctuation were then shown to reduce failures and ambiguities in parsing. Nunberg's approach to punctuation (and other formatting devices) was partially incorporated into natural language generation systems. However, little has been done concerning how punctuation marks bring semantic and discourse cues to the text and whether these can be exploited computationally.

The aim of this thesis is to analyse the semantic and discourse aspects of punctuation marks within the framework of Hans Kamp and Uwe Reyle's Discourse Representation Theory (DRT) (and its extension by Nicholas Asher, Segmented Discourse Representation Theory (SDRT)), drawing implications for NLP systems. The method used is the extraction of patterns for four common punctuation marks (dashes, semicolons, colons, and parentheses) from corpora, followed by formal modeling and a modest computational prototype. Our observations and results have revealed interesting occurrences of linguistic phenomena, such as anaphora resolution and presupposition, in conjunction with punctuation marks. Within the framework of SDRT, such occurrences are then tied to the overall discourse structure. The proposed model can be taken as a template by NLP software developers for making use of punctuation marks more effectively. Overall, the thesis describes the contribution of punctuation at the orthographic sentence level to the information passed on to the reader of a text.

BU-CEIS-9604: PDF

Partial Query Evaluation for Vertically Partitioned Signature Files in Very Large Unformatted Databases

Author: Seyit Koçberber
Supervisor: Fazlı Can

Abstract: The signature file approach is a well-known indexing technique. The Bit-Sliced Signature File (BSSF) method provides an efficient retrieval environment by accessing only the on-bits of query signatures; however, its response time increases with an increasing number of query terms. For BSSF we define a stopping condition that avoids processing all on-bits of query signatures through partial evaluation. The aim of the stopping condition is to reduce the expected number of false drops to a level that also provides the lowest response time within the framework of BSSF. We propose the Partially evaluated Bit-Sliced Signature File (P-BSSF) method, which employs this partial evaluation strategy and minimizes the response time in a multi-term query environment by considering the submission probabilities of queries with different numbers of terms. Experiments show that P-BSSF provides an 85% improvement in response time over BSSF, depending on the space overhead and the number of query terms.

To provide better optimization of the signature file parameters in multi-term query environments, we propose the Multi-Fragmented Signature File (MFSF) method as an extension of P-BSSF. In MFSF, a signature file is divided into variable-sized vertical fragments with different on-bit densities to optimize the response time using a similar query evaluation methodology. In query evaluation, the query signature on-bits of the lower on-bit density fragments are used first. As the number of query terms increases, the number of query signature on-bits in the lower on-bit density fragments increases and the query stopping condition is reached in fewer bit-slice evaluation steps. Therefore, in MFSF, the response time decreases for an increasing number of query terms. The analysis shows that, with no space overhead, MFSF outperforms the P-BSSF and generalized frame-sliced signature file organizations.

Due to the hashing and superimposition operations used in obtaining signatures, the signature of an irrelevant record may match the query signature, i.e., it is possible to have false drops. In signature file processing, accurate estimation of the number of false drops is essential to obtain system parameters that will yield a desirable response time. We propose a more accurate false drop estimation method for databases with varying record lengths, instead of using an average number of distinct terms per record. In this method, the records of a database are conceptually partitioned according to the number of distinct terms they contain, and the number of false drops of each group is estimated separately. Experiments with real data show that, depending on the space overhead, the proposed method obtains up to 33%, 25%, and 20% response time improvements for the sequential, generalized frame-sliced, and MFSF methods, respectively.

For very large databases, even one bit slice of MFSF may occupy several disk blocks. We propose the Compressed Multi-Fragmented Signature File (C-MFSF) method, which stores the bit slices of MFSF in a compact form that provides a better response time. To minimize the number of disk accesses, the signature size and the disk block size can be adjusted such that most bit slices fit into a single disk block after compression. In such environments, C-MFSF evaluates queries with more than two terms with only one disk access per query term, rather than the two disk accesses of the inverted file method (one for the pointer to the query term's posting list and one for the list itself). Our projection based on real data shows that for a database of one million records, C-MFSF provides a response time of 0.85 seconds.
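
The bit-sliced layout and partial evaluation are easy to sketch with Python integers as bit vectors (illustrative only; slice storage, compression, and the false-drop-based stopping condition are far more refined in the thesis):

```python
def build_slices(signatures, f):
    """Transpose f-bit record signatures into f bit slices, one Python int per
    slice: bit r of slice b is set iff record r's signature has bit b set."""
    slices = [0] * f
    for r, sig in enumerate(signatures):
        for b in range(f):
            if (sig >> b) & 1:
                slices[b] |= 1 << r
    return slices

def query(slices, query_sig, n_records, max_slices):
    """Partial evaluation: AND together only the first max_slices on-bit slices
    of the query signature (the stopping condition), then return the surviving
    candidates -- which must still be checked against the records themselves,
    since the unprocessed slices leave some false drops."""
    candidates = (1 << n_records) - 1
    used = 0
    for b in range(len(slices)):
        if (query_sig >> b) & 1:
            candidates &= slices[b]
            used += 1
            if used >= max_slices:       # expected false drops low enough: stop
                break
    return [r for r in range(n_records) if (candidates >> r) & 1]
```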