Ozcan Ozturk

Research Highlights


Research Assistant Positions Available: Looking for bright, creative, self-motivated, and hardworking students to work on our funded projects. If you are interested in one of the following research topics, you can contact Dr. Ozcan Ozturk to discuss research opportunities.


Parallel Systems: Cloud Computing, GPU Computing, Manycore Accelerators, Parallel Programming

Processor Architecture: Multicore Processors, Computer Organization, Reliability-Aware 3D Chip Multiprocessor Design, Heterogeneous Chip Multiprocessors, Network On Chip Architectures

Compilers: Automatic Parallelization, Dataflow Analysis, Optimizing Compilers, Memory Optimization

Cloud Computing

Manycore accelerators are being deployed in the Cloud to improve processing capabilities and to provide heterogeneity as Cloud computing systems become increasingly complex and process ever-growing datasets. In such systems, application scheduling and data mapping need to be enhanced to maximize the utilization of the underlying architecture. To achieve this, Cloud management schemes require a fresh look when the underlying components are heterogeneous in many different ways. Moreover, applications differ in how they perform on various specialized accelerators. Our goal is to design a runtime management system for Cloud systems with manycore accelerators.
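
As a minimal sketch of the scheduling side of this problem, the Python fragment below greedily maps each application to whichever node (CPU-only or accelerator-equipped) finishes it earliest, given per-node runtime estimates. The Node and App classes, the cost figures, and the greedy policy are illustrative assumptions, not our actual runtime system.

```python
# Minimal sketch of a runtime mapper for a heterogeneous cloud. All names
# (Node, App, the runtime estimates) are hypothetical; the point is only to
# illustrate picking, per application, the node with the earliest finish time.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    kind: str                 # e.g. "cpu" or "gpu"
    ready_time: float = 0.0   # when the node next becomes free

@dataclass
class App:
    name: str
    runtime: dict             # estimated runtime per node kind, e.g. {"cpu": 40.0, "gpu": 12.0}

def schedule(apps, nodes):
    """Greedily map each application to the node that finishes it earliest."""
    plan = []
    # Longest-job-first ordering, then earliest-finish-time placement.
    for app in sorted(apps, key=lambda a: min(a.runtime.values()), reverse=True):
        best = min(nodes, key=lambda n: n.ready_time + app.runtime[n.kind])
        start = best.ready_time
        best.ready_time = start + app.runtime[best.kind]
        plan.append((app.name, best.name, start))
    return plan

if __name__ == "__main__":
    nodes = [Node("n0", "cpu"), Node("n1", "gpu")]
    apps = [App("a", {"cpu": 40.0, "gpu": 12.0}), App("b", {"cpu": 8.0, "gpu": 10.0})]
    print(schedule(apps, nodes))
```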

GPU Computing - Manycore Accelerators

Manycore accelerators are being deployed in many systems to improve processing capabilities. In such systems, application mapping needs to be enhanced to maximize the utilization of the underlying architecture. Mapping becomes especially critical in GPUs for multi-kernel applications, since kernels may exhibit different characteristics. While some kernels run faster on the GPU, others are better kept on the CPU due to the high data transfer overhead. Thus, heterogeneous execution may yield better performance than executing the application only on the CPU or only on the GPU. We would like to design systems with smart kernel mapping.
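
The following sketch illustrates the kernel-level placement decision described above: each kernel goes to the device where its estimated compute time plus host-device transfer time is lowest. The Kernel fields, the bandwidth constant, and the cost model are hypothetical; they only demonstrate why a kernel with heavy data movement may be better left on the CPU.

```python
# Illustrative sketch only: per-kernel CPU/GPU placement based on estimated
# compute time plus host-device transfer cost. The fields and constants are
# assumptions, not measurements from a real system.
from dataclasses import dataclass

@dataclass
class Kernel:
    name: str
    cpu_time: float        # estimated execution time on the CPU (ms)
    gpu_time: float        # estimated execution time on the GPU (ms)
    bytes_moved: int       # data that must cross the host-device bus if run on the GPU

PCIE_BANDWIDTH = 12e9      # assumed effective bandwidth, bytes per second

def map_kernels(kernels):
    """Place each kernel where its total cost (compute + transfer) is lowest."""
    mapping = {}
    for k in kernels:
        transfer_ms = k.bytes_moved / PCIE_BANDWIDTH * 1e3
        gpu_cost = k.gpu_time + transfer_ms
        mapping[k.name] = "gpu" if gpu_cost < k.cpu_time else "cpu"
    return mapping

if __name__ == "__main__":
    ks = [Kernel("stencil", cpu_time=90.0, gpu_time=6.0, bytes_moved=64 << 20),
          Kernel("reduce",  cpu_time=3.0,  gpu_time=1.0, bytes_moved=256 << 20)]
    # The transfer cost keeps "reduce" on the CPU despite its lower GPU compute time.
    print(map_kernels(ks))
```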

Automatic Parallelization

The importance of parallel programming has dramatically increased with the emergence of multicore and manycore architectures. We specifically focus on tools and techniques that enable the programmer to develop parallel programs easily. The task of identifying parallel code sections for different manycore architectures, including Intel MIC, can be carried out by the compiler. We aim to leverage the compiler infrastructure to parallelize applications automatically.
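
To give a flavor of what the compiler must establish, the toy check below flags loop-carried dependences for simple constant-offset array accesses; a real auto-parallelizer (for example, one based on polyhedral analysis) is far more general. The access representation and the function name are illustrative assumptions.

```python
# Toy dependence check, sketched only to illustrate the kind of analysis an
# auto-parallelizing compiler performs. Accesses are modeled as
# (array, offset) pairs meaning A[i + offset] inside a loop over i.
def is_parallelizable(writes, reads):
    """Return True if no loop-carried dependence exists between iterations.

    A dependence is loop-carried when the same array is written at one
    constant offset and read (or written) at a different offset."""
    for w_arr, w_off in writes:
        for arr, off in reads + writes:
            if arr == w_arr and off != w_off:
                return False
    return True

# a[i] = a[i] + b[i]      -> independent iterations, safe to parallelize
print(is_parallelizable(writes=[("a", 0)], reads=[("a", 0), ("b", 0)]))   # True
# a[i] = a[i - 1] + b[i]  -> loop-carried dependence, keep sequential
print(is_parallelizable(writes=[("a", 0)], reads=[("a", -1), ("b", 0)]))  # False
```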

Reliability-Aware 3D Chip Multiprocessor Design

The ability to stack separate dies in a single package enables three-dimensional integrated circuits (3D ICs). Heterogeneous 3D ICs provide even better opportunities to reduce power and increase performance per unit area. An important issue in designing a heterogeneous 3D IC is reliability. To improve it, one needs to select the data mapping and processor layout carefully. We address this problem using an integer linear programming (ILP) approach.
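
The snippet below is a deliberately small ILP of this flavor, written with the PuLP library: cores are assigned to 3D layers so that power-weighted distance from the heat sink (a crude thermal/reliability proxy) is minimized, subject to per-layer capacity. The power values, layer model, and objective are illustrative assumptions rather than our actual formulation.

```python
# Small illustrative ILP (PuLP): place each core on a 3D layer so that hot
# cores sit close to the heat sink, respecting per-layer slot capacities.
# All numbers below are made up for illustration.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

cores  = {"big0": 6.0, "big1": 5.0, "little0": 1.5, "little1": 1.0}  # watts
layers = {0: 1, 1: 2, 2: 2}   # layer index -> slot capacity; layer 0 is next to the sink
dist   = {0: 1, 1: 2, 2: 3}   # thermal-distance proxy per layer

prob = LpProblem("reliability_aware_placement", LpMinimize)
x = {(c, l): LpVariable(f"x_{c}_{l}", cat=LpBinary) for c in cores for l in layers}

# Objective: minimize total power-weighted distance from the heat sink.
prob += lpSum(cores[c] * dist[l] * x[c, l] for c in cores for l in layers)

for c in cores:                              # every core placed exactly once
    prob += lpSum(x[c, l] for l in layers) == 1
for l in layers:                             # respect per-layer slot capacity
    prob += lpSum(x[c, l] for c in cores) <= layers[l]

prob.solve()
print({c: l for (c, l), var in x.items() if value(var) == 1})
```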

Heterogeneous Chip Multiprocessors

Increasing application complexity and large dataset sizes make it imperative to consider novel architectures that are efficient from both performance and power angles. Chip multiprocessors (CMPs), where multiple processor cores are placed on the same die, are one such example. As technology scales, the International Technology Roadmap for Semiconductors (ITRS) projects that the number of cores in a CMP will increase drastically to satisfy the performance requirements of future applications. A critical question in CMP design is the size and strength of the cores. Homogeneous chip multiprocessors provide only one type of core to match varying application requirements, and consequently do not fully utilize the available chip area and power budget. The ability to dynamically switch between different cores and to power down unused cores gives heterogeneous chip multiprocessing a key advantage. One of the challenging problems in the context of heterogeneous chip multiprocessor systems is the placement of processor cores and storage blocks within the available chip area. Focusing on such a heterogeneous chip multiprocessor, we address several design decision problems.
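
As one concrete, simplified instance of such a design decision, the sketch below exhaustively searches for the mix of big and little cores that maximizes aggregate throughput within given area and power budgets; all per-core numbers and budgets are made-up values for illustration.

```python
# Hypothetical core-mix selection: pick how many big and little cores to place
# so that throughput is maximized without exceeding area and power budgets.
from itertools import product

BIG    = {"area": 4.0, "power": 6.0, "perf": 3.0}   # per-core cost / throughput
LITTLE = {"area": 1.0, "power": 1.2, "perf": 1.0}
AREA_BUDGET, POWER_BUDGET = 24.0, 30.0

best = None
for n_big, n_little in product(range(8), range(25)):
    area  = n_big * BIG["area"]  + n_little * LITTLE["area"]
    power = n_big * BIG["power"] + n_little * LITTLE["power"]
    if area > AREA_BUDGET or power > POWER_BUDGET:
        continue
    perf = n_big * BIG["perf"] + n_little * LITTLE["perf"]
    if best is None or perf > best[0]:
        best = (perf, n_big, n_little)

print(f"best mix: {best[1]} big + {best[2]} little cores, throughput {best[0]:.1f}")
```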

Dataflow Analysis

Memory is a key parameter in embedded systems since both the code complexity of embedded applications and the amount of data they process are increasing. While the memory capacity of embedded systems is continuously increasing, the increases in application complexity and dataset sizes are far greater. As a consequence, the memory space demand of code and data should be kept to a minimum. To reduce the memory space consumption of embedded systems, we propose a control flow graph (CFG) based technique. Specifically, it tracks the lifetime of instructions at the basic block level. Based on the CFG analysis, if a basic block is known to be unreachable in the rest of the program execution, the instruction memory space allocated to it is reclaimed. If that memory cannot be reclaimed, we instead try to compress the basic block. This way, it is possible to use the available on-chip memory effectively, thereby satisfying most instruction/data requests from the on-chip memory.
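
The fragment below sketches the reclamation step on a toy CFG, assuming hypothetical helper names: once control reaches a given block, any block that is no longer reachable through CFG edges can have its instruction-memory space freed, while still-reachable blocks remain candidates for compression.

```python
# Sketch of the basic idea: once execution reaches a given point, any basic
# block that can no longer execute (not reachable in the CFG) can have its
# instruction-memory space reclaimed. The CFG below is a made-up example.
def reachable(cfg, start):
    """Return the set of blocks reachable from `start` via CFG edges."""
    seen, stack = set(), [start]
    while stack:
        blk = stack.pop()
        if blk in seen:
            continue
        seen.add(blk)
        stack.extend(cfg.get(blk, []))
    return seen

def reclaimable(cfg, current_block):
    """Blocks whose instruction memory can be freed once `current_block` runs."""
    return set(cfg) - reachable(cfg, current_block)

cfg = {"init": ["entry"], "entry": ["loop"], "loop": ["loop", "exit"], "exit": []}
# After control reaches "loop", "init" and "entry" can never execute again.
print(reclaimable(cfg, "loop"))   # {'init', 'entry'}
```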

NoC-Based Heterogeneous Systems

It is commonly accepted that the current performance trajectory of doubling chip performance every 24 to 36 months can be sustained by integrating multiple processors on a chip rather than by increasing the clock rate of single processors, due to the power limitations in processor design. Multicore architectures have already made their way into industry, with more aggressive configurations being prototyped, such as Intel's 80-core TeraFlop chip. Since future technologies promise the integration of billions of transistors on a chip, having hundreds of processors on a single chip, along with an underlying memory hierarchy and an interconnection system, is entirely feasible. Point-to-point buses will no longer be feasible beyond a certain number of nodes, as the communication requirements between nodes grow rapidly with the number of processors. A viable interconnection system shown to be promising for these future CMPs is the Network-on-Chip (NoC), since it provides scalable, flexible, and programmable communication. With an NoC-based chip multiprocessor (NoC-based CMP) as the computing platform, a very rich set of research challenges arises. Circuit and architectural challenges such as router design, IP placement, and sensor placement are currently being studied in both industry and academia. In comparison, work on heterogeneous alternatives for these architectures has received considerably less attention.

Memory Optimization

A critical component of a chip multiprocessor is its memory subsystem, because both the power and performance behavior of a chip multiprocessor are largely shaped by its on-chip memory. While it is possible to employ conventional memory designs such as pure private memory or pure shared memory, such designs are very general and rigid, and may not produce the best behavior for a given embedded application. Our belief is that, for embedded systems that repeatedly execute the same application, it makes sense to design a customized, software-managed on-chip memory architecture. Such a memory architecture should be a hybrid one that contains both private and shared components. In the hybrid case, some processors have private memories while others do not, and different processor groups can share memory in different configurations. For example, one memory component can be shared by two processors, whereas another can be shared by three.

Designing such a customized hybrid memory architecture is not trivial, for at least three main reasons. First, since the memory architecture to be designed changes from one application to another, a manual approach is not suitable; it would be extremely time consuming and error-prone to go through the same complex process each time we design a memory system for a new application. We therefore need an automated strategy that produces the most suitable design for a given application. Second, the design of such a memory needs to be guided by a tool that can extract the data sharing exhibited by the application at hand: to decide how the different memory components should be shared by parallel processors, one needs to capture the data sharing pattern across the processors. Third, data allocation in a hybrid memory system is not a trivial problem and should be carried out along with data partitioning if we are to obtain the best results.
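
A rough sketch of the automated first step is shown below: given a (toy) map of which processors access each data block, it derives a hybrid layout that places single-owner data in private memories and groups shared data into components shared by exactly the processors that touch it. The access map, naming scheme, and function are illustrative assumptions.

```python
# Rough sketch: derive a hybrid private/shared memory layout from a data
# sharing pattern. The access map below is a toy example, not profiled data.
from collections import defaultdict

def derive_memory_layout(access_map):
    """access_map: data block -> set of processor ids that access it."""
    layout = defaultdict(list)            # memory component -> data blocks
    for block, procs in access_map.items():
        if len(procs) == 1:
            layout[f"private_p{next(iter(procs))}"].append(block)
        else:
            name = "shared_" + "_".join(f"p{p}" for p in sorted(procs))
            layout[name].append(block)
    return dict(layout)

accesses = {"A": {0}, "B": {0, 1}, "C": {1, 2, 3}, "D": {2}}
print(derive_memory_layout(accesses))
# {'private_p0': ['A'], 'shared_p0_p1': ['B'], 'shared_p1_p2_p3': ['C'], 'private_p2': ['D']}
```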