S E M I N A R
M.S. in Computer Engineering
Supervisor: Prof. Dr. Fazlı Can
The empirical investigation of the effectiveness of information retrieval systems (search engines) requires a test collection composed of a set of documents, a set of query topics, and a set of relevance judgments indicating which documents are relevant to which topics. Human relevance judgments are expensive to obtain and subjective. Moreover, document collections and user interests change quickly. Hence there is a great need for an automatic way of evaluating the performance of search engines. Furthermore, recent studies have shown that differences in human relevance assessments do not affect the relative performance of information retrieval systems. Based on these observations, in this thesis, we propose and use data fusion to replace human relevance judgments and introduce an automatic evaluation method. The major contributions of this thesis are: (1) an automatic information retrieval performance evaluation method that uses data fusion algorithms for the first time in the literature (the thesis includes a comprehensive statistical assessment of the method with several TREC systems, which shows that the method's results correlate positively and significantly with the actual human-based results), (2) system selection methods (using the concept of system bias and iterative fusion) for data fusion, aiming at even higher correlation between automatic and human-based results, (3) several practical implications stemming from the fact that the precision values implied by the proposed method are not statistically different from (or close in value to) those of actual information retrieval systems.
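To illustrate the core idea, here is a minimal sketch (not the thesis implementation) of how data fusion can stand in for human relevance judgments: fuse the ranked lists of several retrieval systems (a simple CombSUM-style rank scoring is assumed here), treat the top-k fused documents as pseudo-relevant, then score each system against those pseudo-judgments. All document IDs, the scoring scheme, and the cutoff values are illustrative assumptions.

```python
from collections import defaultdict

def fuse(runs, depth=10):
    """Sum simple rank-based scores across systems (hypothetical scoring)."""
    scores = defaultdict(float)
    for ranked_docs in runs:
        for rank, doc in enumerate(ranked_docs[:depth]):
            scores[doc] += depth - rank  # earlier ranks contribute more
    # Return documents ordered by total fused score, best first.
    return sorted(scores, key=scores.get, reverse=True)

def precision_at_k(ranked_docs, pseudo_relevant, k=5):
    """Precision of one system's run against the fusion-based pseudo-judgments."""
    hits = sum(1 for d in ranked_docs[:k] if d in pseudo_relevant)
    return hits / k

# Three toy system runs for one query (document IDs are made up).
runs = [
    ["d1", "d2", "d3", "d7", "d9"],
    ["d2", "d1", "d4", "d3", "d8"],
    ["d1", "d3", "d2", "d5", "d6"],
]

# Top-4 fused documents act as the automatic relevance judgments.
pseudo_qrels = set(fuse(runs)[:4])

for i, run in enumerate(runs):
    print(f"system {i}: P@5 = {precision_at_k(run, pseudo_qrels):.2f}")
```

The sketch captures the evaluation loop only; the thesis additionally studies which systems to include in the fusion (via system bias and iterative fusion) so that the automatic rankings track the human-based ones as closely as possible.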
DATE: August 14, 2003, Thursday @ 11:00