Bilkent University
Department of Computer Engineering
PhD THESIS PRESENTATION
ROBUST DEEP LEARNING UNDER DISTRIBUTION SHIFT: INVARIANT FEATURE LEARNING AND RELIABLE TEST-TIME ADAPTATION
Saeed Karimi
PhD Student
(Supervisor: Assoc. Prof. Dr. Hamdi Dibeklioğlu)
Computer Engineering Department
Bilkent University
Abstract: Deep learning models often suffer significant performance degradation when deployed in environments whose data distributions differ from those encountered during training. This distribution shift remains a central challenge for robust visual recognition. Although Domain Generalization (DG) strives to learn models that generalize to unseen domains without accessing target data, recent studies show that many DG techniques yield limited improvements over empirical risk minimization due to reliance on spurious, domain-specific features. To address this issue, the first part of this thesis introduces Specific Domain Training (SDT), a method that disentangles spurious and invariant features via specific-domain sampling, masking, and variance-aware weight averaging. SDT improves both theoretical robustness and practical performance on DG benchmarks. The second part of the thesis focuses on Test-Time Adaptation (TTA), which adapts a pretrained model to incoming test samples without labels. Existing TTA methods often rely on noisy pseudo-labels and fail to leverage informative structure from source domains. To mitigate these limitations, we develop three complementary approaches. SATA uses source-domain style statistics to identify style-invariant test samples, ensuring stable entropy minimization while regularizing unreliable samples through consistency constraints. AdaPAC leverages subclass prototypes extracted using class-specific clustering to capture intra-class structure, selecting test samples that align well with source clusters and adapting the model with prototype-guided contrastive objectives. Shift-ACT introduces shift-aware, class-specific dynamic thresholding based on confidence discrepancies between source and target distributions, enabling reliable sample selection under class-wise distribution shifts. Together, these contributions advance the reliability of DG and TTA by reducing reliance on spurious cues, improving sample selection, and enabling robust adaptation under distribution shifts.
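A minimal sketch of the shared mechanism underlying the TTA methods above: entropy minimization restricted to test samples judged reliable. It assumes a pretrained PyTorch classifier (hypothetical names `model`, `test_loader`) and a fixed entropy threshold; SATA, AdaPAC, and Shift-ACT replace this fixed criterion with style-, prototype-, and shift-aware selection, respectively.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def bn_affine_params(model):
        # Adapt only BatchNorm affine parameters, a common choice in test-time adaptation.
        params = []
        for m in model.modules():
            if isinstance(m, nn.modules.batchnorm._BatchNorm):
                if m.weight is not None:
                    params.append(m.weight)
                if m.bias is not None:
                    params.append(m.bias)
        return params

    def adapt_on_batch(model, x, optimizer, entropy_threshold=0.4):
        # Minimize prediction entropy on the unlabeled test batch, but only for
        # samples whose entropy is already low (a simple, fixed reliability criterion).
        logits = model(x)
        probs = F.softmax(logits, dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
        reliable = entropy < entropy_threshold
        if reliable.any():
            loss = entropy[reliable].mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        return logits.detach()

    # Usage (hypothetical names): adapt a pretrained source model on streaming test batches.
    # optimizer = torch.optim.SGD(bn_affine_params(model), lr=1e-3)
    # for x in test_loader:
    #     preds = adapt_on_batch(model, x, optimizer)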
DATE: Wednesday, January 14 @ 09:00    PLACE: EA 409