Machine learning is the backbone of modern computer vision systems. The increasing availability of computational power and large-scale datasets, together with advances in learning algorithms, has enabled models to learn complex patterns directly from data, moving beyond traditional rule-based programming. A key factor in this success is representation learning, which transforms data into compact, meaningful representations for tasks such as image classification, segmentation, and synthesis. However, the No Free Lunch (NFL) theorems state that no algorithm can perform well across all tasks without incorporating task-specific knowledge. To build effective models, practitioners must therefore introduce inductive biases that guide learning toward desirable solutions. Regularization is a central tool for this purpose: it constrains the solution space and encodes preferences into the learning process. This thesis presents novel regularization techniques to guide representation learning in computer vision, focusing on three complementary strategies.
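The abstract's point that regularization "constrains the solution space and encodes preferences" can be sketched with a textbook example that is not taken from the thesis itself: an L2 (ridge) penalty added to a least-squares objective biases the learner toward small-norm weights. All names and values below are illustrative assumptions.

```python
import numpy as np

# Illustrative data: a linear model with a few irrelevant features.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
w_true = np.array([2.0, -1.0, 0.0, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=20)

def fit(X, y, lam=0.0):
    # Ridge closed form: w = (X^T X + lam * I)^{-1} X^T y.
    # lam = 0 recovers ordinary least squares.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_ols = fit(X, y, lam=0.0)     # unregularized solution
w_ridge = fit(X, y, lam=10.0)  # L2-regularized solution

# The penalty shrinks the solution toward the origin: this is an
# inductive bias (a preference for small weights) encoded into learning.
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))
```

The same principle underlies the weight decay, norm penalties, and structural constraints used when training vision models: the penalty term does not add information about the task, but it restricts which solutions the optimizer favors.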
This entry is part of the university bibliography.
The document is provided by the publication server of the University Library Mannheim.