Multifaceted analysis of deep convolutional neural networks and novel Fourier modules
Grabinski, Julia
PDF: Dissertation_Julia_Grabinski.pdf — published version, download (14 MB)
URN: urn:nbn:de:bsz:180-madoc-704985
Document type: Dissertation
Year of publication: 2025
Place of publication: Mannheim
University: Universität Mannheim
Referee: Keuper, Margret
Date of oral examination: 2025
Language of publication: English
Institution: Fakultät für Wirtschaftsinformatik und Wirtschaftsmathematik > Machine Learning (Keuper 2024-)
Subject area: 004 Computer Science
Keywords (English): computer vision, machine learning, Fourier analysis, CNNs

Abstract:
The increasing reliance on neural networks in everyday applications underscores a critical challenge: ensuring their robustness and reliability beyond idealized conditions. Evaluating vision classification models solely through clean accuracy and spatial perspectives is insufficient. We therefore employ a multifaceted analysis of robust models and leverage Fourier theory to enhance robustness, efficiency, and our fundamental understanding of convolutional neural networks. This thesis explores the interplay between adversarial robustness, confidence calibration, efficiency, and sampling artifacts through the lens of Fourier theory in convolutional neural networks. We first demonstrate that adversarially robust models exhibit significantly lower overconfidence than their non-robust counterparts, and that even subtle modifications to fundamental network components can significantly improve confidence calibration, highlighting the power of architectural design. Building on this, we investigate aliasing effects in robust models, revealing that they downsample more effectively than standard models, leading to reduced aliasing. To quantify this phenomenon, we introduce a novel aliasing measure and show its connection to catastrophic overfitting in FGSM adversarial training, inspiring an early-stopping criterion based on our aliasing measure.

Leveraging these discoveries, we present Frequency Low Cut (FLC) Pooling and Aliasing and Sinc Artifact-free Pooling (ASAP), novel Fourier-domain downsampling methods designed to be inherently aliasing-free. These techniques enhance native robustness and improve adversarial training stability, effectively addressing catastrophic overfitting in FGSM adversarial training. Building upon our Fourier-domain investigations, we present Neural Implicit Frequency Filters (NIFFs), enabling efficient large convolutions. By leveraging neural implicit functions for weight learning and efficient Fourier-domain convolution, NIFFs provide a feasible and fair comparison to large spatial convolutions. Using NIFFs, we analyse learned kernel-size preferences, revealing insights that facilitate more efficient network design. We demonstrate that optimal feature extraction often requires kernels larger than the typical 3 × 3, with 9 × 9 kernels being predominantly learned by the network, especially on ImageNet-1k.

Our multifaceted analysis and findings contribute to a deeper understanding of adversarial robustness, confidence calibration, sampling artifacts, and the role of Fourier theory in convolutional neural network design, paving the way for more robust and efficient deep learning models.
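The core idea behind Fourier-domain, aliasing-free downsampling as described in the abstract — discarding every frequency above the new Nyquist limit before reducing resolution — can be sketched as follows. This is a minimal illustration of the concept only; the function name, crop convention, and normalization are assumptions for the sketch and not the thesis's actual FLC Pooling implementation:

```python
import numpy as np

def flc_pool(x, factor=2):
    """Downsample a 2-D map by keeping only low Fourier frequencies.

    Because all frequencies above the new Nyquist limit are cut
    before the resolution is reduced, no aliasing can occur by
    construction (in contrast to strided spatial downsampling).
    """
    h, w = x.shape
    # Move the zero-frequency component to the center of the spectrum.
    spec = np.fft.fftshift(np.fft.fft2(x))
    # Keep only the central low-frequency block of the target size.
    nh, nw = h // factor, w // factor
    top, left = (h - nh) // 2, (w - nw) // 2
    low = spec[top:top + nh, left:left + nw]
    # Back to the spatial domain; rescale so intensities are preserved.
    out = np.fft.ifft2(np.fft.ifftshift(low)) / factor**2
    return np.real(out)
```

A constant image contains only the zero frequency, so the sketch reproduces it exactly at half resolution; a high-frequency checkerboard, by contrast, is removed rather than folded back into spurious low frequencies.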
This entry is part of the university bibliography.
The document is provided by the publication server of the University Library Mannheim.