Quantifying a critical training set size for generalization and overfitting using teacher neural networks


Lange, Rupert; Männer, Reinhard


Full text: TR-95-003.pdf - Published version (Download, 102 kB)

URL: http://ub-madoc.bib.uni-mannheim.de/802
URN: urn:nbn:de:bsz:180-madoc-8025
Document Type: Working paper
Year of publication: 1995
Journal or publication series: None
Publication language: English
Institution: School of Business Informatics and Mathematics > Sonstige - Fakultät für Wirtschaftsinformatik und Wirtschaftsmathematik
MADOC publication series: Veröffentlichungen der Fakultät für Mathematik und Informatik > Institut für Informatik > Technical Reports
Subject: 004 Computer science, internet
Subject headings (SWD): Neuronales Netz
Abstract: Teacher neural networks provide a systematic experimental approach to studying neural networks. A teacher is a neural network that is used to generate the examples of the training and testing sets. The weights of the teacher and the input parts of the examples are drawn from some probability distribution. The input parts are then presented to the teacher network and recorded together with its responses. A pupil neural network is subsequently trained on this data. Hence a neural network, rather than a real or synthetic application, defines the task against which the performance of the pupil is evaluated. One issue is how training success depends on the size of the training set. Surprisingly, there exists a critical value above which the training error drops to zero. This critical training set size is proportional to the number of weights in the neural network. A sudden transition also exists for the generalization capability: the generalization error measured on a large independent testing set drops to zero, and the effect of overfitting vanishes. Thus there are two regions with a sudden transition in between: below the critical training set size, training and generalization fail and severe overfitting occurs; above it, training and generalization are perfect and there is no overfitting.
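The teacher-pupil setup described in the abstract can be sketched in a few lines. The following NumPy example is a minimal illustration only, not the report's implementation: the network sizes, the tanh activation, the Gaussian weight and input distributions, the gradient-descent training loop, and all hyperparameters are assumptions chosen for demonstration. It generates a training set from a fixed random teacher, trains a pupil of the same architecture, and compares training and test error for a training set size below and above the order of the number of weights.

```python
# Minimal sketch of the teacher-pupil setup, using plain NumPy.
# Architecture, activation, distributions, and hyperparameters are
# illustrative assumptions, not taken from the report.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 5, 3, 1        # small network; 5*3 + 3*1 = 18 weights

def init_weights():
    """Draw all weights from a standard normal distribution."""
    return (rng.standard_normal((N_IN, N_HID)),
            rng.standard_normal((N_HID, N_OUT)))

def forward(x, w):
    """Two-layer tanh network; returns output and hidden activations."""
    w1, w2 = w
    h = np.tanh(x @ w1)
    return np.tanh(h @ w2), h

# 1. Teacher: a fixed random network that defines the task.
teacher = init_weights()

def make_set(p):
    """Generate p examples: random inputs plus the teacher's responses."""
    x = rng.standard_normal((p, N_IN))
    y, _ = forward(x, teacher)
    return x, y

def train_pupil(x, y, epochs=5000, lr=0.05):
    """Train a pupil of the same architecture with batch gradient descent."""
    w1, w2 = init_weights()
    for _ in range(epochs):
        out, h = forward(x, (w1, w2))
        err = out - y                           # MSE error signal
        d_out = err * (1 - out**2)              # tanh derivative, output layer
        d_hid = (d_out @ w2.T) * (1 - h**2)     # backpropagated to hidden layer
        w2 -= lr * h.T @ d_out / len(x)
        w1 -= lr * x.T @ d_hid / len(x)
    return w1, w2

# 2. Compare pupil performance for training sets below and above the
#    order of the number of weights, on a large independent test set.
x_test, y_test = make_set(2000)
for p in (5, 200):
    x_tr, y_tr = make_set(p)
    pupil = train_pupil(x_tr, y_tr)
    tr_err = np.mean((forward(x_tr, pupil)[0] - y_tr)**2)
    te_err = np.mean((forward(x_test, pupil)[0] - y_test)**2)
    print(f"p={p:4d}  training MSE={tr_err:.4f}  test MSE={te_err:.4f}")
```

With the larger training set, both training and test error should come out small, while the small set typically yields a low training error but a clearly larger test error, illustrating the overfitting regime described in the abstract; the precise critical size and the sharpness of the transition reported in the paper are not reproduced by this toy sketch.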
Additional information: The document is provided by the publication server of the University Library of Mannheim.