Efficient learning of discrete-continuous computation graphs
Friede, David; Niepert, Mathias
URL: https://proceedings.neurips.cc/paper_files/paper/2...
Document Type: Conference or workshop publication
Year of publication: 2022
Book title: 35th Conference on Neural Information Processing Systems (NeurIPS 2021) : online, 6-14 December 2021
Journal or publication series: Advances in Neural Information Processing Systems
Volume: 34
Page range: 6720-6732
Conference title: NeurIPS 2021
Location of the conference venue: Online
Date of the conference: 6-14 December 2021
Editors: Ranzato, Marc'Aurelio; Beygelzimer, Alina; Dauphin, Yann N.; Liang, Percy; Wortman Vaughan, Jennifer
Place of publication: Red Hook, NY
Publisher: Curran Associates
Publication language: English
Institution: School of Business Informatics and Mathematics > Practical Computer Science II: Artificial Intelligence (Stuckenschmidt 2009-)
Subject: 004 Computer science, internet
Abstract: Numerous models for supervised and reinforcement learning benefit from combinations of discrete and continuous model components. End-to-end learnable discrete-continuous models are compositional, tend to generalize better, and are more interpretable. A popular approach to building discrete-continuous computation graphs is to integrate discrete probability distributions into neural networks using stochastic softmax tricks. Prior work has mainly focused on computation graphs with a single discrete component on each of the graph's execution paths. We analyze the behavior of more complex stochastic computation graphs with multiple sequential discrete components. We show that it is challenging to optimize the parameters of these models, mainly due to small gradients and local minima. We then propose two new strategies to overcome these challenges. First, we show that increasing the scale parameter of the Gumbel noise perturbations during training improves the learning behavior. Second, we propose dropout residual connections specifically tailored to stochastic, discrete-continuous computation graphs. With an extensive set of experiments, we show that we can train complex discrete-continuous models which cannot be trained with standard stochastic softmax tricks. We also show that complex discrete-stochastic models generalize better than their continuous counterparts on several benchmark datasets.
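The two strategies named in the abstract can be illustrated with a short, hypothetical PyTorch sketch. The function name scaled_gumbel_softmax, the noise_scale parameter, and the DropoutResidual wrapper below are illustrative assumptions based only on the abstract, not the authors' published implementation.

import torch
import torch.nn.functional as F

def scaled_gumbel_softmax(logits, tau=1.0, noise_scale=1.0, hard=False):
    # Sample standard Gumbel noise and multiply it by `noise_scale` before
    # perturbing the logits. The abstract's first strategy increases this
    # scale during training; the concrete schedule is not shown here.
    u = torch.rand_like(logits).clamp_min(1e-10)
    gumbel = -torch.log((-torch.log(u)).clamp_min(1e-10))
    y_soft = F.softmax((logits + noise_scale * gumbel) / tau, dim=-1)
    if hard:
        # Straight-through estimator: discrete one-hot on the forward pass,
        # gradients of the relaxed sample on the backward pass.
        index = y_soft.argmax(dim=-1, keepdim=True)
        y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
        y_soft = y_hard + y_soft - y_soft.detach()
    return y_soft

class DropoutResidual(torch.nn.Module):
    # Illustrative dropout residual connection: during training the continuous
    # input (assumed to have the same shape as the discrete output) is added
    # back with probability `keep_prob`, letting gradients bypass the discrete
    # bottleneck; at evaluation time only the discrete path is used.
    def __init__(self, keep_prob=0.5):
        super().__init__()
        self.keep_prob = keep_prob

    def forward(self, continuous_in, discrete_out):
        if self.training and torch.rand(()).item() < self.keep_prob:
            return discrete_out + continuous_in
        return discrete_out

A training loop might, for example, raise noise_scale gradually over the first epochs and insert DropoutResidual around each discrete component; the actual schedule and evaluation-time handling of the residual path should be taken from the paper itself.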
This entry is part of the university bibliography.