Ensembles of recurrent neural networks for robust time series forecasting
Krstanovic, Sascha; Paulheim, Heiko
DOI: https://doi.org/10.1007/978-3-319-71078-5_3
URL: https://link.springer.com/chapter/10.1007/978-3-31...
Further URL: http://www.heikopaulheim.com/docs/sgai_2017.pdf
Document type: Conference publication
Year of publication: 2017
Book title: Artificial Intelligence XXXIV: 37th SGAI International Conference on Artificial Intelligence, AI 2017, Cambridge, UK, December 12-14, 2017, Proceedings
Journal or series title: Lecture Notes in Computer Science
Volume: 10630
Page range: 34-46
Event title: SGAI International Conference on Artificial Intelligence
Event location: Cambridge, UK
Event date: December 12-14, 2017
Editor: Bramer, Max
Place of publication: Berlin [et al.]
Publisher: Springer
ISBN: 978-3-319-71077-8, 978-3-319-71078-5
ISSN: 0302-9743, 1611-3349
Language of publication: English
Institution: School of Business Informatics and Mathematics > Information Systems V: Web-based Systems (Bizer 2012-); School of Business Informatics and Mathematics > Web Data Mining (Junior Professorship) (Paulheim 2013-2017)
Subject area: 004 Computer science
Abstract: Time series forecasting is a problem that strongly depends on the underlying process which generates the data sequence. Hence, finding good model fits often involves complex and time-consuming tasks such as extensive data preprocessing, designing hybrid models, or heavy parameter optimization. Long Short-Term Memory (LSTM), a variant of recurrent neural networks (RNNs), provides state-of-the-art forecasting performance without prior assumptions about the data distribution. LSTMs are, however, highly sensitive to the chosen network architecture and parameter selection, which makes it difficult to come up with a one-size-fits-all solution without sophisticated optimization and parameter tuning. To overcome these limitations, we propose an ensemble architecture that combines the forecasts of a number of differently parameterized LSTMs into a robust final estimate which, on average, performs better than the majority of the individual LSTM base learners and provides stable results across different datasets. The approach is easily parallelizable, and we demonstrate its effectiveness on several real-world datasets.
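The abstract describes the architecture only at a high level. As a rough illustration, the following minimal Python sketch trains several LSTM base learners with different hyperparameters on the same series and averages their one-step forecasts into a final estimate. The toy sine-wave series, the (lag, units) grid, and plain averaging as the combination rule are assumptions made for this sketch; the paper's actual datasets, parameter grid, and combination scheme may differ.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_windows(series, lag):
    # Slide a window of length `lag` over the series; the target is the next value.
    X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
    return X[..., None].astype("float32"), series[lag:].astype("float32")

def build_lstm(lag, units):
    # One base learner; each ensemble member gets its own (lag, units) setting.
    model = Sequential([LSTM(units, input_shape=(lag, 1)), Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    return model

# Toy series standing in for a real-world dataset.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0.0, 40.0, 600)) + 0.1 * rng.standard_normal(600)
train_end = 500
test = series[train_end:]

# Hypothetical hyperparameter grid of (lag, units) pairs.
configs = [(8, 16), (8, 32), (16, 16), (16, 32), (24, 64)]

forecasts = []
for lag, units in configs:
    X_tr, y_tr = make_windows(series[:train_end], lag)
    model = build_lstm(lag, units)
    model.fit(X_tr, y_tr, epochs=20, batch_size=32, verbose=0)
    # One-step-ahead forecasts over the test range, fed with true lagged values.
    X_te, _ = make_windows(series[train_end - lag:], lag)
    forecasts.append(model.predict(X_te, verbose=0).ravel())

# Robust final estimate: average the base learners' forecasts.
ensemble = np.mean(forecasts, axis=0)
print("ensemble test MSE:", float(np.mean((ensemble - test) ** 2)))

Since each base learner is trained independently, the loop over configurations is the part that can be run in parallel, which matches the abstract's remark that the approach is easily parallelizable.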
This entry is part of the university bibliography.