Dynamic Parameter Allocation in Parameter Servers
Renz-Wieland, Alexander; Gemulla, Rainer; Zeuch, Steffen; Markl, Volker
PDF: Dynamic Parameter Allocation in Parameter Servers.pdf (published version, 522 kB)
DOI: https://doi.org/10.14778/3407790.3407796
URL: https://madoc.bib.uni-mannheim.de/57320
Additional URL: https://dl.acm.org/doi/10.14778/3407790.3407796
URN: urn:nbn:de:bsz:180-madoc-573200
|
Document type: Conference or workshop publication
Year of publication: 2020
Journal or publication series: Proceedings of the VLDB Endowment
Volume: 13, Issue 12
Page range: 1877-1890
Conference title: 46th International Conference on Very Large Data Bases
Conference venue: Online
Conference dates: August 31 - September 4, 2020
Place of publication: New York, NY
Publisher: Association for Computing Machinery
ISSN: 2150-8097
Publication language: English
Institution: School of Business Informatics and Mathematics > Praktische Informatik I (Gemulla 2014-)
Subject: 004 Computer science, internet
Abstract: To keep up with increasing dataset sizes and model complexity, distributed training has become a necessity for large machine learning tasks. Parameter servers ease the implementation of distributed parameter management (a key concern in distributed training), but can induce severe communication overhead. To reduce this overhead, distributed machine learning algorithms use techniques to increase parameter access locality (PAL), achieving up to linear speed-ups. We found, however, that existing parameter servers provide only limited support for PAL techniques and therefore prevent efficient training. In this paper, we explore whether and to what extent PAL techniques can be supported, and whether such support is beneficial. We propose to integrate dynamic parameter allocation into parameter servers, describe an efficient implementation of such a parameter server called Lapse, and experimentally compare its performance to existing parameter servers across a number of machine learning tasks. We found that Lapse provides near-linear scaling and can be orders of magnitude faster than existing parameter servers.
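The abstract's core idea, dynamic parameter allocation, can be illustrated with a minimal single-process sketch: each parameter key is owned by exactly one node, and ownership can move at run time so that subsequent accesses become node-local. All names here (`Node`, `relocate`, the shared `directory`) and the simplified pull/push interface are illustrative assumptions, not Lapse's actual API.

```python
# Sketch of dynamic parameter allocation: a shared ownership directory
# maps each parameter key to the node that currently stores it, and
# relocate() moves a key to the accessing node (the PAL step).

class Node:
    def __init__(self, node_id, directory):
        self.node_id = node_id
        self.directory = directory  # shared mapping: key -> owning Node
        self.store = {}             # parameters this node currently owns

    def pull(self, key):
        owner = self.directory[key]
        # If owner is self, this access is local (no communication);
        # otherwise a real system would send a network request here.
        return owner.store[key]

    def push(self, key, delta):
        owner = self.directory[key]
        owner.store[key] = owner.store.get(key, 0.0) + delta

    def relocate(self, key):
        """Move ownership of `key` to this node to make accesses local."""
        owner = self.directory[key]
        if owner is not self:
            self.store[key] = owner.store.pop(key)
            self.directory[key] = self

# Usage: two nodes; node 1 relocates a hot parameter before updating it.
directory = {}
n0, n1 = Node(0, directory), Node(1, directory)
directory["w_42"] = n0
n0.store["w_42"] = 1.0

n1.relocate("w_42")   # ownership moves from node 0 to node 1
n1.push("w_42", 0.5)  # now a purely local update
print(n1.pull("w_42"))             # -> 1.5
print(directory["w_42"].node_id)   # -> 1
```

In a static-allocation parameter server, node 1's push and pull above would each cross the network; relocating the parameter once makes every later access local, which is the source of the locality gains the abstract describes.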
This entry is part of the university bibliography.
The document is provided by the publication server of the University Library of Mannheim.
ORCID: Gemulla, Rainer: 0000-0003-2762-0050