You CAN teach an old dog new tricks! On training knowledge graph embeddings


Ruffinelli, Daniel; Broscheit, Samuel; Gemulla, Rainer


PDF: you_can_teach_an_old_dog_new_tricks_on_training_knowledge_graph_embeddings.pdf - Published (428 kB)

URL: https://madoc.bib.uni-mannheim.de/54954
Additional URL: https://openreview.net/forum?id=BkxSmlBFvr
URN: urn:nbn:de:bsz:180-madoc-549543
Document Type: Conference or workshop publication
Year of publication: 2020
Book title: ICLR 2020 : Eighth International Conference on Learning Representations : virtual conference, formerly Addis Ababa, Ethiopia, Sun Apr 26th through May 1st
Page range: 1-12
Conference title: ICLR 2020
Location of the conference venue: Online
Date of the conference: April 26 - May 1, 2020
Editor: Rush, Alexander
Place of publication: La Jolla, CA
Publishing house: ICLR
Publication language: English
Institution: School of Business Informatics and Mathematics > Practical Computer Science I: Data Analytics (Gemulla 2014-)
License: CC BY 4.0 Creative Commons Attribution 4.0 International (CC BY 4.0)
Subject: 004 Computer science, internet
Keywords (English): knowledge graph embeddings, hyperparameter optimization
Abstract: Knowledge graph embedding (KGE) models learn algebraic representations of the entities and relations in a knowledge graph. A vast number of KGE techniques for multi-relational link prediction have been proposed in the recent literature, often with state-of-the-art performance. These approaches differ along a number of dimensions, including model architecture, training strategy, and approach to hyperparameter optimization. In this paper, we take a step back and aim to summarize and quantify empirically the impact of each of these dimensions on model performance. We report on the results of an extensive experimental study with popular model architectures and training strategies across a wide range of hyperparameter settings. We found that, when trained appropriately, the relative performance differences between various model architectures often shrink and sometimes even reverse when compared to prior results. For example, RESCAL (Nickel et al., 2011), one of the first KGE models, showed strong performance when trained with state-of-the-art techniques; it was competitive with or outperformed more recent architectures. We also found that good model configurations (often superior to those reported in prior studies) can be found by exploring relatively few random samples from a large hyperparameter space. Our results suggest that many of the more advanced architectures and techniques proposed in the literature should be revisited to reassess their individual benefits. To foster further reproducible research, we provide all our implementations and experimental results as part of the open-source LibKGE framework.
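For illustration, the Python/NumPy sketch below shows the RESCAL scoring function mentioned in the abstract, score(s, r, o) = e_s^T R_r e_o, along with a toy version of the random hyperparameter sampling the study relies on. This is a minimal sketch under stated assumptions, not LibKGE's implementation; all names (rescal_score, sample_config), dimensions, and the search space are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def rescal_score(e_s, R_r, e_o):
        # RESCAL bilinear score: e_s^T R_r e_o (Nickel et al., 2011).
        return e_s @ R_r @ e_o

    # Toy setup (illustrative): 5 entities, 2 relations, embedding dimension d = 4.
    n_entities, n_relations, d = 5, 2, 4
    E = rng.normal(size=(n_entities, d))       # one d-vector per entity
    R = rng.normal(size=(n_relations, d, d))   # one d x d matrix per relation

    print(rescal_score(E[0], R[1], E[3]))      # score of the triple (entity 0, relation 1, entity 3)

    # Toy random search: draw a handful of configurations from a large space,
    # train and evaluate each (omitted here), and keep the best.
    search_space = {
        "dim": [128, 256, 512],
        "lr": (1e-4, 1e-1),                    # sampled log-uniformly
        "batch_size": [128, 256, 512, 1024],
    }

    def sample_config():
        lo, hi = np.log(search_space["lr"][0]), np.log(search_space["lr"][1])
        return {
            "dim": int(rng.choice(search_space["dim"])),
            "lr": float(np.exp(rng.uniform(lo, hi))),
            "batch_size": int(rng.choice(search_space["batch_size"])),
        }

    configs = [sample_config() for _ in range(30)]   # "relatively few random samples"

In LibKGE itself, models, training strategies, and hyperparameter searches are specified through configuration files rather than ad-hoc scripts like this one.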




This entry is part of the university bibliography.

The document is provided by the publication server of the University Library of Mannheim.



