Ontology Alignment Evaluation Initiative: six years of experience


Euzenat, Jérôme; Meilicke, Christian; Stuckenschmidt, Heiner; Shvaiko, Pavel; Trojahn, Cássia



DOI: https://doi.org/10.1007/978-3-642-22630-4_6
URL: https://link.springer.com/chapter/10.1007/978-3-64...
Additional URL: http://publications.wim.uni-mannheim.de/informatik...
Document Type: Book chapter
Year of publication: 2011
Book title: Journal on Data Semantics XV
Series: Lecture Notes in Computer Science
Volume: 6720
Page range: 158-192
Editor: Spaccapietra, Stefano
Place of publication: Berlin [et al.]
Publisher: Springer
ISBN: 978-3-642-22629-8
ISSN: 0302-9743, 1611-3349
Publication language: English
Institution: Non-faculty institutions > Institut für Enterprise Systems (InES)
School of Business Informatics and Mathematics > Practical Computer Science II: Artificial Intelligence (Stuckenschmidt 2009-)
Subject: 004 Computer science, Internet
Keywords (English): Benchmark, Ontology Matching, Evaluation
Abstract: In the area of semantic technologies, benchmarking and systematic evaluation are not yet as established as in other areas of computer science, e.g., information retrieval. In spite of successful attempts, more effort and experience are required in order to achieve such a level of maturity. In this paper, we report results and lessons learned from the Ontology Alignment Evaluation Initiative (OAEI), a benchmarking initiative for ontology matching. The goal of this work is twofold: on the one hand, we document the state of the art in evaluating ontology matching methods and provide potential participants of the initiative with a better understanding of the design and the underlying principles of the OAEI campaigns. On the other hand, we report the experiences gained in this particular area of semantic technologies to potential developers of benchmarks for other kinds of systems. For this purpose, we describe the evaluation design used in the OAEI campaigns in terms of datasets, evaluation criteria and workflows, provide a global view of the results of the campaigns carried out from 2005 to 2010, and discuss upcoming trends, both specific to ontology matching and generally relevant to the evaluation of semantic technologies. Finally, we argue that there is a need for further automation of benchmarking to shorten the feedback cycle for tool developers.
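For readers unfamiliar with the evaluation criteria mentioned in the abstract: OAEI campaigns score matching systems primarily by precision, recall and F-measure of a produced alignment against a manually built reference alignment. Below is a minimal Python sketch of that computation; the triple representation of correspondences and the toy entity names are illustrative assumptions, not an OAEI data format.

```python
# Minimal sketch (not the authors' code) of the standard precision/recall
# evaluation used in ontology matching: a produced alignment is compared
# against a reference alignment. Correspondences are represented here as
# (source_entity, target_entity, relation) triples; this representation
# and the toy names below are assumptions made for illustration.

def precision_recall_fmeasure(found: set, reference: set):
    """Score a produced alignment against a reference alignment."""
    correct = len(found & reference)                    # correspondences in both sets
    precision = correct / len(found) if found else 0.0  # share of found that is correct
    recall = correct / len(reference) if reference else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Hypothetical toy alignments between two ontologies o1 and o2:
reference = {("o1#Person", "o2#Human", "="), ("o1#writes", "o2#authors", "=")}
found = {("o1#Person", "o2#Human", "="), ("o1#Paper", "o2#Article", "=")}

p, r, f = precision_recall_fmeasure(found, reference)
print(f"precision={p:.2f}  recall={r:.2f}  f-measure={f:.2f}")
# -> precision=0.50  recall=0.50  f-measure=0.50
```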




This entry is part of the university bibliography.



