Ontology Alignment Evaluation Initiative: six years of experience
Euzenat, Jérôme; Meilicke, Christian; Stuckenschmidt, Heiner; Shvaiko, Pavel; Trojahn, Cássia
DOI: https://doi.org/10.1007/978-3-642-22630-4_6
URL: https://link.springer.com/chapter/10.1007/978-3-64...
Additional URL: http://publications.wim.uni-mannheim.de/informatik...
Document type: Book chapter
Year of publication: 2011
Book title: Journal on Data Semantics XV
Series title: Lecture Notes in Computer Science
Volume: 6720
Page range: 158-192
Editor: Spaccapietra, Stefano
Place of publication: Berlin [et al.]
Publisher: Springer
ISBN: 978-3-642-22629-8
ISSN: 0302-9743, 1611-3349
Language of publication: English
Institution: Non-faculty institutions > Institut für Enterprise Systems (InES); Fakultät für Wirtschaftsinformatik und Wirtschaftsmathematik > Practical Computer Science II: Artificial Intelligence (Stuckenschmidt 2009-)
Subject area: 004 Computer science
Keywords (English): Benchmark, Ontology Matching, Evaluation
Abstract: In the area of semantic technologies, benchmarking and systematic evaluation are not yet as established as in other areas of computer science, e.g., information retrieval. In spite of successful attempts, more effort and experience are required in order to achieve such a level of maturity. In this paper, we report results and lessons learned from the Ontology Alignment Evaluation Initiative (OAEI), a benchmarking initiative for ontology matching. The goal of this work is twofold: on the one hand, we document the state of the art in evaluating ontology matching methods and provide potential participants of the initiative with a better understanding of the design and the underlying principles of the OAEI campaigns. On the other hand, we report experiences gained in this particular area of semantic technologies to potential developers of benchmarking for other kinds of systems. For this purpose, we describe the evaluation design used in the OAEI campaigns in terms of datasets, evaluation criteria and workflows, provide a global view on the results of the campaigns carried out from 2005 to 2010, and discuss upcoming trends, both specific to ontology matching and generally relevant for the evaluation of semantic technologies. Finally, we argue that there is a need for further automation of benchmarking to shorten the feedback cycle for tool developers.
This entry is part of the university bibliography.
ORCID: Meilicke, Christian: https://orcid.org/0000-0002-0198-5396; Stuckenschmidt, Heiner: https://orcid.org/0000-0002-0209-3859