Profiling entity matching benchmark tasks
Primpeli, Anna; Bizer, Christian
DOI: https://doi.org/10.1145/3340531.3412781
URL: https://dl.acm.org/doi/10.1145/3340531.3412781
Document type: Conference publication
Year of publication: 2020
Book title: CIKM '20: Proceedings of the 29th ACM International Conference on Information & Knowledge Management
Page range: 3101-3108
Event title: CIKM '20
Event location: Online
Event date: 19-23 October 2020
Editor: D'Aquin, Mathieu
Place of publication: New York, NY
Publisher: Association for Computing Machinery
ISBN: 978-1-4503-6859-9
Language of publication: English
Institution: Fakultät für Wirtschaftsinformatik und Wirtschaftsmathematik > Information Systems V: Web-based Systems (Bizer 2012-)
Subject area: 004 Computer science
Keywords (English): profiling, baseline evaluation, reproducibility, entity matching, benchmarking
Abstract:
Entity matching is a central task in data integration which has been researched for decades. Over this time, a wide range of benchmark tasks for evaluating entity matching methods has been developed. This resource paper systematically complements, profiles, and compares 21 entity matching benchmark tasks. In order to better understand the specific challenges associated with different tasks, we define a set of profiling dimensions which capture central aspects of the matching tasks. Using these dimensions, we create groups of benchmark tasks having similar characteristics. Afterwards, we assess the difficulty of the tasks in each group by computing baseline evaluation results using standard feature engineering together with two common classification methods. In order to enable the exact reproducibility of evaluation results, matching tasks need to contain exactly defined sets of matching and non-matching record pairs, as well as a fixed development and test split. As this is not the case for some widely-used benchmark tasks, we complement these tasks with fixed sets of non-matching pairs, as well as fixed splits, and provide the resulting development and test sets for public download. By profiling and complementing the benchmark tasks, we support researchers in selecting challenging as well as diverse tasks and in comparing matching systems on clearly defined grounds.
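To illustrate the kind of baseline evaluation the abstract refers to, the sketch below combines simple per-attribute string-similarity features with one common classifier (a random forest) and evaluates on a fixed development/test split. The file names, attribute names, similarity measure, and classifier are illustrative assumptions and not the paper's exact feature engineering or methods.

# Minimal baseline sketch: per-attribute string similarity + a standard classifier.
# File names, attribute names, and the similarity choice are assumptions for
# illustration; they do not reproduce the paper's exact setup.
from difflib import SequenceMatcher

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def string_sim(a, b):
    # Character-based similarity in [0, 1].
    return SequenceMatcher(None, str(a), str(b)).ratio()

def featurize(pairs, attributes):
    # One similarity feature per shared attribute of each record pair.
    feats = {
        attr: [string_sim(l, r)
               for l, r in zip(pairs[f"left_{attr}"], pairs[f"right_{attr}"])]
        for attr in attributes
    }
    return pd.DataFrame(feats)

# "dev.csv" / "test.csv" stand in for the fixed development and test splits
# that the paper provides per benchmark task (file names assumed here).
dev = pd.read_csv("dev.csv")
test = pd.read_csv("test.csv")
attributes = ["title", "manufacturer", "price"]  # example attributes

X_dev, y_dev = featurize(dev, attributes), dev["label"]
X_test, y_test = featurize(test, attributes), test["label"]

clf = RandomForestClassifier(random_state=42).fit(X_dev, y_dev)
print("Test F1:", f1_score(y_test, clf.predict(X_test)))

Because the matching and non-matching pairs and the splits are fixed, a baseline of this form can be rerun on each task group and its results compared directly across systems.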
|
This entry is part of the university bibliography.