Entity matching using large language models
Peeters, Ralph; Steiner, Aaron; Bizer, Christian
DOI: https://doi.org/10.48786/edbt.2025.42
URL: https://openproceedings.org/2025/conf/edbt/paper-8...
URN: urn:nbn:de:bsz:180-madoc-681857
Document type: Conference publication
Year of publication: 2025
Book title: Proceedings 28th International Conference on Extending Database Technology (EDBT 2025), Barcelona, Spain, March 25-March 28
Journal or series title: OpenProceedings
Volume: 2, Experiments & Analyses Track
Page range: 529-541
Event title: EDBT 2025, 28th International Conference on Extending Database Technology
Event location: Barcelona, Spain
Event date: March 25-28, 2025
Place of publication: Konstanz
Publisher: OpenProceedings.org
ISBN: 978-3-89318-098-1
ISSN: 2367-2005
Publication language: English
Institution: School of Business Informatics and Mathematics > Information Systems V: Web-based Systems (Bizer 2012-)
Pre-existing license: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
Subject: 004 Computer science
Keywords (English): entity matching, identity resolution, large language models
Abstract: Entity matching is the task of deciding whether two entity descriptions refer to the same real-world entity. It is a central step in most data integration pipelines. Many state-of-the-art entity matching methods rely on pre-trained language models (PLMs) such as BERT or RoBERTa. Two major drawbacks of these models for entity matching are that (i) they require significant amounts of task-specific training data and (ii) the fine-tuned models are not robust to out-of-distribution entities. This paper investigates using generative large language models (LLMs) as an alternative to PLM-based matchers that is less dependent on task-specific training data and more robust. The study covers hosted LLMs as well as open-source LLMs that can be run locally. We evaluate these models in a zero-shot scenario and in a scenario where task-specific training data is available. We compare different prompt designs and the prompt sensitivity of the models and show that there is no single best prompt; instead, the prompt needs to be tuned for each model/dataset combination. We further investigate (i) the selection of in-context demonstrations, (ii) the generation of matching rules, and (iii) fine-tuning LLMs using the same pool of training data. Our experiments show that the best LLMs require no or only a few training examples to perform comparably to PLMs that were fine-tuned using thousands of examples. LLM-based matchers further exhibit higher robustness to unseen entities. We show that GPT-4 can generate structured explanations for matching decisions and can automatically identify potential causes of matching errors by analyzing explanations of wrong decisions. We demonstrate that the model can generate meaningful textual descriptions of the identified error classes, which can help data engineers improve entity matching pipelines.
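To illustrate the zero-shot prompting setup that the abstract describes, the sketch below frames a pair of entity descriptions as a yes/no question to a chat LLM via the OpenAI Python client. The prompt wording, the model name, and the example product records are illustrative assumptions, not the exact configuration evaluated in the paper.

```python
# Minimal zero-shot entity matching sketch (illustrative; not the
# authors' exact prompt or model choice).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def match(record_a: str, record_b: str) -> bool:
    """Ask the LLM whether two entity descriptions refer to the same entity."""
    prompt = (
        "Do the two entity descriptions refer to the same real-world entity? "
        "Answer with 'Yes' or 'No'.\n"
        f"Entity 1: {record_a}\n"
        f"Entity 2: {record_b}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        temperature=0,        # deterministic matching decisions
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip().lower().startswith("yes")


# Example: two product offers that differ in wording but describe the same item.
print(match(
    "DYMO D1 Tape 12mm x 7m, black on white",
    "Dymo D1 label cassette, 12 mm, 7 m, black/white",
))
```

A few-shot variant of this sketch would prepend labeled example pairs, the in-context demonstrations studied in the paper, as additional messages before the query.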
This entry is part of the university bibliography.
The document is provided by the publication server of the University Library of Mannheim.
ORCID: Peeters, Ralph (https://orcid.org/0000-0003-3174-2616); Steiner, Aaron; Bizer, Christian (https://orcid.org/0000-0003-2367-0237)