Fine-tuning large language models for entity matching


Steiner, Aaron; Peeters, Ralph; Bizer, Christian



DOI: https://doi.org/10.1109/ICDEW67478.2025.00006
URL: https://ieeexplore.ieee.org/document/11107461
Document Type: Conference or workshop publication
Year of publication: 2025
Book title: 2025 IEEE 41st International Conference on Data Engineering Workshops: proceedings, 19-23 May 2025, Hong Kong SAR, China
Page range: 9-17
Conference title: ICDEW 2025, 2025 IEEE 41st International Conference on Data Engineering Workshops (ICDEW), First Workshop on Data-AI Systems (DAIS 25)
Location of the conference venue: Hong Kong SAR, China
Date of the conference: 19-23 May 2025
Place of publication: Los Alamitos, CA [et al.]
Publishing house: IEEE Computer Society
ISBN: 979-8-3315-9959-1
Publication language: English
Institution: School of Business Informatics and Mathematics > Information Systems V: Web-based Systems (Bizer 2012-)
Subject: 004 Computer science, internet
Keywords (English): entity matching, identity resolution, large language models, fine-tuning
Abstract: Generative large language models (LLMs) are a promising alternative to pre-trained language models for entity matching due to their high zero-shot performance and their ability to generalize to unseen entities. Existing research on using LLMs for entity matching has focused on prompt engineering and in-context learning. This paper explores the potential of fine-tuning LLMs for entity matching. We analyze fine-tuning along two dimensions: 1) the representation of training examples, where we experiment with adding different types of LLM-generated explanations to the training set, and 2) the selection and generation of training examples using LLMs. In addition to the matching performance on the source dataset, we investigate how fine-tuning affects the model's ability to generalize to other in-domain datasets as well as across topical domains. Our experiments show that fine-tuning significantly improves the performance of the smaller models, while the results for the larger models are mixed. Fine-tuning also improves generalization to in-domain datasets while hurting cross-domain transfer. We show that adding structured explanations to the training set has a positive impact on the performance of three out of four LLMs, while the proposed example selection and generation methods only improve the performance of Llama 3.1 8B and decrease the performance of GPT-4o-mini.
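For illustration, the following is a minimal Python sketch of how an entity-matching training example, optionally augmented with a structured explanation, might be serialized into chat-format JSONL for an LLM fine-tuning API. The prompt wording, attribute names, and explanation format are assumptions chosen for this sketch; they are not taken from the paper.

import json

def serialize_entity(entity: dict) -> str:
    """Serialize a record as 'attribute: value' pairs separated by semicolons."""
    return "; ".join(f"{k}: {v}" for k, v in entity.items())

def build_example(entity_a: dict, entity_b: dict, label: bool,
                  explanation: str | None = None) -> dict:
    """Build one chat-format fine-tuning example (JSONL 'messages' style).

    If an explanation is given, it is prepended to the target answer,
    mirroring the idea of explanation-augmented training examples.
    """
    user_msg = (
        "Do the two entity descriptions refer to the same real-world product?\n"
        f"Entity A: {serialize_entity(entity_a)}\n"
        f"Entity B: {serialize_entity(entity_b)}\n"
        "Answer with 'Yes' or 'No'."
    )
    answer = "Yes" if label else "No"
    assistant_msg = f"{explanation}\nAnswer: {answer}" if explanation else answer
    return {
        "messages": [
            {"role": "user", "content": user_msg},
            {"role": "assistant", "content": assistant_msg},
        ]
    }

# Hypothetical record pair and explanation, for illustration only.
a = {"title": "Canon EOS 2000D DSLR Camera", "brand": "Canon"}
b = {"title": "Canon EOS Rebel T7 DSLR", "brand": "Canon"}
expl = "- brand: match\n- title: same model sold under different regional names"

with open("train.jsonl", "w") as f:
    f.write(json.dumps(build_example(a, b, label=True, explanation=expl)) + "\n")

Each line of the resulting train.jsonl file holds one matching decision; a fine-tuning run would consume the whole file as its training set.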




This entry is part of the university bibliography.



