Column property annotation using large language models
Korini, Keti; Bizer, Christian

DOI: https://doi.org/10.1007/978-3-031-78952-6_6
URL: https://link.springer.com/chapter/10.1007/978-3-03...
Additional URL: https://www.researchgate.net/publication/388437234...
Document Type: Conference or workshop publication
Year of publication: 2025
Book title: The Semantic Web: ESWC 2024 Satellite Events: Hersonissos, Crete, Greece, May 26–30, 2024, Proceedings, Part I
Series: Lecture Notes in Computer Science
Volume: 15344
Page range: 61–70
Conference title: Extended Semantic Web Conference (ESWC 2024)
Location of the conference venue: Hersonissos, Crete, Greece
Date of the conference: May 26–30, 2024
Editors: Meroño Peñuela, Albert; Corcho, Oscar; Groth, Paul; Simperl, Elena; Tamma, Valentina; Nuzzolese, Andrea Giovanni; Poveda-Villalón, Maria; Sabou, Marta; Presutti, Valentina; Celino, Irene; Revenko, Artem; Raad, Joe; Sartini, Bruno; Lisena, Pasquale
Place of publication: Berlin [et al.]
Publishing house: Springer
ISBN: 978-3-031-78951-9, 978-3-031-78952-6
ISSN: 0302-9743, 1611-3349
Publication language: English
Institution: School of Business Informatics and Mathematics > Information Systems V: Web-based Systems (Bizer 2012-)
Subject: 004 Computer science, internet
Keywords (English): table annotation, large language models, column property annotation
Abstract: Column property annotation (CPA), also known as column relationship prediction, is the task of predicting the semantic relationship between two columns in a table given a set of candidate relationships. CPA annotations are used in downstream tasks such as data search, data integration, or knowledge graph enrichment. This paper explores the usage of generative large language models (LLMs) for the CPA task. We experiment with different zero-shot prompts for the CPA task, which we evaluate using GPT-3.5, GPT-4, and the open-source model SOLAR. We find GPT-3.5 to be quite sensitive to variations of the prompt, while GPT-4 reaches a high performance independent of the variation of the prompt. We further explore the scenario where training data for the CPA task is available and can be used for selecting demonstrations or fine-tuning the model. We show that a fine-tuned GPT-3.5 model outperforms a RoBERTa model that was fine-tuned on the same data by 11% in F1. Comparing in-context learning via demonstrations and fine-tuning shows that the fine-tuned GPT-3.5 performs 9% F1 better than the same model given demonstrations. The fine-tuned GPT-3.5 model also outperforms zero-shot GPT-4 by around 2% F1 on the dataset on which it was fine-tuned, while not generalizing to tasks that require a different vocabulary.
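The record does not reproduce the prompts used in the experiments. As a rough sketch of the zero-shot setup described in the abstract, the Python snippet below serializes a pair of table columns together with a set of candidate relationships into a single prompt and sends it to GPT-3.5 via the OpenAI chat API. The column values, candidate labels, and instruction wording are illustrative assumptions, not the authors' actual prompt design.

# Hypothetical zero-shot CPA prompt; a sketch, not the paper's actual prompt design.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Illustrative input: two table columns and a set of candidate
# relationships, e.g. drawn from a knowledge-graph vocabulary.
subject_column = ["Inception", "The Matrix", "Alien"]
object_column = ["Christopher Nolan", "Lana Wachowski", "Ridley Scott"]
candidates = ["director", "actor", "producer", "author"]

# Serialize the column pair and candidates into one instruction.
prompt = (
    "Classify the relationship between the two table columns below.\n"
    f"Column 1: {', '.join(subject_column)}\n"
    f"Column 2: {', '.join(object_column)}\n"
    f"Answer with exactly one of: {', '.join(candidates)}.\n"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output for annotation tasks
)
print(response.choices[0].message.content)  # e.g. "director"

Constraining the answer to one of the listed candidate relationships keeps the model output easy to parse, which matters when annotating many column pairs in batch.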
This entry is part of the university bibliography.