ExtractGPT: Exploring the potential of Large Language Models for product attribute value extraction
Brinkmann, Alexander; Shraga, Roee; Bizer, Christian
Document type: Conference publication
Year of publication: 2025
Book title: Information integration and web intelligence : 26th International Conference, iiWAS 2024, Bratislava, Slovak Republic, December 2–4, 2024, Proceedings. Part I
Title of journal or series: Lecture Notes in Computer Science
Volume: 15342
Page range: 38-52
Conference title: iiWAS, International Conference on Information Integration and Web Intelligence
Conference location: Bratislava, Slovakia
Conference date: December 2-4, 2024
Editors: Delir Haghighi, Pari; Greguš, Michal; Kotsis, Gabriele; Khalil, Ismail
Place of publication: Berlin [et al.]
Publisher: Springer
ISBN: 978-3-031-78090-5
ISSN: 0302-9743, 1611-3349
Language of publication: English
Institution: Fakultät für Wirtschaftsinformatik und Wirtschaftsmathematik > Information Systems V: Web-based Systems (Bizer 2012-)
Subject area: 004 Computer Science
Keywords (English): information extraction, product attribute value extraction, Large Language Models
Abstract: E-commerce platforms require structured product data in the form of attribute-value pairs to offer features such as faceted product search or attribute-based product comparison. However, vendors often provide unstructured product descriptions, necessitating the extraction of attribute-value pairs from these texts. BERT-based extraction methods require large amounts of task-specific training data and struggle with unseen attribute values. This paper explores using large language models (LLMs) as a more training-data-efficient and robust alternative. We propose prompt templates for zero-shot and few-shot scenarios, comparing textual and JSON-based target schema representations. Our experiments show that GPT-4 achieves the highest average F1-score of 85% using detailed attribute descriptions and demonstrations. Llama-3-70B performs nearly as well, offering a competitive open-source alternative. GPT-4 surpasses the best PLM baseline by 5% in F1-score. Fine-tuning GPT-3.5 increases the performance to the level of GPT-4 but reduces the model's ability to generalize to unseen attribute values.
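To illustrate the kind of approach the abstract describes, the following is a minimal sketch of zero-shot product attribute value extraction with a JSON-based target schema. It is not the authors' actual prompt template; the model name, attribute descriptions, example offer, and response-parsing details are illustrative assumptions, and it presumes the OpenAI Python client (version 1.x) with an API key configured in the environment.

# Illustrative sketch only: NOT the paper's prompt template.
# Assumes the OpenAI Python client (>=1.0) and OPENAI_API_KEY set in the environment.
import json
from openai import OpenAI

client = OpenAI()

# JSON-based target schema: attribute names with short descriptions (placeholders).
target_schema = {
    "Brand": "Manufacturer or brand name of the product",
    "Color": "Main color of the product",
    "Screen Size": "Diagonal screen size including unit, e.g. '15.6 inch'",
}

# Unstructured product offer from which attribute values are extracted (made-up example).
product_offer = 'Lenovo ThinkPad E15 Gen 4, 15.6" FHD, 16 GB RAM, Storm Grey'

system_msg = (
    "You are a product data expert. Extract attribute values from the product offer. "
    "Respond with a JSON object that uses exactly the attribute names from the target "
    'schema as keys. If a value is not mentioned, use "n/a".'
)
user_msg = (
    f"Target schema:\n{json.dumps(target_schema, indent=2)}\n\n"
    f"Product offer:\n{product_offer}"
)

response = client.chat.completions.create(
    model="gpt-4",        # placeholder; any chat completion model can be substituted
    temperature=0,        # deterministic output for extraction
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ],
)

# Parse the model's JSON answer (assumes the model returns bare JSON as instructed).
extracted = json.loads(response.choices[0].message.content)
print(extracted)  # e.g. {"Brand": "Lenovo", "Color": "Storm Grey", "Screen Size": "15.6 inch"}

A few-shot variant would prepend demonstration pairs of product offers and their expected JSON outputs to the messages; the paper compares such demonstrations and detailed attribute descriptions against this plain zero-shot setup.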
This entry is part of the university bibliography.