ExtractGPT: Exploring the potential of Large Language Models for product attribute value extraction


Brinkmann, Alexander; Shraga, Roee; Bizer, Christian



Document Type: Conference or workshop publication
Year of publication: 2025
Book title: Information integration and web intelligence : 26th International Conference, iiWAS 2024, Bratislava, Slovak Republic, December 2–4, 2024, Proceedings. Part I
Journal or publication series: Lecture Notes in Computer Science
Volume: 15342
Page range: 38-52
Conference title: iiWAS, International Conference on Information Integration and Web Intelligence
Location of the conference venue: Bratislava, Slovakia
Date of the conference: December 2–4, 2024
Editors: Delir Haghighi, Pari; Greguš, Michal; Kotsis, Gabriele; Khalil, Ismail
Place of publication: Berlin [et al.]
Publisher: Springer
ISBN: 978-3-031-78090-5
ISSN: 0302-9743, 1611-3349
Publication language: English
Institution: School of Business Informatics and Mathematics > Information Systems V: Web-based Systems (Bizer 2012-)
Subject: 004 Computer science, internet
Keywords (English): information extraction, product attribute value extraction, Large Language Models
Abstract: E-commerce platforms require structured product data in the form of attribute-value pairs to offer features such as faceted product search or attribute-based product comparison. However, vendors often provide unstructured product descriptions, necessitating the extraction of attribute-value pairs from these texts. BERT-based extraction methods require large amounts of task-specific training data and struggle with unseen attribute values. This paper explores using large language models (LLMs) as a more training-data efficient and robust alternative. We propose prompt templates for zero-shot and few-shot scenarios, comparing textual and JSON-based target schema representations. Our experiments show that GPT-4 achieves the highest average F1-score of 85% using detailed attribute descriptions and demonstrations. Llama-3-70B performs nearly as well, offering a competitive open-source alternative. GPT-4 surpasses the best PLM baseline by 5% in F1-score. Fine-tuning GPT-3.5 increases the performance to the level of GPT-4 but reduces the model's ability to generalize to unseen attribute values.
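The zero-shot prompting setup described in the abstract, pairing a JSON-based target schema with attribute descriptions, can be illustrated with a minimal sketch. The attribute names, product offer text, prompt wording, and helper functions below are invented for illustration and do not reproduce the paper's actual templates; the model call is mocked rather than sent to an LLM.

```python
import json

# Hypothetical target schema: attribute names mapped to the detailed
# attribute descriptions the abstract says improve extraction quality.
TARGET_SCHEMA = {
    "Brand": "The manufacturer of the product.",
    "Color": "The main color of the product.",
    "Capacity": "The storage capacity, including its unit.",
}

def build_zero_shot_prompt(offer_text: str, schema: dict) -> str:
    """Compose a zero-shot extraction prompt with a JSON target schema."""
    schema_json = json.dumps(schema, indent=2)
    return (
        "Extract the following attributes from the product offer. "
        "Answer with a JSON object mapping each attribute to its value, "
        "or to null if the value is not mentioned.\n\n"
        f"Target schema:\n{schema_json}\n\n"
        f"Product offer: {offer_text}"
    )

def parse_response(raw: str, schema: dict) -> dict:
    """Parse the model's JSON answer, keeping only schema attributes."""
    extracted = json.loads(raw)
    return {attr: extracted.get(attr) for attr in schema}

prompt = build_zero_shot_prompt(
    "SanDisk Ultra 128GB microSD card, black", TARGET_SCHEMA
)
# A model reply might look like this (mocked here instead of an API call):
mock_reply = '{"Brand": "SanDisk", "Color": "black", "Capacity": "128GB"}'
print(parse_response(mock_reply, TARGET_SCHEMA))
# → {'Brand': 'SanDisk', 'Color': 'black', 'Capacity': '128GB'}
```

A few-shot variant of the same sketch would prepend demonstration pairs of offer text and the expected JSON answer before the final offer.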




This entry is part of the university bibliography.



