Supervised contrastive learning for product matching
Peeters, Ralph; Bizer, Christian
DOI: https://doi.org/10.1145/3487553.3524254
URL: https://dl.acm.org/doi/abs/10.1145/3487553.3524254
URN: urn:nbn:de:bsz:180-madoc-626614
Document type: Conference publication
Year of publication: 2022
Book title: Companion Proceedings of the Web Conference 2022
Page range: 248-251
Event title: WWW '22
Event location: Lyon, France, Online
Event date: 25-29 April 2022
Editors: Laforest, Frédérique; Troncy, Raphaël
Place of publication: New York, NY
Publisher: ACM
ISBN: 978-1-4503-9130-6
Language of publication: English
Institution: School of Business Informatics and Mathematics > Information Systems V: Web-based Systems (Bizer 2012-)
Pre-existing license: Creative Commons Attribution 4.0 International (CC BY 4.0)
Subject area: 004 Computer science
Subject classification: CCS: Information systems → Entity resolution; Data extraction and integration
Keywords (English): e-commerce, product matching, entity matching, contrastive learning, transformers
Abstract:
Contrastive learning has moved the state of the art for many tasks in computer vision and information retrieval in recent years. This poster is the first work that applies supervised contrastive learning to the task of product matching in e-commerce using product offers from different e-shops. More specifically, we employ a supervised contrastive learning technique to pre-train a Transformer encoder which is afterwards fine-tuned for the matching task using pair-wise training data. We further propose a source-aware sampling strategy that enables contrastive learning to be applied for use cases in which the training data does not contain product identifiers. We show that applying supervised contrastive pre-training in combination with source-aware sampling significantly improves the state-of-the-art performance on several widely used benchmarks: for Abt-Buy, we reach an F1-score of 94.29 (+3.24 compared to the previous state of the art), and for Amazon-Google 79.28 (+3.7). For the WDC Computers datasets, we reach improvements between +0.8 and +8.84 in F1-score depending on the training set size. Further experiments with data augmentation and self-supervised contrastive pre-training show that the former can be helpful for smaller training sets, while the latter leads to a significant decline in performance due to inherent label noise. We thus conclude that contrastive pre-training has high potential for product matching use cases in which explicit supervision is available.
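The supervised contrastive objective described in the abstract can be illustrated with a short sketch. Note this is a generic implementation of the SupCon loss (offers sharing a product label act as positives for one another), not the authors' actual code; the function name, temperature value, and toy data are assumptions for illustration.

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.07):
    """Generic supervised contrastive (SupCon) loss sketch: offers with
    the same product label are pulled together in embedding space,
    all other offers in the batch act as negatives."""
    # L2-normalize embeddings so the dot product is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                 # pairwise similarity matrix
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim[self_mask] = -np.inf                    # exclude self-similarity
    # positives: same product label, excluding the anchor itself
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # log-softmax over each row (all other offers in the batch)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # average negative log-likelihood over each anchor's positives
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1)
    per_anchor /= np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()

# toy batch: 4 offers describing two products (labels 0 and 1)
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
labels = np.array([0, 0, 1, 1])
print(supcon_loss(emb, labels))
```

The source-aware sampling proposed in the paper would additionally control which offers may appear in the same batch when product identifiers are unavailable; that step is not shown here.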
This entry is part of the university bibliography.
The document is provided by the publication server of the University Library Mannheim.