Improving hierarchical product classification using domain-specific language modelling
Brinkmann, Alexander; Bizer, Christian
URL: http://sites.computer.org/debull/A21june/p14.pdf
Additional URL: http://sites.computer.org/debull/A21june/issue1.ht...
Document type: Journal article
Year of publication: 2021
Journal or series title: Bulletin of the Technical Committee on Data Engineering / IEEE Computer Society
Volume: 44
Issue: 2
Page range: 14-25
Place of publication: Washington, DC
Publisher: IEEE Computer Soc.
Language of publication: English
Institution: Fakultät für Wirtschaftsinformatik und Wirtschaftsmathematik > Information Systems V: Web-based Systems (Bizer 2012-)
Subject area: 004 Computer science
Free keywords (English): e-commerce, product categorization, deep learning, language modelling
Abstract:
In order to deliver a coherent user experience, product aggregators such as marketplaces or price portals integrate product offers from many web shops into a single product categorization hierarchy. Recently, transformer models have shown remarkable performance on various NLP tasks. These models are pre-trained on huge cross-domain text corpora using self-supervised learning and fine-tuned afterwards for specific downstream tasks. Research from other application domains indicates that additional self-supervised pre-training using domain-specific text corpora can further increase downstream performance without requiring additional task-specific training data. In this paper, we first show that transformers outperform a more traditional fastText-based classification technique on the task of assigning product offers from different web shops to a product hierarchy. Afterwards, we investigate whether it is possible to improve the performance of the transformer models by performing additional self-supervised pre-training using different corpora of product offers, which were extracted from the Common Crawl. Our experiments show that by using large numbers of related product offers for masked language modelling, it is possible to increase the performance of the transformer models by 1.22% in wF1 and 1.36% in hF1, reaching a performance of nearly 89% wF1.
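The abstract describes a two-step recipe: additional masked language modelling on domain-specific product offers, followed by fine-tuning for product category classification. The following is a minimal sketch of that recipe using the Hugging Face transformers and datasets libraries; the base model ("roberta-base"), the file names, the label format, and all hyperparameters are illustrative assumptions and are not taken from the paper.

```python
# Sketch of domain-adaptive MLM pre-training followed by category fine-tuning.
# Assumptions: product_offers.txt has one offer text per line; labelled_offers.csv
# has "text" and "label" columns with integer category ids.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# 1) Additional self-supervised pre-training (masked language modelling)
#    on a corpus of product offers, e.g. extracted from the Common Crawl.
offers = load_dataset("text", data_files={"train": "product_offers.txt"})["train"]
offers = offers.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)
mlm_model = AutoModelForMaskedLM.from_pretrained("roberta-base")
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)
Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="offers-mlm", num_train_epochs=1,
                           per_device_train_batch_size=32),
    train_dataset=offers,
    data_collator=collator,
).train()
mlm_model.save_pretrained("offers-mlm")
tokenizer.save_pretrained("offers-mlm")

# 2) Fine-tune the domain-adapted encoder as a classifier over the
#    categories of the product hierarchy using labelled offers.
labelled = load_dataset("csv", data_files={"train": "labelled_offers.csv"})["train"]
labelled = labelled.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)
clf = AutoModelForSequenceClassification.from_pretrained(
    "offers-mlm", num_labels=len(set(labelled["label"]))
)
Trainer(
    model=clf,
    args=TrainingArguments(output_dir="offers-clf", num_train_epochs=3,
                           per_device_train_batch_size=32),
    train_dataset=labelled,
).train()
```

The reported wF1 (weighted F1 over categories) and hF1 (hierarchical F1, which credits partially correct predictions along the hierarchy path) would then be computed on the predictions of the fine-tuned classifier.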
This entry is part of the university bibliography.