Recognizing grabbing actions from inertial and video sensor data in a warehouse scenario
Diete, Alexander; Sztyler, Timo; Weiland, Lydia; Stuckenschmidt, Heiner
DOI: https://doi.org/10.1016/j.procs.2017.06.071
URL: https://www.researchgate.net/publication/318382869...
Further URL: http://www.sciencedirect.com/science/article/pii/S...
Document type: Conference publication
Year of publication: 2017
Book title: 14th International Conference on Mobile Systems and Pervasive Computing (MobiSPC 2017) / 12th International Conference on Future Networks and Communications (FNC 2017) / Affiliated Workshops
Journal or series title: Procedia Computer Science
Volume: 110
Page range: 16-23
Event title: The 14th International Conference on Mobile Systems and Pervasive Computing, MobiSPC 2017
Event location: Leuven, Belgium
Event date: July 24-26, 2017
Editor: Shakshuki, Elhadi
Place of publication: Amsterdam [et al.]
Publisher: Elsevier
ISSN: 1877-0509
Publication language: English
Institution: School of Business Informatics and Mathematics > Practical Computer Science II: Artificial Intelligence (Stuckenschmidt 2009-)
Subject area: 004 Computer science
Abstract:
Modern industries are increasingly adopting smart devices to aid and improve their productivity and workflow. This includes logistics in warehouses, where validation of the correct items per order can be enhanced with mobile devices. Since handling incorrect orders accounts for a large share of warehouse operating costs, errors such as missed or wrong items should be avoided. Thus, early identification of picking procedures and of the items picked is beneficial for reducing these errors. By using data glasses and a smartwatch, we aim to reduce these errors while also enabling the picker to work hands-free. In this paper, we present an analysis of feature sets for the classification of grabbing actions in the order picking process. For this purpose, we created a dataset containing inertial data and egocentric video from four participants performing picking tasks. As we previously worked with logistics companies, we modeled our test scenario closely on real-world warehouse environments. We then extracted features from the time and frequency domain for the inertial data, and color and descriptor features from the image data, to learn grabbing actions. We were able to show that the combination of inertial and video data enables us to recognize grabbing actions in a picking scenario. We also show that combining different sensors improves the results, yielding an F-measure of 85.3% for recognizing grabbing actions.
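The abstract mentions extracting time- and frequency-domain features from inertial sensor windows. The paper's exact feature set is not listed here; the following is a minimal illustrative sketch of that general technique, with window length, sampling rate, and the specific statistics chosen as assumptions rather than taken from the paper.

```python
import numpy as np

def inertial_features(window, fs=50.0):
    """Compute simple time- and frequency-domain features for one axis
    of a windowed inertial signal (e.g. smartwatch accelerometer).
    fs is the sampling rate in Hz; both fs and the feature choices are
    illustrative assumptions, not the authors' published pipeline."""
    feats = {
        # time-domain summary statistics
        "mean": float(np.mean(window)),
        "std": float(np.std(window)),
        "min": float(np.min(window)),
        "max": float(np.max(window)),
    }
    # frequency domain: magnitude spectrum of the mean-removed window
    spectrum = np.abs(np.fft.rfft(window - np.mean(window)))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    # dominant frequency (skip the DC bin) and total spectral energy
    feats["dominant_freq"] = float(freqs[np.argmax(spectrum[1:]) + 1])
    feats["spectral_energy"] = float(np.sum(spectrum ** 2) / len(window))
    return feats
```

Features like these would typically be computed per sliding window and per sensor axis, then concatenated with the video-derived features before classification.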
This entry is part of the university bibliography.
ORCID: Stuckenschmidt, Heiner: https://orcid.org/0000-0002-0859