Recognizing grabbing actions from inertial and video sensor data in a warehouse scenario


Diete, Alexander ; Sztyler, Timo ; Weiland, Lydia ; Stuckenschmidt, Heiner



DOI: https://doi.org/10.1016/j.procs.2017.06.071
URL: https://www.researchgate.net/publication/318382869...
Additional URL: http://www.sciencedirect.com/science/article/pii/S...
Document Type: Conference or workshop publication
Year of publication: 2017
Book title: 14th International Conference on Mobile Systems and Pervasive Computing (MobiSPC 2017) / 12th International Conference on Future Networks and Communications (FNC 2017) / Affiliated Workshops
Journal / publication series: Procedia Computer Science
Volume: 110
Page range: 16-23
Conference title: The 14th International Conference on Mobile Systems and Pervasive Computing, MobiSPC 2017
Conference location: Leuven, Belgium
Conference date: July 24-26, 2017
Editor: Shakshuki, Elhadi
Place of publication: Amsterdam [et al.]
Publishing house: Elsevier
ISSN: 1877-0509
Publication language: English
Institution: School of Business Informatics and Mathematics > Practical Computer Science II: Artificial Intelligence (Stuckenschmidt 2009-)
Subject: 004 Computer science, internet
Abstract: Modern industries are increasingly adopting smart devices to aid and improve their productivity and workflow. This includes logistics in warehouses, where the validation of correct items per order can be enhanced with mobile devices. Since handling incorrect orders accounts for a large part of warehouse operating costs, errors such as missed or wrong items should be avoided. Early identification of picking procedures and of the items picked therefore helps to reduce these errors. By using data glasses and a smartwatch, we aim to reduce these errors while also enabling the picker to work hands-free. In this paper, we present an analysis of feature sets for the classification of grabbing actions in the order picking process. For this purpose, we created a dataset containing inertial data and egocentric video from four participants performing picking tasks. Drawing on our previous work with logistics companies, we modeled our test scenario closely on real-world warehouse environments. We then extracted time- and frequency-domain features from the inertial data and color and descriptor features from the image data to learn grabbing actions. We show that combining inertial and video data enables the recognition of grabbing actions in a picking scenario, and that fusing the different sensors improves the results, yielding an F-measure of 85.3% for recognizing grabbing actions.
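
Illustrative sketch (not part of the publication): the abstract mentions extracting time- and frequency-domain features from the inertial data. The following Python snippet shows one plausible way to compute such features over a fixed window of accelerometer samples with NumPy; the window length, sampling rate, and feature set are assumptions made here for illustration and do not reflect the authors' implementation.

import numpy as np

def extract_features(window: np.ndarray, sample_rate: float = 50.0) -> dict:
    """Compute simple time- and frequency-domain features for one axis window."""
    # Time-domain statistics over the window
    features = {
        "mean": float(np.mean(window)),
        "std": float(np.std(window)),
        "min": float(np.min(window)),
        "max": float(np.max(window)),
    }
    # Frequency-domain features via the FFT magnitude spectrum
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate)
    features["dominant_freq"] = float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin
    features["spectral_energy"] = float(np.sum(spectrum ** 2) / len(window))
    return features

# Example: a one-second window of synthetic 2 Hz accelerometer data at 50 Hz
rng = np.random.default_rng(0)
window = np.sin(2 * np.pi * 2.0 * np.arange(50) / 50.0) + 0.1 * rng.standard_normal(50)
print(extract_features(window))

Such per-window feature vectors, combined with color and descriptor features from the video frames, would then be fed to a classifier to label each window as a grabbing action or not.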




This entry is part of the university bibliography.



