Fusing object information and inertial data for activity recognition


Diete, Alexander; Stuckenschmidt, Heiner


PDF: sensors-19-04119-v2.pdf (published version, 3 MB)

DOI: https://doi.org/10.3390/s19194119
URL: https://www.mdpi.com/1424-8220/19/19/4119
Additional URL: https://www.x-mol.com/paper/5866858
URN: urn:nbn:de:bsz:180-madoc-522750
Document Type: Article
Year of publication: 2019
Journal or publication series: Sensors
Volume: 19
Issue number: 19
Page range: 4119, 1-22
Place of publication: Basel
Publisher: MDPI
ISSN: 1424-8220
Publication language: English
Institution: School of Business Informatics and Mathematics > Practical Computer Science II: Artificial Intelligence (Stuckenschmidt 2009-)
Pre-existing license: Creative Commons Attribution 4.0 International (CC BY 4.0)
Subject: 004 Computer science, internet
Abstract: In the field of pervasive computing, wearable devices have been widely used for recognizing human activities. One important area in this research is the recognition of activities of daily living, where inertial sensors and interaction sensors (like RFID tags with scanners) are especially popular choices as data sources. Using interaction sensors, however, has one drawback: they may not differentiate between a proper interaction and the simple touching of an object. A positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction occurs afterwards. There are, however, many scenarios, like medicine intake, that rely heavily on correctly recognized activities. In our work, we aim to address this limitation and present a multimodal egocentric activity recognition approach. Our solution relies on object detection that recognizes activity-critical objects in a frame. As it is infeasible to always expect a high-quality camera view, we enrich the vision features with inertial sensor data that monitors the user's arm movement. This way, we try to overcome the drawbacks of each respective sensor. We present our results of combining inertial and video features to recognize human activities in different types of scenarios, where we achieve an F1-measure of up to 79.6. (An illustrative sketch of this kind of feature fusion is given after the record fields below.)
Additional information: Online resource
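The abstract describes an early-fusion setup: per-frame object-detection confidences for activity-critical objects are combined with features derived from inertial data before classification. The following Python sketch shows one minimal way such a fusion could look. It is not the authors' actual pipeline: the window size, the summary statistics, the random-forest classifier, and all function names are assumptions made purely for illustration.

    # Hypothetical sketch of multimodal feature fusion for activity recognition,
    # in the spirit of the approach described in the abstract. All names, window
    # sizes, and the classifier choice are illustrative assumptions, not the
    # authors' actual method.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def imu_window_features(accel: np.ndarray) -> np.ndarray:
        """Summary statistics over one window of tri-axial accelerometer data.

        accel: array of shape (n_samples, 3) for one time window.
        Returns a flat feature vector (mean, std, min, max per axis).
        """
        return np.concatenate([
            accel.mean(axis=0),
            accel.std(axis=0),
            accel.min(axis=0),
            accel.max(axis=0),
        ])

    def fuse_features(object_scores: np.ndarray, accel_window: np.ndarray) -> np.ndarray:
        """Early fusion: concatenate per-frame object-detection confidences
        (one score per activity-critical object class) with IMU statistics."""
        return np.concatenate([object_scores, imu_window_features(accel_window)])

    # Toy training data: random placeholders standing in for real detector
    # outputs and wrist-worn accelerometer recordings.
    rng = np.random.default_rng(0)
    n_windows, n_object_classes, samples_per_window = 200, 5, 50

    X = np.stack([
        fuse_features(
            rng.random(n_object_classes),              # detector confidences
            rng.normal(size=(samples_per_window, 3)),  # accelerometer window
        )
        for _ in range(n_windows)
    ])
    y = rng.integers(0, 3, size=n_windows)             # 3 example activity labels

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))

Early fusion by concatenation is only one possible design; a late-fusion alternative would train separate per-modality classifiers and merge their predictions, which can be more robust when one modality (e.g., the camera view) is temporarily unreliable.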




This record is part of the university bibliography.

The document is provided by the publication server of the University Library Mannheim.



