Vision and acceleration modalities: Partners for recognizing complex activities


Diete, Alexander ; Sztyler, Timo ; Stuckenschmidt, Heiner


DOI: tba
Additional URL: https://h-suwa.github.io/percomworkshops2019/paper...
Document Type: Conference or workshop publication
Year of publication: 2019
Book title: 2019 IEEE International Conference on Pervasive Computing and Communications Workshops, PerCom Workshops 2019 : Kyoto, Japan, March 11-15, 2019
Page range: 101-106
Conference title: CoMoRea '19 : 15th Workshop on Context Modeling and Recognition
Location of the conference venue: Kyoto, Japan
Date of the conference: 11.-15.03.2019
Place of publication: Piscataway, NJ
Publishing house: IEEE Computer Society
ISBN: tba
Publication language: English
Institution: School of Business Informatics and Mathematics > Praktische Informatik II (Stuckenschmidt 2009-)
Subject: 004 Computer science, internet
Abstract: Wearable devices have been widely used for human activity recognition in the field of pervasive computing. One major area within this research is the recognition of activities of daily living, where in particular inertial and interaction sensors such as RFID tags and scanners have been used. An issue that may arise when using interaction sensors is a lack of certainty: a positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction occurs afterwards. Especially in health care and nursing scenarios, this verification may be critical. In our work, we aim to overcome this limitation and present a multi-modal, egocentric activity recognition approach that is able to recognize the critical objects. As it is infeasible to always expect a high-quality camera view, we enrich the vision features with inertial sensor data representing the user's arm movement. This enables us to compensate for the weaknesses of the respective sensors. We present our results of combining inertial and video features to recognize human activities in different types of scenarios, achieving an F1-measure of up to 79.6%.

This record has not yet been included in the university bibliography; it is an unpublished publication.




Citation Example

Diete, Alexander ; Sztyler, Timo ; Stuckenschmidt, Heiner (ORCID: 0000-0002-0209-3859): Vision and acceleration modalities: Partners for recognizing complex activities. pp. 101-106. In: 2019 IEEE International Conference on Pervasive Computing and Communications Workshops, PerCom Workshops 2019 : Kyoto, Japan, March 11-15, 2019. Piscataway, NJ (2019). CoMoRea '19 : 15th Workshop on Context Modeling and Recognition (Kyoto, Japan) [Conference or workshop publication]
