Vision and acceleration modalities: Partners for recognizing complex activities


Diete, Alexander; Sztyler, Timo; Stuckenschmidt, Heiner



DOI: https://doi.org/10.1109/PERCOMW.2019.8730690
URL: https://ieeexplore.ieee.org/document/8730690
Additional URL: http://sig-iss.work/percomworkshops2019/papers/p10...
Document Type: Conference or workshop publication
Year of publication: 2019
Book title: 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), 11-15 March 2019, Kyoto, Japan
Page range: 101-106
Conference title: CoMoRea '19: 15th Workshop on Context Modeling and Recognition
Location of the conference venue: Kyoto, Japan
Date of the conference: 11-15 March 2019
Place of publication: Piscataway, NJ
Publishing house: IEEE Computer Society
ISBN: 978-1-5386-9152-6, 978-1-5386-9151-9, 978-1-5386-9150-2
Publication language: English
Institution: School of Business Informatics and Mathematics > Practical Computer Science II: Artificial Intelligence (Stuckenschmidt 2009-)
Subject: 004 Computer science, internet
Abstract: Wearable devices have been used widely for human activity recognition in the field of pervasive computing. One major area of this research is the recognition of activities of daily living, where especially inertial and interaction sensors such as RFID tags and scanners have been used. An issue that may arise when using interaction sensors is a lack of certainty: a positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction occurs afterwards. Especially in health care and nursing scenarios, this verification may be critical. In our work, we aim to overcome this limitation and present a multi-modal, egocentric-based activity recognition approach that is able to recognize the critical objects. As it is infeasible to always expect a high-quality camera view, we enrich the vision features with inertial sensor data that represents the user's arm movement. This enables us to compensate for the weaknesses of the respective sensors. We present our results of combining inertial and video features to recognize human activities in different types of scenarios, where we achieve an F1-measure of up to 79.6%.
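
The abstract describes fusing per-window vision and inertial features so that one modality can compensate when the other is weak. The snippet below is only a minimal sketch of such feature-level fusion with a generic classifier and a macro-averaged F1 evaluation; the synthetic data, feature dimensionalities, and the random-forest classifier are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of feature-level fusion for multimodal activity recognition.
# Hypothetical feature extractors and data; not the paper's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Assumed placeholder data: per-window inertial features (e.g. statistics of
# arm-worn accelerometer/gyroscope signals) and vision features (e.g.
# descriptors of the egocentric camera frame), plus activity labels.
n_windows = 500
imu_features = rng.normal(size=(n_windows, 24))      # hypothetical dimensionality
vision_features = rng.normal(size=(n_windows, 128))  # hypothetical dimensionality
labels = rng.integers(0, 5, size=n_windows)          # hypothetical activity classes

# Feature-level fusion: concatenate both modalities per window so the
# classifier can fall back on the other modality when one is unreliable
# (e.g. a poor camera view of the manipulated object).
fused = np.hstack([imu_features, vision_features])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.3, random_state=0, stratify=labels
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Macro-averaged F1 over all activity classes (the abstract reports an
# F1-measure of up to 79.6% on the authors' own data).
print("F1-measure:", f1_score(y_test, clf.predict(X_test), average="macro"))
```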




This entry is part of the university bibliography.



