Vision and acceleration modalities: Partners for recognizing complex activities
Diete, Alexander; Sztyler, Timo; Stuckenschmidt, Heiner
DOI: https://doi.org/10.1109/PERCOMW.2019.8730690
URL: https://ieeexplore.ieee.org/document/8730690
Additional URL: http://sig-iss.work/percomworkshops2019/papers/p10...
Document type: Conference publication
Year of publication: 2019
Book title: 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), 11-15 March 2019, Kyoto, Japan
Page range: 101-106
Event title: CoMoRea '19: 15th Workshop on Context Modeling and Recognition
Event location: Kyoto, Japan
Event date: 11-15 March 2019
Place of publication: Piscataway, NJ
Publisher: IEEE Computer Society
ISBN: 978-1-5386-9152-6, 978-1-5386-9151-9, 978-1-5386-9150-2
Language of publication: English
Institution: School of Business Informatics and Mathematics > Practical Computer Science II: Artificial Intelligence (Stuckenschmidt 2009-)
Subject area: 004 Computer Science
Abstract:
Wearable devices have been widely used for human activity recognition in the field of pervasive computing. One major area within this research is the recognition of activities of daily living, where inertial sensors and interaction sensors such as RFID tags and scanners are especially common. An issue that may arise when using interaction sensors is a lack of certainty: a positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction follows. This verification may be critical, especially in health care and nursing scenarios. In our work, we aim to overcome this limitation and present a multi-modal egocentric activity recognition approach that is able to recognize the critical objects. As it is infeasible to always expect a high-quality camera view, we enrich the vision features with inertial sensor data that represents the user's arm movement. This enables us to compensate for the weaknesses of the respective sensors. We present our results of combining inertial and video features to recognize human activities in different types of scenarios, where we achieve an F1-measure of up to 79.6%.
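The abstract describes feature-level fusion of vision and inertial features evaluated with the F1-measure. As an illustration only, here is a minimal sketch of such fusion in Python with scikit-learn; the feature dimensions, number of classes, classifier choice, and synthetic data are all assumptions for demonstration and not the authors' actual pipeline:

```python
# Minimal sketch of feature-level fusion of vision and inertial features.
# All data is synthetic; dimensions, labels, and the classifier are
# illustrative assumptions, not the method from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows = 500      # sliding windows over the recording (assumed)
vision_dim = 128     # per-window visual descriptors (assumed)
inertial_dim = 24    # accelerometer statistics for arm movement (assumed)

X_vision = rng.normal(size=(n_windows, vision_dim))
X_inertial = rng.normal(size=(n_windows, inertial_dim))
y = rng.integers(0, 4, size=n_windows)  # four activity classes (assumed)

# Feature-level fusion: concatenate both modalities per window so the
# classifier can compensate for weaknesses of either sensor.
X_fused = np.hstack([X_vision, X_inertial])

X_train, X_test, y_train, y_test = train_test_split(
    X_fused, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("macro F1:", f1_score(y_test, clf.predict(X_test), average="macro"))
```

Concatenation is the simplest fusion strategy; the sketch only shows where the F1-measure reported in the abstract would enter an evaluation, not how the paper's 79.6% was obtained.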
This entry is part of the university bibliography.
ORCID: Stuckenschmidt, Heiner: https://orcid.org/0000-0002-0209-3859