Dr. Ren Ohmura
- Email: firstname.lastname@example.org
Dr. Ohmura’s vision: the motivation for open-ended activity recognition stems from the observation that, in the general case of activity recognition for context-aware applications, the set of relevant activities is not necessarily known at design time. Instead, the set of activities is open-ended: it depends on the specific user, evolves over time following changes in the user's habits (for example), and grows as additional relevant situations are discovered. Open-ended activity recognition must therefore be realized with limited design-time assumptions and smart run-time supervision, following unsupervised and semi-supervised principles, capitalizing on similar examples from peers when available, and drawing on autonomous-operation principles from artificial intelligence.
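The unsupervised discovery principle can be illustrated with a toy sketch: known activities are modelled as feature centroids, a sample far from every centroid is buffered as unknown, and once enough unknown samples accumulate they found a newly discovered activity class. The centroid model, class name, thresholds, and buffer size below are all illustrative assumptions, not a description of any specific published method.

```python
# Illustrative sketch only: open-ended recognition via distance-to-centroid
# novelty detection. Thresholds and the single-cluster discovery step are
# hypothetical simplifications.

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class OpenEndedRecognizer:
    def __init__(self, threshold=1.0, min_novel=3):
        self.centroids = {}         # activity label -> feature centroid
        self.threshold = threshold  # beyond this distance a sample is "unknown"
        self.novel_buffer = []      # unknown samples awaiting discovery
        self.min_novel = min_novel  # samples needed to found a new activity
        self._next_id = 0

    def classify(self, sample):
        """Return the nearest known activity, or buffer the sample as novel."""
        if self.centroids:
            label, d = min(((l, dist(sample, c)) for l, c in self.centroids.items()),
                           key=lambda p: p[1])
            if d <= self.threshold:
                return label
        self.novel_buffer.append(sample)
        if len(self.novel_buffer) >= self.min_novel:
            return self._discover()
        return "unknown"

    def _discover(self):
        """Promote the buffered novel samples to a newly discovered activity."""
        n = len(self.novel_buffer)
        centroid = [sum(v) / n for v in zip(*self.novel_buffer)]
        label = f"activity_{self._next_id}"
        self._next_id += 1
        self.centroids[label] = centroid
        self.novel_buffer = []
        return label

# After three unrecognized samples, a new activity class appears at run-time
# and subsequent similar samples are recognized as that class.
r = OpenEndedRecognizer(threshold=1.0, min_novel=3)
r.classify([0.10, 0.20])               # "unknown"
r.classify([0.15, 0.25])               # "unknown"
r.classify([0.12, 0.22])               # discovers "activity_0"
print(r.classify([0.13, 0.21]))        # recognized as "activity_0"
```

A real system would cluster the buffer into possibly several new classes and solicit labels semi-supervisedly; the single-cluster promotion here only conveys the open-ended idea.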
Overall goals include: to envision novel approaches - comprising data-processing techniques and associated methods implemented in networked embedded devices - for devising open-ended activity-aware systems. Such systems will:
- recognize hierarchical compositions of activities, including complex manipulative gestures;
- realise lifelong discovery and learning of an unbounded set of activities, continuously discovering and learning to recognize additional activities at run-time;
- be adaptive, continuously tracking changes in how activities map to sensor signals, in order to cope with changes in sensor characteristics and in the way users execute activities;
- autonomously adapt to new, likely unforeseen, situations encountered at run-time, such as the appearance of new sensors, new activities, or new sources of knowledge.
They thus achieve recognition of complex human activities in open-ended daily-life environments, where the activities of interest and the available sensors are hard to predict at design time. The approaches must be generic and seamlessly applicable to various kinds of sensors - detecting motion, presence, location, or sound - whether ambient (in smart environments) or wearable (including mobile-phone-based). They must require minimal design-time training effort, reducing the dependency on the large training sets acquired in pre-defined sensor and activity configurations that traditional approaches demand.
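The adaptivity goal - tracking drift in how activities map to sensor signals - is often realized with some form of incremental model update. As a hedged illustration (not the project's actual technique), an exponential moving average over a per-activity feature representation lets the model follow slow changes in sensor characteristics or user behaviour; `alpha` is a hypothetical adaptation rate.

```python
# Illustrative sketch only: drift tracking via an exponential moving average
# of a per-activity feature centroid. alpha is a hypothetical adaptation rate.

def adapt_centroid(centroid, sample, alpha=0.1):
    """Pull the activity's centroid a small step toward a confidently
    recognized sample, so the model follows gradual drift in the
    activity-to-signal mapping."""
    return [(1 - alpha) * c + alpha * s for c, s in zip(centroid, sample)]

# Repeated updates converge toward where recent samples actually lie.
c = [0.0, 0.0]
for _ in range(2):
    c = adapt_centroid(c, [1.0, 1.0], alpha=0.5)
print(c)  # [0.75, 0.75]
```

Applying such an update only to confidently recognized samples is one common safeguard against corrupting a class model with misclassified data.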