This work package addresses the following research questions:
- How can we build a scalable, power-efficient context evaluation framework on smartphones that continuously evaluates a large number of context conditions to effectively support social (group) applications?
- How can computational tasks be coordinated between individual smartphones and a cloud platform?
This work package will build a framework that enables capturing a wide variety of sensor data: from on-phone sensors (e.g., accelerometers, magnetometers, and GPS), from nearby Bluetooth sensors (e.g., activity and heart rate monitors such as the new MIO ALPHA watch), from user-interaction sensors (e.g., graphical user interface and speech input from WP1), or from proximity information provided by COMMIT-P9. We will use the SWAN system of the VU and Sense's CommonSense platform as starting points and extend their expressivity and scalability. SWAN allows for continuous monitoring of context conditions based on sensor data.
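To make the idea of continuously evaluated context conditions concrete, the sketch below shows a minimal engine that re-evaluates registered conditions whenever a bound sensor produces a reading and fires a callback only on state transitions. All names (`ContextEngine`, `Condition`) and the interface are illustrative assumptions for this proposal, not SWAN's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Condition:
    """A context condition over a named sensor value, e.g. 'speed > 1.5'.

    Illustrative stand-in for a SWAN-style context expression.
    """
    sensor: str
    predicate: Callable[[float], bool]
    on_change: Callable[[bool], None]
    state: bool = False  # last evaluated truth value

class ContextEngine:
    """Continuously evaluates conditions against incoming sensor readings."""

    def __init__(self) -> None:
        self.conditions: list[Condition] = []

    def register(self, cond: Condition) -> None:
        self.conditions.append(cond)

    def on_reading(self, sensor: str, value: float) -> None:
        # Re-evaluate only conditions bound to this sensor, and notify
        # listeners only when the condition's truth value changes.
        for cond in self.conditions:
            if cond.sensor != sensor:
                continue
            new_state = cond.predicate(value)
            if new_state != cond.state:
                cond.state = new_state
                cond.on_change(new_state)
```

Evaluating per sensor reading and notifying only on transitions is one way to keep power consumption low when many conditions are registered, since idle conditions cost nothing between readings.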
We will investigate ways to combine SWAN with Sense's CommonSense platform, for example by pushing data from SWAN sensors to CommonSense, or by feeding "features" extracted by CommonSense back into SWAN as local context.
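The two integration directions just mentioned can be sketched as follows: a local sensor pushes raw readings to a cloud store, and a cloud-derived "feature" is wrapped as a virtual local sensor so it can be used in on-phone context conditions like any other sensor. `CloudStore` merely stands in for a CommonSense-like platform; these interfaces are assumptions for illustration, not the real CommonSense API.

```python
class CloudStore:
    """Illustrative stand-in for a CommonSense-like cloud platform."""

    def __init__(self) -> None:
        self.readings: dict[str, list[float]] = {}

    def push(self, sensor: str, value: float) -> None:
        # Direction 1: local sensor data pushed up to the cloud.
        self.readings.setdefault(sensor, []).append(value)

    def feature(self, sensor: str, window: int) -> float:
        # A cloud-extracted "feature": here, the mean over the last
        # `window` readings (a deliberately simple example).
        recent = self.readings.get(sensor, [])[-window:]
        return sum(recent) / len(recent) if recent else 0.0

class CloudFeatureSensor:
    """Direction 2: a virtual local sensor whose value is a
    cloud-derived feature, usable in local context conditions."""

    def __init__(self, store: CloudStore, sensor: str, window: int) -> None:
        self.store, self.sensor, self.window = store, sensor, window

    def read(self) -> float:
        return self.store.feature(self.sensor, self.window)
```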
We will study the scalability of the framework to large numbers of sensor types and instances, context conditions, and users.
Smartphones and cloud platforms have very different characteristics, and these characteristics change continuously. It is important to optimally exploit the capabilities of all resources in this hybrid system.
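One way to frame this coordination problem is as a placement decision made per evaluation batch, revisited as device and network conditions change. The sketch below uses a deliberately simple cost model (energy per evaluation, weighted by remaining battery); the model and all parameter names are illustrative assumptions, not a proposed design.

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    """Snapshot of the continuously changing device/network conditions."""
    battery_level: float  # remaining charge, 0.0 .. 1.0
    local_cost_j: float   # estimated energy per local evaluation (J)
    radio_cost_j: float   # estimated energy to ship data to the cloud (J)
    network_up: bool      # is the cloud currently reachable?

def place_task(state: DeviceState) -> str:
    """Return 'local' or 'cloud' for the next evaluation batch."""
    if not state.network_up:
        return "local"  # cloud unreachable: no choice but to run on-phone
    # Weight local computation more heavily as the battery drains, so a
    # nearly empty phone prefers offloading even for cheap tasks.
    effective_local = state.local_cost_j / max(state.battery_level, 0.05)
    return "cloud" if state.radio_cost_j < effective_local else "local"
```

Because the decision is recomputed per batch from a fresh `DeviceState`, the placement adapts as the battery drains or the network comes and goes, which is the behavior the hybrid system needs.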