Visual Mining of Smartphone Sensor Data
Supervisors: Denis Lalanne
Student: Adel Rizaev
Project status: Finished
Recognizing a smartphone user's high-level activities, such as travel, work, presentations, or entertainment, from raw multi-sensor data streams is challenging. On one hand, there are technical challenges: smartphone sensors are asynchronous and noisy; sensor data may be significantly distorted or unavailable in some real-life settings (the microphone signal can be muffled or distorted by the material of a pocket, the GPS signal is unavailable indoors, and the camera is ineffective while the smartphone is in the user's pocket); and battery lifetime is limited, because running several sensors simultaneously consumes a lot of energy, while high-level activities are typically long-term. On the other hand, there are methodological challenges: different people perform their high-level activities differently, so it is difficult to build a general model for such activities (e.g. one person works in an office, another builds houses); sensor data vary depending on where and how the user carries the smartphone (in a pocket or in a bag, in different orientations, etc.); and obtaining ground-truth labels for high-level activities poses problems as well.
In the context of this thesis we implement a system that enables the capture and visual analysis of a smartphone's multi-sensor data. We model a high-level activity of the smartphone's user as a series of actions, and then try to discover high-level action patterns directly from the multi-sensor data, without any model assumptions, using the implemented visualization. We demonstrate our approach on two recorded high-level activities.
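One prerequisite for displaying several sensor streams side by side, as such a visual analysis requires, is that the asynchronous streams be aligned on a common timeline. The following is a minimal sketch of one way to do this (last-value-hold resampling); the sensor names, timestamps, and values are illustrative assumptions, not data from the project:

```python
from bisect import bisect_right

def resample(stream, timeline):
    """Align an asynchronous (timestamp, value) stream to a shared timeline
    by holding the last observed value at each tick; None before the first
    sample, so gaps remain visible in the visualization."""
    times = [t for t, _ in stream]
    aligned = []
    for t in timeline:
        i = bisect_right(times, t)  # number of samples at or before t
        aligned.append(stream[i - 1][1] if i > 0 else None)
    return aligned

# Hypothetical readings from two asynchronous sensors (timestamps in seconds).
accel = [(0.0, 0.1), (0.7, 0.9), (2.1, 0.2)]   # accelerometer magnitude
light = [(0.3, 300), (1.8, 40)]                # ambient light in lux

timeline = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]      # common 0.5 s grid
aligned = {
    "accel": resample(accel, timeline),
    "light": resample(light, timeline),
}
```

With every stream resampled onto the same grid, the rows of `aligned` can be drawn as parallel timelines, which is the kind of view the implemented visualization relies on to reveal action patterns across sensors.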