Challenges with Real-World Smartwatch based Audio Monitoring

Daniyal Liaqat, Robert Wu, Andrea Gershon, Hisham Alshaer, Frank Rudzicz, Eyal de Lara

Workshop on Wearable Systems and Applications (WearSys 2018), Munich, Germany, June 2018



Audio data from a microphone can be a rich source of information. The speech and audio processing community has explored using audio data to detect emotion, depression, Alzheimer’s disease, and even children’s age, weight, and height. The mobile community has looked at using smartphone-based audio to detect coughing and other respiratory sounds and to help predict students’ GPA. However, the audio in these studies tends to be collected in relatively controlled environments, using well-placed, high-quality microphones or phone calls. Applying these kinds of analyses to continuous, in-the-wild audio could have tremendous applications, particularly in the context of health monitoring. As part of a health monitoring study, we use smartwatches to collect in-the-wild audio from real patients. In this paper, we characterize the quality of the audio data we collected. We find that smartwatch-based audio is of sufficient quality to discern speech and respiratory sounds. However, extracting these sounds is difficult because of the wide variety of noise in the signal, and current tools handle this noise poorly. We also find that the microphone quality allows annotators to differentiate the source of speech and coughing, which adds another level of complexity to analyzing this audio.