Poster: Speech in Smartwatch Based Audio

Daniyal Liaqat, Robert Wu, Andrea Gershon, Hisham Alshaer, Frank Rudzicz, Eyal de Lara

International Conference on Mobile Systems, Applications, and Services (MobiSys), Munich, Germany, June 2018

 

Abstract

The speech analysis and audio processing community has explored using speech to detect emotion, depression, and even Alzheimer's disease. Audio data in these studies tends to be collected in controlled environments with well-placed, high-quality microphones. Applying these kinds of analyses to in-the-wild audio could make significant contributions, particularly in the context of health monitoring. However, the quality of the data is expected to drop in less controlled, mobile environments. As part of a health monitoring study, we collected in-the-wild audio from a smartwatch, and in this paper we characterize the speech in that audio. We find that smartwatch-based audio is of sufficient quality to discern speech. However, isolating speech is difficult because of the wide variety of noise in the signal, and current tools handle this noise poorly. We also find that a surprisingly high proportion of the speech does not come from the user.
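To make the point about "current tools" concrete, the sketch below shows how one might run an off-the-shelf voice activity detector over a recording and measure what fraction of frames it labels as speech. The webrtcvad library, the 16-bit mono WAV input, and the speech_ratio helper are illustrative assumptions for this sketch, not the pipeline used in the study.

    # Minimal sketch: off-the-shelf voice activity detection on a recording.
    # Assumes 16-bit mono PCM WAV at 8/16/32/48 kHz; not the study's actual tooling.
    import wave
    import webrtcvad

    def speech_ratio(wav_path, aggressiveness=2, frame_ms=30):
        """Return the fraction of fixed-length frames the VAD labels as speech."""
        vad = webrtcvad.Vad(aggressiveness)  # 0 (least) to 3 (most aggressive)
        with wave.open(wav_path, "rb") as wf:
            assert wf.getnchannels() == 1 and wf.getsampwidth() == 2
            sample_rate = wf.getframerate()  # must be 8000, 16000, 32000, or 48000 Hz
            pcm = wf.readframes(wf.getnframes())

        frame_bytes = int(sample_rate * frame_ms / 1000) * 2  # 2 bytes per sample
        frames = [pcm[i:i + frame_bytes]
                  for i in range(0, len(pcm) - frame_bytes + 1, frame_bytes)]
        speech = sum(vad.is_speech(f, sample_rate) for f in frames)
        return speech / len(frames) if frames else 0.0

On noisy, in-the-wild smartwatch audio, a detector like this tends to over- or under-trigger, which is the behavior the abstract characterizes.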

 

Poster

PDF

 

BibTeX

Bib