Background: Cough is a pervasive symptom, yet its quantification traditionally relies on self-reported frequency, which is known to be unreliable. Existing cough detectors have limited accuracy or require users to wear dedicated equipment. By analyzing smartphone-captured audio recordings, we aim to develop an automatic cough-detection algorithm that can be widely deployed in user-friendly smartphone apps.
Methods: Patients with and without obstructive lung disease were recruited from the Brigham and Women’s Hospital Lung Center, Emergency Department (ED), and Pulmonary Function Lab during methacholine challenge testing (MCT). Voluntary and involuntary coughs were recorded with a Samsung smartphone. “Ground truth” was determined by manual review of recordings or semi-automatically using a high-sensitivity filter. Machine learning algorithms were designed to maximize specificity. Models were trained on the clinic and ED datasets and independently validated on the MCT data.
Results: 127 subjects were enrolled. A random forest model trained on the clinic and ED datasets, whose top features related to signal energy/power and spectral content, performed best. In the MCT validation dataset, the model correctly classified 875 of 1045 coughs and excluded 177,074 of 181,781 non-cough sounds, yielding a sensitivity of 86.7% [confidence interval (CI): 82.8–89.8%] and specificity of 97.5% (CI: 97.1–97.9%) when accounting for multiple cough instances per subject.
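As a quick check, the raw per-event sensitivity and specificity implied by the reported confusion counts can be computed directly. Note that these raw ratios differ slightly from the reported 86.7% and 97.5%, which additionally adjust for multiple cough instances per subject; the sketch below uses only the counts stated above.

```python
# Raw (unadjusted) per-event performance from the counts reported in Results.
# The paper's headline figures account for within-subject clustering of
# coughs, so the simple ratios here are close to, but not identical to,
# the reported 86.7% sensitivity and 97.5% specificity.
tp = 875                      # coughs correctly detected
fn = 1045 - tp                # coughs missed
tn = 177_074                  # non-cough sounds correctly excluded
fp = 181_781 - tn             # non-cough sounds misclassified as cough

sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate

print(f"raw sensitivity: {sensitivity:.1%}")
print(f"raw specificity: {specificity:.1%}")
```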
Conclusions: This study lays the groundwork for an automatic, smartphone-based cough detector that performs well in environments with noise levels similar to those of the home.