Vulnerability Assessment of Voice-Activated Assistants in Smart Homes Against Adversarial Audio Attacks

  • Vijay Kumar Meena
Keywords: Voice-activated assistants, Smart home security, Adversarial audio attacks, Deep learning, Audio watermarking, Robustness evaluation

Abstract

Voice-activated assistants (VAAs) such as Amazon Echo, Google Home, and Apple HomePod have become integral components of modern smart homes, enabling hands-free device control, information retrieval, and home automation. While these systems improve convenience and accessibility, they introduce novel security risks, particularly adversarial audio attacks, in which imperceptible perturbations to audio inputs cause misclassification or unintended actions. This paper investigates the robustness of commercial voice assistants against AI-generated adversarial audio perturbations, covering both targeted and untargeted attacks. We evaluate the efficacy of defense mechanisms including audio watermarking, robust feature extraction, and adversarial training. Using quantitative metrics such as attack success rate (ASR), command misinterpretation rate, and signal-to-noise ratio (SNR), we demonstrate that VAAs are vulnerable to adversarial inputs, with ASR exceeding 92% under standard attacks. The defense strategies we implement reduce ASR to below 25%, highlighting the importance of integrated security measures. Our findings emphasize the critical need for robust defenses in smart home environments to ensure user privacy and safety.
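The two headline metrics in the abstract can be computed straightforwardly. The sketch below, with illustrative function names and toy data (not the paper's evaluation harness), shows one common way to define them: ASR as the fraction of adversarial examples that elicit the attacker's intended command, and SNR as the power ratio of the clean signal to the added perturbation in decibels.

```python
import numpy as np

def snr_db(clean, perturbed):
    """SNR in dB of the clean signal relative to the added perturbation.

    Higher values mean a quieter, less perceptible perturbation.
    """
    noise = perturbed - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def attack_success_rate(predictions, targets):
    """Fraction of adversarial examples transcribed as the attacker's target."""
    hits = sum(p == t for p, t in zip(predictions, targets))
    return hits / len(targets)

# Toy example: a 1 kHz tone at 16 kHz with small additive noise as the "attack".
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
clean = np.sin(2 * np.pi * 1000 * t)
perturbed = clean + 0.01 * np.random.randn(len(t))

print(f"SNR: {snr_db(clean, perturbed):.1f} dB")   # high SNR -> hard to hear
print(attack_success_rate(["unlock door", "play"], ["unlock door", "stop"]))
```

Under this definition, an "ASR exceeding 92%" means more than 92 of every 100 adversarial clips produced the target command, while an SNR in the tens of decibels indicates the perturbation is far quieter than the carrier audio.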

Author Biography

Vijay Kumar Meena
Lecturer, Govt. R.C. Khaitan Polytechnic College, Jaipur
Email: vijaysattawan22@gmail.com

Published
2022-02-04
Section
Regular Issue