Speech recognition is a technology that enables spoken language to be recognized and translated into data that computers can interpret. With the development of speech recognition technologies, voice assistants such as Siri and Google Voice have been introduced on mobile devices. Using these systems, mobile device users can easily execute a variety of commands on their devices. However, this convenience introduces a vulnerability: because these systems perform no authentication, an adversary can easily access the data and functions of the mobile device. Recently, a remote attack on a voice assistant was introduced, but it can be considered unrealistic because it relies on many assumptions. In this paper, we analyze the vulnerabilities of speech recognition systems on mobile devices and introduce the Toilet-time attack as a new, realistic attack model. Furthermore, we demonstrate the practicality of our attack model and evaluate attack scenarios using a new attack tool called BadVoice.