Voice commands given to our smartphones and stand-alone devices are all the rage. Known as "voice command devices" (VCDs), these include Siri, Alexa, Google Assistant, Samsung S Voice, and Bixby. These VCDs listen for our voice commands and then take action. But recent research has found that attackers could issue voice commands to these devices that we cannot hear. Yet the risk seems fairly low.
Because all users’ voices are different, voice recognition can be used to authenticate users based on the unique characteristics of a person’s voice. Several characteristics make each person’s voice unique, ranging from the size of the head to a person’s age. These differences can be quantified to create a user voice template that accepts legitimate users while rejecting impostors. Voice recognition is not to be confused with speech recognition, which accepts spoken words for input as if they had been typed on the keyboard. Speech recognition has been available for several years for issuing commands to computers instead of clicking a mouse or typing on a keyboard. VCDs have become increasingly popular, whether integrated into our smartphones or as stand-alone devices like the Amazon Echo or Amazon Echo Dot, listening for commands such as "Show me the weekend forecast" or "Show my calendar."
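The idea of matching a spoken sample against a stored voice template can be sketched very simply. The example below is a hypothetical illustration (not any vendor's actual algorithm): it assumes each voice has already been reduced to a numeric feature vector, and compares a new sample against the enrolled template with cosine similarity and a threshold.

```python
import numpy as np

def verify_speaker(sample_features, template, threshold=0.9):
    """Accept the speaker if their features are similar enough
    to the enrolled template (cosine similarity >= threshold)."""
    sim = np.dot(sample_features, template) / (
        np.linalg.norm(sample_features) * np.linalg.norm(template))
    return sim >= threshold

# Toy feature vectors (hypothetical values, standing in for real
# spectral features extracted from recorded speech).
template = np.array([0.8, 0.1, 0.3, 0.6])      # enrolled user
genuine  = np.array([0.78, 0.12, 0.29, 0.61])  # same speaker, new sample
imposter = np.array([0.1, 0.9, 0.7, 0.2])      # different speaker
```

Here the genuine sample clears the threshold while the impostor does not; real systems use far richer features, but the accept/reject decision works on the same principle.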
Now consider how we hear. As humans we can hear sounds in the range of about 20-20,000 Hz. But certain animals have much better hearing than we do, and can hear things that we cannot. Take dogs, for instance. Although puppies are born deaf and cannot hear for the first three weeks, they make up for it as they grow older. Dogs can hear higher-pitched sounds and can detect a frequency range of 67-45,000 Hz. And certain dolphins are even better: some can hear from 40-160,000 Hz.
And what about VCDs? Like animals, VCDs can actually hear sounds that we cannot. And this could open the door for attackers to--unbeknownst to us--issue commands to our VCDs that we would not even hear.
Researchers have recently demonstrated that they can take human voice commands and translate them into ultrasound frequencies above what we can hear. These commands can then be played back, issuing commands to our VCDs such as visiting a specific website (which could host malware that is then downloaded onto the device) or taking the device offline. This attack is called DolphinAttack.
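The "translation" step can be illustrated with a small sketch. This is not the researchers' exact implementation, only the general principle they describe: amplitude-modulating an audible voice signal onto an ultrasonic carrier (the sample rate and 25 kHz carrier below are assumed values for illustration).

```python
import numpy as np

def modulate_ultrasonic(baseband, sample_rate=192_000, carrier_hz=25_000):
    """Amplitude-modulate an audible baseband signal onto an
    ultrasonic carrier above the ~20 kHz limit of human hearing."""
    t = np.arange(len(baseband)) / sample_rate
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Classic AM: the carrier plus sidebands shaped by the baseband.
    return (1.0 + baseband) * carrier

# Example: a 1 kHz tone standing in for a recorded voice command.
sr = 192_000
t = np.arange(sr // 10) / sr                 # 100 ms of audio
voice = 0.5 * np.sin(2 * np.pi * 1_000 * t)
signal = modulate_ultrasonic(voice, sample_rate=sr)
```

All of the resulting signal's energy sits near 25 kHz, so humans hear nothing; nonlinearities in a device's microphone hardware can demodulate it back into the audible band, which is what lets the assistant "hear" the hidden command.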
Is this something to be concerned about? Perhaps not. For an attack to succeed, several factors must be in play. The transmitting device must be within about five to six feet of the target smartphone, so an attacker would have to be very close to the victim (although in a crowd that would not be impossible). Also, when giving commands to Siri and Google Assistant, the assistant must first be activated. And the user is immediately alerted to a voice command, because these assistants make a tone or reply with a sound that humans can detect. Thus a user would know that a command had been issued, even one they did not hear.
So the risk of this becoming a widespread issue is fairly low. But it nevertheless reminds us that our everyday devices--like VCDs--could be at risk of attack.
You can read more about the Dolphin Attack and see demonstrations at http://dolphinattack.com/projects/dolphinAttack.html