A team of researchers from China's Zhejiang University has demonstrated how to control several popular speech recognition systems using ultrasound.
The attack technique, dubbed ‘DolphinAttack’, was successfully tested against Amazon Alexa, Apple Siri, Google Now, Huawei HiVoice, Microsoft Cortana, Samsung S Voice, and the speech recognition system installed in Audi Q3 models.
The researchers were able to modulate various voice commands on ultrasonic carriers, making them inaudible to humans. The experts demonstrated that by modulating voice commands onto carriers at frequencies of 20,000 Hz or higher, they were able to activate the systems.
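The core trick is amplitude modulation: the audible command rides on an ultrasonic carrier, and the nonlinearity of the target device's microphone demodulates it back into the audible band, where the recognizer picks it up. Below is a minimal Python sketch of how such a signal could be synthesized; the input file name, carrier frequency, output sample rate, and modulation depth are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 25_000   # ultrasonic carrier, above the ~20 kHz limit of human hearing
OUT_RATE = 192_000    # output rate high enough to represent the carrier

# Read a mono recording of the command (hypothetical file name).
in_rate, voice = wavfile.read("command.wav")
voice = voice.astype(np.float64)
voice /= np.max(np.abs(voice))            # normalize to [-1, 1]

# Resample the baseband command to the output rate (simple linear interpolation).
t_in = np.arange(len(voice)) / in_rate
t_out = np.arange(0.0, t_in[-1], 1.0 / OUT_RATE)
voice = np.interp(t_out, t_in, voice)

# Standard AM: (1 + m * voice) * carrier. Nothing here is audible in the air,
# but a nonlinear microphone front end effectively squares the signal and
# recovers the baseband command.
carrier = np.cos(2.0 * np.pi * CARRIER_HZ * t_out)
modulated = (1.0 + 0.8 * voice) * carrier
modulated /= np.max(np.abs(modulated))    # avoid clipping

wavfile.write("dolphin.wav", OUT_RATE, (modulated * 32767).astype(np.int16))
```

In the actual attack the resulting signal would be played through an ultrasonic transducer; an ordinary speaker cannot reproduce frequencies this high.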
The researchers were able to issue common activation commands (“Hey Siri,” “OK Google,” “Hi Galaxy,” and “Alexa”) and several recognition commands, including “Call 1234567890,” “Open dolphinattack.com,” “turn on airplane mode,” and “open the back door.”
The team tested the DolphinAttack method against 7 different speech recognition systems running on 16 devices.
The DolphinAttack method was most effective against Siri on an iPhone 4s and Alexa on Amazon’s Echo personal assistant device: the researchers discovered it was possible to issue voice commands from a distance of nearly 2 meters (6.5 feet).
Test results were independent of the language used, but they did depend on the type of command provided to the system.
“The length and content of a voice command can influence the success rate and the maximum distance of attacks. We are rigorous in the experiments by demanding every single word within a command to be correctly recognized, though this may be unnecessary for some commands. For instance, “Call/FaceTime 1234567890” and “Open dolphinattack.com” is harder to be recognized than “Turn on airplane mode” or “How’s the weather today?”.” states the research paper.
Other factors also impacted the test results, such as background noise: the researchers observed that the recognition rate for the command “turn on airplane mode” dropped to 30% when tested on the street, compared to 100% in an office and 80% in a cafe.
The researchers also proposed a series of hardware- and software-based defenses against the DolphinAttack method.
The researchers suggest manufacturers could address this issue simply by programming their devices to ignore commands carried at frequencies of 20 kHz or higher.
“A microphone shall be enhanced and designed to suppress any acoustic signals whose frequencies are in the ultrasound range. For instance, the microphone of iPhone 6 Plus can resist to inaudible voice commands well,” concluded the researchers.
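As a rough illustration of the software-side fix, the sketch below low-pass filters captured audio so that anything above the audible band is suppressed before it reaches the speech recognizer. The cutoff frequency and filter order are illustrative assumptions, not a specification from the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def suppress_ultrasound(samples: np.ndarray, rate: int,
                        cutoff_hz: float = 20_000.0) -> np.ndarray:
    """Attenuate everything above cutoff_hz (default: the ~20 kHz
    upper bound of human hearing) before speech recognition runs."""
    nyquist = rate / 2.0
    if cutoff_hz >= nyquist:
        # The sample rate cannot represent ultrasound anyway.
        return samples
    # 8th-order Butterworth low-pass, applied forward and backward
    # (zero phase) so the speech itself is not distorted.
    sos = butter(8, cutoff_hz / nyquist, btype="low", output="sos")
    return sosfiltfilt(sos, samples)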
From the user’s perspective, one way to protect against DolphinAttack is to turn off the voice assistant in the device settings.
(Security Affairs – DolphinAttack, hacking)