Vulnerabilities in virtual assistants such as Amazon Alexa or Google Assistant may allow attackers to use laser beams to imitate voice commands and hack into the devices, researchers at the University of Michigan and the University of Electro-Communications in Tokyo have found.
Using a laser beam with just five milliwatts of power for the virtual assistants and 60 milliwatts for smartphones and tablets, the researchers discovered they could activate and hijack different virtual assistants simply by aiming light of varying intensity into the devices’ microphones, a tactic they call “light commands.” The changing intensity of the light makes the microphone respond as if it were hearing sound.
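The idea is essentially amplitude modulation: an audio command waveform is used to vary the laser’s intensity around a steady power level, so the microphone picks up the same pattern it would from a spoken command. The Python sketch below illustrates only that conversion step under assumed, placeholder values (the tone generator, modulation depth and power figures are illustrative and are not the researchers’ actual tooling or settings).

```python
import numpy as np

# Illustrative sketch of the modulation idea behind "light commands":
# an audio waveform is mapped onto a non-negative laser intensity signal,
# so the microphone responds as if it were hearing the original sound.
# All parameter values are hypothetical placeholders.

SAMPLE_RATE = 44_100        # audio samples per second
LASER_BIAS_MW = 5.0         # nominal laser power in milliwatts (figure cited in the article)
MODULATION_DEPTH = 0.8      # fraction of the bias power used to carry the audio

def command_waveform(duration_s=1.0, freq_hz=440.0):
    """Stand-in for a recorded voice command: a simple tone scaled to [-1, 1]."""
    t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    return np.sin(2.0 * np.pi * freq_hz * t)

def amplitude_modulate(audio, bias_mw=LASER_BIAS_MW, depth=MODULATION_DEPTH):
    """Map an audio waveform onto a laser intensity signal that varies around the bias power."""
    audio = np.clip(audio, -1.0, 1.0)
    return bias_mw * (1.0 + depth * audio)

if __name__ == "__main__":
    intensity = amplitude_modulate(command_waveform())
    print(f"laser power range: {intensity.min():.2f} mW to {intensity.max():.2f} mW")
```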
Engineering graduate student Benjamin Cyr, a member of the light commands research team, said light commands can be dangerous because they can give an attacker access to sensitive information from a long distance.
“With these devices, you have them in a secure location within a home where a passerby can’t talk and have it activate,” Cyr said. “But light, if it’s focused, will travel through windows and from long distances.”
Cyr said that in some tests, the team was able to activate the devices with a laser beam from more than 100 meters away. The team tested 17 different virtual assistant devices using a tripod, telescope and telephoto lens.
Videos on the research project’s website show the team successfully injecting commands into a Google Home device from varying distances, making it open a garage door and announce the time, in one case from the top of the North Campus bell tower to an office in the Bob and Betty Beyster Building.
The researchers successfully hacked into virtual assistant devices using equipment, some of it bought from Amazon, that cost less than $500 in total. Though the team had not previously heard of light commands being used to hack into virtual assistants, Cyr said the technique would be very easy for an individual to use.
Daniel Genkin, assistant professor of electrical engineering and computer science, also worked on the project. He pointed out that most people associate microphones with sound rather than light, and said the vulnerabilities that let light commands control virtual assistants could create a serious safety issue.
“The system that responds to sound is actually a system that responds to sound and light,” Genkin said. “Every time you have this gap, you have a security problem … When you think about those gaps, and where we need to map them out, then we need to think, what are the implications and how do we close them?”
Using light commands, an individual could unlock doors, shop online using the target’s information or unlock and start a vehicle connected to the target’s device, Cyr said.
LSA sophomore Kat Black uses an Amazon Alexa at home. She said the results of the study show light commands can be a serious threat.
“There have been several times where our Alexa at home kind of malfunctioned and that was unprompted,” Black said. “I think if someone is deliberately trying to alter its functioning, it’s very possible.”
Cyr said the team reached out to Amazon and Google to inform them of the security issues light commands may cause. They also plan to reach out to Facebook and other microphone manufacturers to help them find ways to fix this vulnerability.
“Our goal is to make sure that people know what can be done to the sensors currently and then to find solutions so that people can trust the sensors,” Cyr said.
For now, Cyr said, those who have a virtual assistant in their home should keep it away from windows and other spots where it could be easily reached from outside.
The research team also included EECS professor Kevin Fu and postdoctoral student Sara Rampazzi from the University as well as Takeshi Sugawara from the University of Electro-Communications in Tokyo.