Reader Steve T. sent me a link to a story confirming my decision not to own smart speakers. A woman going by the name my.data.not.yours on TikTok (I guess this is the new hip surveillance social media network) sent a request to Amazon for all of the data the company had on her. The result? Exactly what you would expect (I sanitized the TikTok link embedded in the source, so I’ll apologize here if it doesn’t work):
TikToker my.data.not.yours explained: “I requested all the data Amazon has on me and here’s what I found.”
She revealed that she has three Amazon smart speakers.
Two are Amazon Dot speakers and one is an Echo device.
Her home also contains smart bulbs.
She said: “When I downloaded the ZIP file these are all the folders it came with.”
The TikToker then clicked on the audio file and revealed thousands of short voice clips that she claims Amazon has collected from her smart speakers.
Smart speakers like the ones provided by Amazon have an always-on microphone to listen for voice commands. The problem isn’t necessarily the always-on microphone but the fact that most smart speakers don’t perform on-site audio analysis (or only perform very limited on-site analysis). Instead they record audio and send it to an off-site server for processing. Why is the audio moved off-site? Ostensibly it’s because an embedded device like a smart speaker doesn’t have the same processing power as a data center full of computers. Though I suspect the decision to move audio off-site has more to do with gaining access to valuable information like household conversations than with the accuracy of the audio analysis.
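To make the architecture concrete, here is a rough sketch of the flow: a cheap local check for the wake word, then the full clip leaving your network for server-side processing. Everything here is hypothetical (the function names, the server's behavior, the command format); Amazon's actual stack is proprietary, which is exactly the point.

```python
# Hypothetical sketch of the cloud-processing flow described above.
# None of this is Amazon's real protocol; it only illustrates the shape.

def detect_wake_word(audio_frame: bytes) -> bool:
    """On-device step: a small, cheap check for the wake word.
    Real devices run a compact local model; this stand-in just
    looks for a marker in the (pretend) audio bytes."""
    return b"alexa" in audio_frame

def send_to_cloud(audio_clip: bytes) -> dict:
    """Off-site step: the full recording leaves your network.
    A real device would upload the clip to the vendor's servers,
    which transcribe it and pick a command. Note that nothing on
    your end forces the server to delete the clip afterward."""
    transcript = audio_clip.decode(errors="replace")
    command = "lights_on" if "lights" in transcript else "none"
    return {"command": command, "clip_retained": True}

def speaker_loop(frames):
    """Simplified main loop: only frames following the wake word
    are uploaded, but the device (not you) decides what qualifies."""
    results = []
    for frame in frames:
        if detect_wake_word(frame):
            results.append(send_to_cloud(frame))
    return results
```

In this toy model, `speaker_loop([b"alexa turn on the lights", b"a private conversation"])` uploads only the first frame, but the `clip_retained` flag is the crux: once the clip is on the server, retention is entirely the vendor's choice.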
The next question one might ask is, why is the data being stored? This is why I suspect moving the data off-site has more to do with gaining access to valuable information. Once the audio has been analyzed and the commands to be executed have been transmitted back to the smart speaker, the audio recording could be deleted. my.data.not.yours discovered that the audio isn’t deleted, or at least not all of it is. But even if Amazon promised to delete all of the audio sent to its servers, there would be no way for you as an end user to verify whether the company actually followed through. Once the data leaves your network, you lose control over it.
The problem with Amazon’s smart speakers is exacerbated by their proprietary nature. While Amazon provides the source code necessary to comply with the licenses of the open source components it uses, much of the stack involved with its smart speakers is proprietary. This means you have no insight into what your Amazon smart speaker is actually doing. You have a black box and promises from Amazon that it isn’t doing any shady shit. That’s not much of a guarantee. Especially when dealing with a device that is designed to listen to everything you say.
“Hey, Alexa, delete all of my audio recordings.”
It’s important to go through the settings to make sure Amazon isn’t doing something you don’t want, like saving these audio files. It doesn’t appear she bothered to do that. And I’m sure a large percentage of users don’t either. They just like to, uh, set it, and forget it.
And she makes videos about data privacy and tech.
The problem is that even if you do enable a setting that claims to delete audio recordings, you have no way to verify that the recordings are actually deleted.