Privacy in the United States is a thorny concept, encompassing constitutionally recognized rights, individual notions of privacy and autonomy, and a growing technological trade-off between confidentiality, consent, and convenience. As we move further into the information age, more and more of our personal information ends up digital and online through our cell phones, social media accounts, wearable devices, and other cloud-based services.
In 2016, there were about 6 million smart speakers in US homes. A survey by Adobe estimated that by the end of 2018 almost half of US households would own a smart speaker, totaling over 60 million devices, with products from Amazon representing about two-thirds of the market. With the growing popularity of personal voice assistants and smart speakers, information privacy issues will be a challenge for both individuals and businesses, and there will likely be many unintended consequences yet to be imagined.
Because we have both information security and technology experts in our firm, we convened a panel of Vantage experts at our recent All Hands Meeting and posed a series of questions to them based on the following premise:
Alexa, Siri and Google Know All – Is This a Good Thing or a Bad Thing?
Participating in the panel moderated by principal Rooz Afzal were vice president Jon Young, associate Mike Niola and senior consultant Joanna Grama.
Q: Alexa, Siri and Google Know All – Is This a Good Thing or a Bad Thing?
It's a bad thing. Voice recognition and the potential advances in natural language processing could be pivotal and positive changes in our society. In particular, the potential to increase engagement with disadvantaged populations, including those with various physical impairments, could be huge. The problem is that the technology isn't coming from a few players with a tight focus on natural language processing but from huge data aggregators who make their living building profiles to sell to (or about) us.
The big players have so much information about us, individually and collectively, that they can build a phenomenally accurate picture of who we are, how we relate to others, our politics, and so many other details most of us consider private. Not only do they have an amazing portrait, the analytics they perform provide tremendous insight into what we will do. But that is a feedback loop based on algorithms and their inherent biases. Now our predicted choice becomes the encouraged choice, making it less and less of a choice. Think about "Amazon suggestions" when shopping: is that highly ranked product purchased so often because it is highly regarded, or is it a feedback loop where a paid advertisement is placed just so, becoming your most likely choice? There is tremendous societal risk from the resulting algorithmic bias and potential monoculture, to say nothing of the risk of the turnkey autocracy we're building.
I see good uses coming from these technologies across many different areas. I also believe that, just like with any technology, some companies have ulterior motives along the way, which we need to manage carefully. However, we can't be so cautious that we stifle innovation.
I am so torn on this one. I think that for some populations, voice-enabled assistants are really great things, perhaps providing opportunities for convenience and greater independence. For society, though, I think we struggle to understand how these devices work, the data they collect, and a manufacturer's responsibility to the buyers of these products.
Q: What are some of the ways that you use Alexa at home?
I use it as the hub for my "smart home," for things like turning my lights on and off or playing music. I also ask it for my schedule in the mornings as I prepare for the day, so it reads me the top three to five things on my schedule without me having to stop and check my calendar. I actually wish I could get this information displayed nicely on my mirror as I brush my teeth in the morning. That would be cool!
You also told me something else about how you use it after you leave your home. Tell us more about that.
Yes, that is actually a new feature coming down the line for Amazon's Alexa devices. When you leave your home, you can tell the device that you're heading out, and it will then listen for break-ins by detecting glass-breaking sounds or voices that are not yours. It can also listen for fire or smoke alarms. If any of those are detected, it sends a notification to your phone, which I think is very useful.
I use Siri to set alarms and timers when my hands are otherwise occupied like when I’m cooking. Most of the usage around the house is my children asking Siri questions and making fun of the useless responses.
I think I might be the Luddite in the group. I don't use these devices in my home. I see using them as a trade-off between my privacy and my convenience, and we just aren't in a place yet where convenience has won. That said, I do like interacting with the devices when they are in other people's homes, and I do use Siri on my phone (using the manual function to activate her).
Q: If Google, Alexa and Siri are always listening / watching and they hear/read/predict something based on your voice and/or behavior that indicates someone has done something illegal, are they in any way responsible to do something about it?
I think this is an area where the law is really unsettled. We are still trying to figure out what the legal parameters are for providing data from these devices when a crime has been committed (so, think of Alexa as a witness to a crime), whether one of these always-on recording devices could destroy a legal privilege, or whether I need to warn guests as they enter my home that I have a voice-enabled device listening to our conversations. What else manufacturers could do with this type of data, and what types of conclusions could be drawn about a person's mental state from gathered data, feels both promising and perilous to me.
What if Alexa could tell that someone is becoming depressed? Should the company do something about it? Do these companies have a responsibility to step in, or could they be sued for not intervening?
This also opens up interesting questions around informed consent. We're all (or at least most of us) carrying these listening and recording devices with us everywhere. Should I post a sign at the entrance to my residence about the dozens of "smart devices" that might pick up and record your audio if they think they heard the trigger word? When I'm within audible range of someone else using one of these devices, do they need my informed consent to use it, knowing it might capture my audio?
I think the affirmative reporting requirement is another very interesting question. In most states, computer technicians have an obligation to report evidence of child pornography and there are many other areas (e.g., potential abuse found by a physician) where there may be an obligation to report to the relevant authorities. If and how those obligations apply to these AI technologies is a very interesting question.
The privacy debate over smart speakers and personal voice assistants will continue to simmer and will likely come to a boil as these devices become even more ubiquitous in our lives. There are many positive use cases for these devices, especially in healthcare and home or building automation, but it is important for everyone to recognize the potential downsides as well.