People are increasingly using virtual assistants as their closest confidantes and Apple, Google and Amazon are responding. But are we telling them too much?
It’s three in the morning and my room is bathed in the glow of my phone. Like one in three people, I check my smartphone when I wake up in the middle of the night. I can’t sleep and so wander from one social-media app to another, my thumbs scrolling through what feels like miles of emptiness. “Siri, what is the meaning of life?” I ask without thinking. “I have stopped asking myself this kind of question,” she answers. I ask again, because I like it better when she says “nothing Nietzsche wouldn’t teach you”.
I am not the only one turning to Siri for life advice. Apple is currently recruiting a Siri engineer with a background in psychology to help make its virtual assistant better at answering these sorts of questions.
“People talk to Siri about all kinds of things, including when they’re having a stressful day or have something serious on their mind. They turn to Siri in emergencies or when they want guidance on living a healthier life”, says the job ad.
More than half of interactions with Amazon’s virtual assistant Alexa are “non-utilitarian and entertainment-related”, according to the company, a category that includes existential questions and confidences.
Google has a full “personality team” in the US composed of comedy writers, video-game designers and mysteriously named “empathy experts”, in charge of defining answers to complicated questions asked of Google Assistant. Microsoft, meanwhile, has an “editorial team” responsible “for crafting and creating Cortana’s responses to make sure that all of our responses ladder up to Cortana’s core personality pillars”.
Have we all gone crazy, whispering confidences into our electronic devices? I don’t think so. Talking to my phone, I don’t feel any different from Ross in the fifth season of Friends, asking a Magic 8 ball if he should stop seeing Rachel.
“Discussions with our phones help us introspect,” says Alexandre Lacroix, a French philosopher who investigated how the internet disrupts our lives in Ce qui nous relie – Jusqu’où Internet changera nos vies (What connects us – How is the Internet Changing the Way we Live?).
“We don’t expect any precise answers when we ask Siri what the meaning of life is. We use it as a tool in our quest for self-knowledge,” says Lacroix.
It seems like a new kind of diary. Just as we feel free to write what we really think in a diary’s pages, we now tend to disclose more about our innermost feelings to an AI than to another human, according to a study conducted by the Institute for Creative Technologies in Los Angeles.
The difference is now the diary answers back – and records everything.
“This is a tremendous opportunity in terms of mental health care”, says Eleni Linos, an assistant professor of medicine at the University of California, who has advised Apple on how to improve Siri and co-authored a paper about how conversational agents such as voice assistants could improve our health.
“Conversational agents can direct us to the right resource, when needed,” Linos says.
Amazon claims to have trained Alexa to answer in a compassionate and helpful way when asked about loneliness or depression, and to provide the number of a depression hotline. Google does the same in certain regions of the world.
Alison Darcy is a former senior researcher at Stanford and founder of Woebot, a psychology chatbot that uses cognitive behavioural therapy. She says chatting with an AI can have a very positive impact on mental health, perhaps simply by reminding us to take some time to reflect and introspect. But, she says, “for ethical reasons, the patient must be aware of the science behind the service”.
Microsoft, Apple and Amazon do not disclose much about how they shape their answers to existential questions. Nor do we know much about the precise role of psychologists and the science they rely upon.
“This is critical because psychology is highly political,” says Luke Stark, a sociologist at Dartmouth College. “Your conversational agent may know way before you [do that you have] a mental illness.”
Research is under way at MIT exploring whether mental illness could be diagnosed simply by analysing the way you speak. It seemed far-fetched to me at first, but this is only the beginning of what artificial intelligence can learn about us.
“Considering how sensitive this data is, it should be as protected as medical files,” adds Stark. Amazon, Microsoft, Apple and Google all keep track of your utterances to conversational agents for various amounts of time, in order to customise the experience.
My conversations with Siri are far more serious and sensitive than I thought, especially considering that, while hiring a Siri health engineer, Apple is also looking for a “behavioural data scientist” whose responsibilities include “translate insights into products and programs that consumers engage with and help drive behaviour change” and “understand how digital tools and software can influence behaviour”.
Sure, the ad does not say Apple analyses my Siri conversations to influence my behaviour or manipulate me without my knowledge. But it does mean that, in theory, it is possible.
guardian.co.uk © Guardian News & Media Limited 2010