
Speaking To A Deceased Loved One May Soon Be Possible Through Alexa

In the future, Amazon Alexa could be able to talk to people in the voices of relatives and friends, a feature that aims to "make memories last." ROBERT LEVER/AFP via Getty Images

The new voice assistant feature was highlighted Wednesday at Amazon's re:MARS (machine learning, automation, robots, and space) conference.

Rohit Prasad, senior vice president of Alexa, demonstrated a bizarre new capability: voice imitation. After hearing less than a minute of a person's speech, Alexa would be able to speak in that person's voice.

"While AI can't eliminate that pain of loss, it can definitely make the memories last," he said.

According to Sky News, a demo video showed a young child asking that their grandmother read them a story; Alexa confirmed the request and then switched to the grandmother's voice.

According to an Amazon representative who spoke to Engadget, the new skill produces a synthetic voiceprint after being trained on audio of the person whose voice it is meant to mimic.

Prasad emphasized that the tech firm was looking for ways to personalize AI as much as possible.

An Issue of Security

CNET reported that, per an Amazon spokesperson, the voice-imitating tool is not designed solely for family members who have passed away. Building on recent advances in text-to-speech technology described in an Amazon paper, the team can now produce high-quality voices from far less data by applying a voice filter, rather than requiring hours of recordings in a professional studio.
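To make that data-efficiency idea concrete, the sketch below illustrates the general shape of few-shot voice cloning: a speaker encoder compresses a short reference clip into a fixed-size "voiceprint" vector, which a text-to-speech decoder could then condition on. This is a toy illustration under assumed parameters (16 kHz audio, 25 ms frames, a 64-dimensional embedding), not Amazon's actual method; a production system would use a trained neural speaker encoder rather than the simple spectral stand-in shown here.

```python
# Conceptual sketch of voiceprint extraction for voice cloning.
# Hypothetical, for illustration only -- not Amazon's pipeline.
import numpy as np

def frame_signal(audio: np.ndarray, frame_len: int = 400, hop: int = 160) -> np.ndarray:
    """Slice a mono waveform into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n_frames = 1 + max(0, (len(audio) - frame_len) // hop)
    return np.stack([audio[i * hop : i * hop + frame_len] for i in range(n_frames)])

def speaker_embedding(audio: np.ndarray, dim: int = 64) -> np.ndarray:
    """Stand-in speaker encoder: average log-power spectra over frames,
    truncate to `dim`, and L2-normalize. A real system would use a trained
    neural encoder here; this just shows the clip-to-vector step."""
    frames = frame_signal(audio)
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frames.shape[1]), axis=1))
    log_power = np.log1p(spectra).mean(axis=0)[:dim]
    return log_power / (np.linalg.norm(log_power) + 1e-9)

# A placeholder 30-second "recording" at 16 kHz stands in for the short
# reference clip; real input would be actual speech audio.
rng = np.random.default_rng(0)
reference_clip = rng.standard_normal(16_000 * 30)
voiceprint = speaker_embedding(reference_clip)
print(voiceprint.shape)  # (64,) -- this vector would condition the TTS decoder
```

The design point the report hints at is that once such an encoder exists, cloning a new voice requires only a short clip to compute the embedding, not hours of studio recordings.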

Deepfake audio tools, which use text-to-speech technology to produce synthetic voices, have long worried security experts, who fear they could open the gates to a wave of new scams. Voice cloning software has already enabled a variety of crimes, including an incident two years ago in the United Arab Emirates in which scammers convinced a bank manager to transfer $35 million by posing as a company director.

Deepfake audio crimes remain rare, though, and the tools currently at scammers' disposal are relatively unsophisticated.

Experts will also be watching how the feature is received. Although it appears to require user consent, there is a moral quandary around the rights to a deceased person's voice and how long it may be stored on personal devices or company servers, per the New York Post.

Several users on Twitter expressed reservations about the creepy feature, which is still in development, according to Engadget.

A critic smugly stated: "Of course, endless digital copies of memories are what makes things 'last.'"

One user quipped: "Uploading old voicemails so I can program Alexa to have my dead grandparents say 'beware! beware!' at 3 am."

Other detractors compared the feature to the dystopian "Black Mirror" episode "Be Right Back," in which a woman orders an AI replica of her deceased husband.

It's unclear how far along the feature is in development or when Alexa voice assistants could start receiving it. Since re:MARS showcases Amazon's forward-looking work in ambient computing, including developments in Alexa, the functionality may not arrive any time soon.

Tags
Tech, Amazon