Anonymous
07/26/2022 (Tue) 23:50:21
No.7591
Amazon, just say no: The looming horror of AI voice replication
If an AI like Alexa really can convert less than a minute of recorded voice into real-time speech, it opens the door to dystopian gaslighting at a whole new level. It's frightening, creepy, and disturbing.
Last week, we ran a news article titled, "Amazon's Alexa reads a story in the voice of a child's deceased grandma." In it, ZDNet's Stephanie Condon discussed an Amazon presentation at its re:MARS conference (Amazon's annual confab on topics like machine learning, automation, robotics, and space).
In the presentation, Amazon's Alexa AI Senior VP Rohit Prasad showed a clip of a young boy asking an Echo device, "Alexa, can grandma finish reading me 'The Wizard of Oz'?" The video then showed the Echo reading the book using what Prasad said was the voice of the child's dead grandmother.
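For a sense of how low the bar already is here, zero-shot voice cloning of this general sort is available in open-source tooling today. Here's a minimal sketch, assuming the open-source Coqui TTS library and its YourTTS model; the reference clip filename and the output path are placeholders, not anything Amazon demoed:

from TTS.api import TTS

# Load YourTTS, a multilingual model that supports zero-shot voice cloning
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts")

# Clone the voice from a short reference clip and speak new text with it
tts.tts_to_file(
    text="Dorothy lived in the midst of the great Kansas prairies...",
    speaker_wav="grandma_sample.wav",  # hypothetical clip, well under a minute
    language="en",
    file_path="cloned_story.wav",
)

The point isn't this particular library; it's that a short audio clip and freely available software are already enough to approximate someone's voice, which is exactly what makes the demo below so unsettling.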
Hard stop. Did the hairs on the back of your neck just stand up? 'Cause that's not creepy at all. Not at all.
Prasad, though, characterized it as beneficial, saying, "Human attributes of empathy and affect are key for building trust. They have become even more important in these times of the ongoing pandemic, when so many of us have lost someone we love. While AI can't eliminate that pain of loss, it can definitely make their memories last."
Hmm. Okay. So let's deconstruct this, shall we?
I hear dead people
There is a psychological sensory experience clinically described as SED, for "sensory and quasi-sensory experiences of the deceased." This is a more modern clinical term for what used to be described as hallucinations.