If you know what to listen for, a person's voice can tell you about their education level, emotional state and even their profession and finances, more than you might imagine. Now, scientists warn that voice recordings and voice-to-text technology could be used for price gouging, unfair profiling, harassment or stalking.
While humans might be attuned to more obvious cues such as fatigue, nervousness, happiness and so on, computers can do the same — but with far more information, and much faster. A new study claims intonation patterns or your choice of words can reveal everything from your personal politics to the presence of health or medical conditions.
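To make that claim concrete, here is a minimal, hypothetical sketch of the kind of analysis the study describes: pulling simple intonation and loudness measurements out of a recording, the sort of features a trained model could then use to guess at attributes like mood. The file name, feature choices and statistics are illustrative assumptions for this example only, not anything taken from the study itself.

```python
# Illustrative sketch only: extract crude prosodic features (pitch and loudness)
# from a voice recording. A real attribute-inference system would feed features
# like these into a trained classifier; no such model is shown here.
import numpy as np
import librosa  # widely used audio-analysis library

# "call.wav" is a placeholder filename for any recorded utterance.
y, sr = librosa.load("call.wav", sr=16000)

# Fundamental frequency (pitch) track; unvoiced frames come back as NaN.
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
pitch = f0[~np.isnan(f0)]

# Short-term loudness (RMS energy) per frame.
rms = librosa.feature.rms(y=y)[0]

# Summary statistics of the kind a profiling model might consume:
# average pitch, intonation range and loudness variability.
features = {
    "mean_pitch_hz": float(np.mean(pitch)) if pitch.size else 0.0,
    "pitch_range_hz": float(np.ptp(pitch)) if pitch.size else 0.0,
    "loudness_variability": float(np.std(rms)),
}
print(features)
```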
While voice processing and recognition technology present opportunities, Aalto University’s speech and language technology associate professor Tom Bäckström, lead author of the study, sees the potential for serious risks and harms. If a corporation understands your economic situation or needs from your voice, for instance, it opens the door to price gouging, like discriminatory insurance premiums.
And when voices can reveal details like emotional vulnerability, gender and other personal details, cybercriminals or stalkers can identify and track victims across platforms and expose them to extortion or harassment. These are all details we transmit subconsciously when we speak and which we unconsciously respond to before anything else.
Jennalyn Ponraj, founder of Delaire and a futurist working on human nervous-system regulation amid emerging technologies, told Live Science: “Very little attention is paid to the physiology of listening. In a crisis, people don’t primarily process language. They respond to tone, cadence, prosody, and breath, often before cognition has a chance to engage.”
Watch your tone
While Bäckström told Live Science that the technology isn’t in use yet, the seeds have been sown.
“Automatic detection of anger and toxicity in online gaming and call centers is openly talked about. Those are useful and ethically robust objectives,” he said. “But the increasing adaptation of speech interfaces towards customers, for example — so the speaking style of the automated response would be similar to the customer’s style — tells me more ethically suspect or malevolent objectives are achievable.”
He added that although he hasn’t heard of anyone caught doing something inappropriate with the technology, he doesn’t know whether it’s because nobody has, or because we just haven’t been looking.
We must also remember that our voices are everywhere. Between every voicemail we leave and every customer service call that is "recorded for training and quality purposes," a digital record of our voices exists in volumes comparable to the rest of our digital footprint of posts, purchases and other online activity.
If, or when, a major insurer realizes they can increase profits by selectively pricing cover according to information about us gleaned from our voices using AI, what will stop them?
Bäckström said even talking about this issue might be opening Pandora’s Box, making both the public and “adversaries” aware of the new technology. “The reason for me talking about it is because I see that many of the machine learning tools for privacy-infringing analysis are already available, and their nefarious use isn’t far-fetched,” he said. “If somebody has already caught on, they could have a large head start.”
As such, he’s emphatic that the public needs to be aware of the potential dangers. If not, then “big corporations and surveillance states have already won,” he added. “That sounds very gloomy but I choose to be hopeful I can do something about it.”
Safeguarding your voice
Thankfully, there are potential engineering approaches that can help protect us. The first step is measuring exactly what our voices give away. As Bäckström said in a statement, it’s hard to build tools when you don’t know what you’re protecting.
That idea has led to the creation of the Security And Privacy In Speech Communication Interest Group, which provides an interdisciplinary forum for research and a framework for quantifying information contained in speech.
From there, it’s possible to transmit only the information that’s strictly necessary for the intended transaction. Imagine a system that converts the speech to text so that only the raw information needed is passed on: either the operator at your provider types the information into their system (without recording the actual call), or your phone converts your words to a text stream for transmission.
As Bäckström said in an interview with Live Science: “The information transmitted to the service would be the smallest amount to fulfill the desired task.”
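As a rough illustration of that data-minimization idea, the sketch below transcribes speech on the device and sends only the resulting text to the service, never the audio. The `transcribe_locally` function is a hypothetical stand-in for whatever offline recognizer a phone might run, and the URL and field names are placeholders, not a real API.

```python
# Illustrative sketch of data minimization for a voice-driven request:
# transcribe on-device, then transmit only the minimal text the task needs.
import requests


def transcribe_locally(audio_path: str) -> str:
    """Placeholder for an on-device speech-to-text step.

    In a real system this would run an offline recognizer over audio_path and
    return only the words; here it returns a canned string so the sketch runs.
    """
    return "I would like to update my mailing address."


def submit_request(audio_path: str) -> None:
    # The raw recording never leaves the device; only the text does.
    text = transcribe_locally(audio_path)

    # Strip the request down to what the transaction actually requires.
    payload = {"request_text": text}

    # Hypothetical service endpoint used purely for this example.
    requests.post("https://example.com/service/requests", json=payload, timeout=10)


if __name__ == "__main__":
    submit_request("my_request.wav")
```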
Beyond that, he said, if we get the ethics and guardrails of the technology right, then it shows great promise. “I’m convinced speech interfaces and speech technology can be used in very positive ways. A large part of our research is about developing speech technology that adapts to users so it’s more natural to use.”
“Privacy becomes a concern because such adaptation means we analyze private information — the language skills — about the users, so it isn’t necessarily about removing private information, it’s more about what private information is extracted and what it’s used for.”
Keumars Afifi-Sabet
Having your privacy violated is an awful feeling, whether it's being hacked or seeing social media push ads that make you think a private conversation wasn't so private. Studies like this, however, show we've barely scratched the surface when it comes to how we can be targeted, especially with something as intimate and personal as our own voice.
With AI improving and other technologies becoming far more sophisticated, it's clear we don't truly have a grasp on how this will affect us, specifically, how technology might be abused by certain forces to exploit us. Although consumer privacy has been massively undermined in the last few decades, there's plenty of room left for what we hold closest to us to be commodified at best or, in the worst cases, weaponized against us.
Source: www.livescience.com
