From Fast Company:
Devices like Amazon Echo could someday turn into a treasure trove for the developers who make voice assistant skills, but first companies have to figure out where to draw the line between data sharing and consumer privacy.
Now that dilemma is heating up: Citing three unnamed sources, The Information reported this week that Amazon is considering whether to provide full conversation transcripts to Alexa developers. This would be a major change from Amazon’s current policy, under which the company provides only basic information—such as the total number of users, the average number of actions they’ve performed, and rates of success or failure for voice commands. Amazon declined to comment to The Information regarding the claims, but the change wouldn’t be unprecedented: Google’s voice assistant platform already provides full transcripts to developers.
The potential move by Amazon underscores how it is caught between two worlds with its Alexa assistant, especially with regard to privacy. By keeping transcripts to itself, Amazon can better protect against the misuse of its customers’ data and avoid concerns about eavesdropping. But because Alexa already gives developers the freedom to build virtually any kind of voice skill, their inability to see what customers are saying becomes a major burden.
. . . .
With Google Assistant, developers can view a transcript for any conversation with their particular skill. Uber, for example, can look at all recorded utterances from the moment you ask for a car until the ride is confirmed. (It can’t, however, see what you’ve said to other apps and services.) Google’s own documentation confirms this, noting that developers can request “keyboard input or spoken input from end user” during a conversation.
For developers, this data can be of immense utility. It allows them to find out if users are commonly speaking in the wrong syntax, or asking to do things that the developer’s voice skill doesn’t support.
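To make the utility concrete, here is a minimal sketch of how a developer might mine transcripts for unsupported requests. It assumes a Dialogflow-style webhook payload (the format Google Assistant actions commonly use), where the user’s raw utterance arrives in `queryResult.queryText` and a fallback intent fires when the skill didn’t understand the request; the sample payload and function name are illustrative, not from the article.

```python
import json
from typing import Optional

# Hypothetical Dialogflow-style webhook payload. In this format,
# queryResult.queryText carries the user's full utterance, and the
# fallback intent fires when no defined intent matched the request.
SAMPLE_REQUEST = json.dumps({
    "queryResult": {
        "queryText": "book me a ride to the airport tomorrow at 6am",
        "intent": {"displayName": "Default Fallback Intent"},
    }
})

def unhandled_utterance(raw_request: str) -> Optional[str]:
    """Return the raw utterance if the skill failed to match it, else None."""
    body = json.loads(raw_request)
    result = body.get("queryResult", {})
    intent = result.get("intent", {}).get("displayName", "")
    if intent == "Default Fallback Intent":
        return result.get("queryText")
    return None

missed = unhandled_utterance(SAMPLE_REQUEST)
if missed:
    # Aggregating these over many conversations surfaces phrasings the
    # skill mishandles and features users are asking for but don't exist.
    print(f"Unsupported request: {missed}")
```

This is exactly the kind of analysis that is impossible without transcript access: with only aggregate success/failure counts, a developer knows *that* commands failed, but not *what* users were actually trying to say.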
. . . .
In terms of sharing data with developers, Apple’s Siri voice assistant is on the opposite side of the spectrum from Google. Developers who work with SiriKit get no information about usage from Apple, not even for basic things like how many people use voice commands to access an app, or which voice commands are most commonly used.
. . . .
But keep in mind that Siri’s approach to third-party development is entirely different from that of Google and Amazon. Instead of letting developers build any kind of voice application, Apple only supports third-party voice commands in a handful of specific domains, such as photo search, workouts, ride hailing, and messaging. And instead of letting those apps drive the conversation, Apple controls the back-and-forth itself. The apps merely provide the data and some optional on-screen information.
Because these apps don’t communicate with users directly, there’s no need for them to have conversation transcripts in the first place. Instead, Apple can look at what users are trying to accomplish and use that data to expand Siri on its own.
The downside to this approach is that Siri just isn’t as useful as other virtual assistants.
Link to the rest at Fast Company
If PG lived in China, he would be inclined not to use Alexa.