TechCrunch has reported on partnership talks between Apple and Nuance (makers of voice recognition technology), saying the deal may be well on its way and could show its face at WWDC this summer.
The idea of advanced voice recognition on iOS devices is intriguing. From TechCrunch:
More specifically, we’re hearing that Apple is running Nuance software — and possibly some of their hardware — in this new data center. Why? A few reasons. First, Apple will be able to process this voice information for iOS users faster. Second, it will prevent this data from going through third-party servers. And third, by running it on their own stack, Apple can build on top of the technology, and improve upon it as they see fit.
Obviously, Nuance, which owns the technology, would have to sign off on all of this. And we now believe that they have. Hence, the big time partnership that should be formally announced soon.
While Apple could probably build a solution like this themselves, licensing the technology is much better in the long run: it lets the people who do it best keep doing what they do, and it saves Apple some money as well.
Apple may even be deploying it on their massive cloud so they can integrate it into iOS 5 and make Siri-style voice recognition and “artificial intelligence” core to the iPhone, iPad, and iPod touch moving forward.
The lingering question, I think, is how much people will use a technology like this. Your thoughts? Sound off in the comments!