This just in: QNX has announced a new framework that will help speech recognition systems in cars understand a speaker’s intent. The framework extracts meaning from the driver’s spoken words, enabling in-car systems to set complex navigation destinations, create calendar appointments, dictate email or text messages, or even perform general Internet searches.
The framework, which is a component of the QNX CAR application platform, will enable in-car systems to take advantage of AT&T Watson, a multilingual speech engine that runs on a cloud-based server to provide high-quality, low-latency voice recognition.
Determining the driver's intent starts on the server, where the Watson engine analyzes the driver's words and fits them to known patterns. The results are then handed off to the car, where the intent engine from QNX performs further speech analysis to determine how to act.
According to my colleague Andy Gryc, “The server-side analysis provided by AT&T Watson is optimized for complex scenarios, such as a navigation application in which the driver may verbalize destinations in hundreds of different ways. The QNX client-side analysis grants car makers greater flexibility, enabling them to adapt the AT&T Watson results to a variety of in-car applications, regional aspects, or personal tastes.”
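To make the division of labor concrete, here is a minimal sketch of how a client-side intent engine might refine a cloud recognition result into an in-car action. The names and structure are entirely hypothetical; QNX has not published this API, and the example only illustrates the cloud-to-client handoff described above.

#include <iostream>
#include <string>

// Hypothetical result returned by the cloud-based speech engine:
// a transcript plus a coarse category guessed on the server.
struct CloudResult {
    std::string transcript;   // e.g. "find me a Starbucks"
    std::string category;     // e.g. "search", "navigation", "message"
};

// Hypothetical client-side intent mapping: refine the server's coarse
// category into a concrete in-car action, applying local knowledge
// such as installed apps, regional settings, or user preferences.
std::string resolveIntent(const CloudResult& r) {
    if (r.category == "navigation" ||
        r.transcript.find("find me") != std::string::npos)
        return "NAVIGATE_TO: " + r.transcript;    // hand off to the navigation app
    if (r.category == "message")
        return "COMPOSE_MESSAGE: " + r.transcript;
    return "WEB_SEARCH: " + r.transcript;         // fall back to a general search
}

int main() {
    CloudResult r{"find me a Starbucks", "search"};
    std::cout << resolveIntent(r) << '\n';        // prints "NAVIGATE_TO: find me a Starbucks"
}

In this sketch, the server supplies the heavy recognition work, while the client decides what the words mean for this particular car, which mirrors the split of responsibilities Gryc describes.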
The intent system will be offered as a component of the QNX CAR application platform in 2013. For more information, read the press release.
Find me a Starbucks!
QNX and AT&T have already done a lot of work to bring the Watson speech engine to cars. For an example, check out this Engadget video of the QNX concept car (a modified Porsche 911), filmed at an AT&T event this past April: