The automotive industry is experiencing rapid technological advances, with new mobility landscapes emerging faster than expected and demanding more focused, streamlined strategies from industry players. Nuance Communications, a pioneer in voice recognition technology, decided to spin off its automotive operations into a separate company.
In the latest of our Movers & Shakers series, Frost & Sullivan’s Anubhav Grover, Research Analyst – Connected Cars, interviewed Eric Montague, Senior Director, Strategy & Product Marketing at Cerence, to learn more about Nuance Communications’ spin-off, Cerence.
Key themes covered in the interview included the newly formed Cerence’s mission, product portfolio, and platform strategy, the use cases targeted by automakers, the competitive landscape, and much more.
Anubhav Grover (AG): Can you take us through the events that led up to the spin-off of Cerence from Nuance Communications?
Eric Montague (EM): Nuance was previously focused on four different businesses: healthcare, enterprise, document imaging and automotive. When Nuance’s new CEO, Mark Benjamin, joined the company in early 2018, he saw a clear need to simplify the business and focus on opportunities that would enable all aspects of the business to succeed. This led to the decision to sell the document imaging business (to Kofax in February 2018) and spin out the automotive business into a separate, publicly traded company – Cerence. The decision was meant to give Cerence greater control over its own capital and investments and an increased ability to innovate with speed and agility, so that it can deliver on its mission of building immersive experiences that make people feel happier, safer, more informed, and more entertained in their cars.
AG: How would you describe Cerence’s mission? We’re particularly interested in knowing how your product strategy will help achieve the company’s mission.
EM: Cerence is a pure play automotive technology company focused on providing AI-powered voice assistant experiences for connected and, eventually, autonomous cars. The ability to focus solely on the automotive market will help us achieve the next level of technological innovation in this space.
As Cerence, we will continue our development of technologies that will make the button-free car a reality, including voice-activated controls that allow drivers to do things like make calls, ask for directions, and change the radio station while keeping their hands on the wheel and using their most natural way of speaking. We’ll also continue exploring new and innovative use cases for artificial intelligence, sensors, and in-cabin cameras, which have opened us up to a much broader range of services, such as emotion detection based on drivers’ facial expressions, gestures, and the pitch of their voices.
With our new, hyper-focused vision, we’ll not only be able to meet the most ambitious consumer expectations, but also challenge the industry to redefine the immersive driver experience.
AG: How would you describe your competitive landscape at this stage? How do you think your product portfolio will fare in the market, especially in terms of customer satisfaction?
EM: Our competitive landscape is segmented into two groups: large technology companies moving into the automotive space, and more specialized AI and voice recognition players. The big tech players pose a threat because they want to make the car their next big focus area as the data, revenue and consumer knowledge opportunities are huge.
That said, Cerence already has a significant market presence compared to these big technology companies, with our Cerence Drive platform shipping in approximately one out of every two cars. We know that we are a valuable partner to our automaker customers, as we have been for more than 20 years. It’s our role to partner with them and guide them through the changing landscape, as many of them are grappling with how to deliver the best of both worlds – a highly specialized, branded in-car voice assistant and access to the broader digital ecosystems that drivers are demanding. As such, we’re prioritizing interoperability between these two worlds with our cognitive arbitration solution, with the goal of delivering the best possible solution for our OEM partners and end users. For us, this means a seamless, AI-powered experience that supports third-party voice assistants through a consistent, OEM-branded interface.
Spinning off from Nuance will put us in a unique position to grow and reaffirm our value proposition to the automotive industry. As we continue to prioritize customer satisfaction, we will focus on meaningful, competitive innovation that furthers this mission. As Cerence, we will be well positioned to do so with speed and agility.
AG: How would Cerence defend its market position in the face of aggressive competition from technology giants like Amazon and Google?
EM: It’s less about competing and more about coexisting to deliver the best solutions to our customers and their drivers.
We know our automaker customers will want to keep Cerence in the mix as an independent, neutral partner, rather than possibly ceding control of their digital experience and data to a large tech company. We give carmakers the ability to maintain control of their car-related data while still incorporating the big-tech voice assistants that drivers and car buyers want to have available. Leveraging our cognitive arbitrator, a Cerence system built into the car can listen to the driver’s voice commands and direct them to the voice assistant best suited to the task. For example, an order to buy something online could be routed to a general-purpose voice assistant, while a question about the car’s fuel level would be answered by the car’s voice assistant.
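To make the cognitive arbitration idea concrete, here is a minimal, hypothetical sketch of how an in-car arbiter might route a recognized utterance to the most suitable assistant. The keyword lists, assistant labels, and `route` function are illustrative assumptions, not Cerence’s actual implementation, which would rely on trained intent models rather than keywords.

```python
# Hypothetical sketch of cognitive arbitration: route a recognized utterance
# to the assistant best suited to handle it. Keywords and assistant names
# are illustrative only; a production arbiter would use trained NLU models.

CAR_DOMAINS = {"fuel", "tire", "window", "climate", "navigation"}
SHOPPING_DOMAINS = {"buy", "order", "shopping"}

def classify_domain(utterance: str) -> str:
    """Very rough keyword-based domain classification (stand-in for an NLU model)."""
    words = set(utterance.lower().split())
    if words & CAR_DOMAINS:
        return "car"
    if words & SHOPPING_DOMAINS:
        return "general"
    return "general"

def route(utterance: str) -> str:
    """Send the utterance to the OEM-branded car assistant or a third-party assistant."""
    if classify_domain(utterance) == "car":
        return f"[car assistant] handling: {utterance!r}"
    return f"[third-party assistant] handling: {utterance!r}"

if __name__ == "__main__":
    print(route("how much fuel do I have left"))  # routed to the car assistant
    print(route("order more paper towels"))       # routed to a general-purpose assistant
```

The design point the sketch is meant to capture is a single, OEM-branded front end that can hand work off to multiple assistant back ends behind the scenes.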
AG: We know that major automakers (Daimler, BMW, Ford, and some Chinese OEMs) and top tier 1 suppliers use the Cerence Mobility Assistant Platform. What are the current and future use cases that you foresee for OEMs that seek to use the platform?
EM: With voice as the most natural form of interaction, the Cerence Drive platform is key to many major automakers’ personal assistants, powering a multitude of features that are core to the in-car experience. Features could include:
- Customizable wake-up word – A unique capability in the voice assistant landscape: drivers can use the standard “Hey [Insert Car Brand]” wake-up word or change the assistant’s name to one of their choosing for a more personalized experience.
- Smart, voice-activated car manual – Drivers can now access the entire car manual using their voice, an increasingly important feature as cars become even more complex and printed manuals become harder to navigate.
- Voice-triggered experience and caring modes – Drivers can express their emotional and cognitive states in natural language, such as feeling stressed or tired, and the car responds by switching several of its systems into an appropriate mode for the situation (a minimal illustrative sketch follows this answer).
- Multi-modality technologies – Multi-modal technologies will combine gaze, gesture, emotion recognition and more to further humanize automotive assistants and in-car experiences.
Looking toward the future, the Cerence Drive platform will continue to bring together voice, touch, gesture, emotion, and gaze innovations to create deeper connections between drivers, their cars and the digital world around them.
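As an illustration of the voice-triggered caring modes described above, here is a minimal, hypothetical sketch that maps a driver’s stated state to coordinated adjustments across several car systems. The states, systems, and settings below are illustrative assumptions, not an actual Cerence feature specification.

```python
# Hypothetical sketch of a voice-triggered "caring mode": map a driver's stated
# emotional or cognitive state to coordinated adjustments of several car systems.
# States, systems, and settings are illustrative assumptions only.

CARING_MODES = {
    "tired": {
        "climate": "cool airflow toward driver",
        "lighting": "brighter ambient lighting",
        "navigation": "suggest nearby rest stops",
    },
    "stressed": {
        "climate": "gentle, quiet airflow",
        "lighting": "soft, warm ambient lighting",
        "navigation": "prefer low-traffic routes",
    },
}

def detect_state(utterance: str) -> str | None:
    """Crude keyword spotting standing in for natural-language understanding."""
    text = utterance.lower()
    for state in CARING_MODES:
        if state in text:
            return state
    return None

def apply_caring_mode(utterance: str) -> None:
    state = detect_state(utterance)
    if state is None:
        print("No caring mode triggered.")
        return
    print(f"Switching to '{state}' mode:")
    for system, setting in CARING_MODES[state].items():
        print(f"  {system}: {setting}")

if __name__ == "__main__":
    apply_caring_mode("I'm feeling really tired today")
```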
AG: What are the other products in the pipeline that you think will evolve into major drivers of your automotive business in the future? What new opportunities are you pursuing?
EM: There are many exciting products in the pipeline that have the potential to drive our auto business in the future; one of them is the set of solutions we are developing for the button-free car.
The button-free car will allow users to effectively control the bulk of the car’s features seamlessly by voice. As an example, this technology will allow users to say, “open the window,” as well as more specifically, “open the window halfway,” or “open the window a little more.”
Additionally, drivers will be able to interact with their vehicles without a wake-up word or a button press. For example, a driver can simply say, “find parking,” and the car will listen and help find a place to park while keeping the driver’s preferences, like payment method, in mind.
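To ground the button-free window example, here is a minimal, hypothetical sketch of how absolute commands (“open the window halfway”) and relative commands (“open the window a little more”) could be resolved against the window’s current position. The phrases, step size, and the WindowController class are illustrative assumptions, not an actual Cerence API.

```python
# Hypothetical sketch of resolving button-free window commands against state.
# Phrases, the 0.2 step size, and WindowController are illustrative assumptions.

class WindowController:
    def __init__(self) -> None:
        self.position = 0.0  # 0.0 = fully closed, 1.0 = fully open

    def handle(self, utterance: str) -> float:
        text = utterance.lower()
        if "halfway" in text:
            self.position = 0.5                              # absolute target
        elif "a little more" in text:
            self.position = min(1.0, self.position + 0.2)    # relative adjustment
        elif "close" in text:
            self.position = 0.0
        elif "open" in text:
            self.position = 1.0
        return self.position

if __name__ == "__main__":
    window = WindowController()
    for command in ["open the window halfway", "open the window a little more"]:
        print(f"{command!r} -> window position {window.handle(command):.1f}")
```

The key point the sketch illustrates is that relative commands only make sense if the assistant tracks the current state of the feature it controls.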
AG: Recently you joined the voice interoperability initiative with Amazon, Microsoft and other voice technology players. Could you tell us how this benefits Cerence’s services?
EM: The Voice Interoperability Initiative led by Amazon supports our mission to provide the flexibility that consumers want. It’s no secret that consumers are tied to more than one tech ecosystem. For instance, maybe they use Apple for their mobile device, Google for search, Microsoft for work applications, and Amazon for shopping. They want their car to be an extension of their digital life, and that’s where interoperability comes in. Our goal is to ensure that we provide our automaker customers and their drivers with choices and flexibility through multiple, interoperable voice services.
AG: With just a couple of months to go, what can we expect from Cerence at the Consumer Electronics Show (CES) 2020?
EM: I can’t go into too many details just yet, but our focus will be on showing enhanced solutions and technologies for the button-free car of the future that I described earlier. We’ll also be showcasing the latest updates to our AI-based automotive voice assistants, including:
- An immersive demo experience in a 220-degree theater that will showcase the latest in multi-modal, natural interaction with the in-car assistant, as well as new, advanced solutions like Emergency Vehicle Detection.
- An autonomous vehicle experience, which will showcase the role of voice and multi-modal interaction in shared, electric and autonomous vehicles of the future.
AG: Finally, what do you see Cerence accomplishing in the next couple of years? How would you define success for Cerence?
EM: It is our goal to deliver a fully personalized, safer and more enjoyable experience for every driver and passenger. In the next couple of years, we will continue integrating the latest technologies, like AI, augmented reality, IoT and voice biometrics, to fully deliver on this goal and create a model that sets the standard for all human-tech interaction.