This year 169 Labs was invited as a Google Partner to attend Google I/O. Google I/O is an annual conference where Google presents new features, services, and products to the public. The three-day event takes place next to the Google headquarters in Mountain View, California, and is attended by 5,000 participants from all over the world. Sessions spread over the course of the event cover Android, VR/AR, Firebase, web technology, AI and machine learning, hardware, and technology in general. Our main focus was the news around the Google Assistant and the voice technology Google provides.
The I/O Keynote 2019
The biggest announcements are made during the opening keynote. We will cover the ones with the biggest impact on voice.
Google Duplex web enhancements
Last year Google Duplex made headlines when the Assistant made a restaurant reservation via telephone. This year Google added the capability for the Assistant to fill out online forms on behalf of users with their data. In the demonstration, a rental car booking was completed without the user having to type in any data:
Google claims that this functionality works without the car rental website having to add anything to its site. I think this is a real assistant at work, freeing users from repetitive tasks. Let's see how this works in real life.
Google Assistant moves Natural Language Understanding (NLU) into Devices
In an interesting approach, Google moved the AI for NLU from the cloud into the device itself. They shrunk the voice model from 100 GB to 0.5 GB while preserving the same accuracy in understanding human speech. Over the last years, with the rise of cloud computing, clients got thinner and thinner, and all the logic and heavy lifting was put into the cloud. This new approach is a paradigm shift and puts the processing back onto the devices themselves. The clear advantage is sheer speed (processing got 10x faster), plus another aspect that, interestingly, Google has not mentioned yet: privacy. Since the device itself can process the user's input, there is no need to send that input over the network to the cloud for processing. Your voice (and data) does not leave the device, so your private conversations are not exposed to Google.
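Google has not published the exact techniques behind the 100 GB to 0.5 GB shrink, but a common ingredient in this kind of model compression is post-training quantization: storing weights as 8-bit integers plus a scale factor instead of 32-bit floats. The sketch below is a hypothetical illustration of that idea, not Google's actual method:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a scale factor for dequantization."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximately recover the original float32 weights."""
    return q.astype(np.float32) * scale

# A dummy weight matrix standing in for one layer of a speech model.
weights = np.random.randn(1000, 1000).astype(np.float32)
q, scale = quantize_int8(weights)

print(weights.nbytes // q.nbytes)                      # 4x smaller in memory
print(np.abs(weights - dequantize(q, scale)).max())    # small reconstruction error
```

Quantization alone gives roughly a 4x size reduction; a 200x shrink like the one Google describes would additionally require techniques such as a smaller architecture, pruning, or distillation.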
Here is a demonstration of the speed gain:
More natural conversations and multitasking
Multitasking has not been possible before with voice applications. You had to open each application, complete your task there, and continue to the next application. Google showcased a very natural way to shift from one application to another and pull photos into a message. I think that's the future of how we will interact with assistants:
https://youtu.be/lyRPyRKHO8M?t=1486

All of these features should be coming to new Pixel phones later this year.
Driving mode
Being in the car is one of the biggest use cases for a hands-free assistant. The Assistant is already available in Android Auto and Google Maps. The new driving mode feature locks down your phone while driving and tries to avoid distractions. But it still lets you do basic tasks, such as continuing a podcast where you left off, taking or declining an incoming call with your voice, or making a restaurant reservation via so-called shortcuts.
Watch the demo here:
Nest Hub Max
Google is rebranding its Google Home products under the Nest brand umbrella and is releasing a new device, the Nest Hub Max: a new version of the existing Home Hub with a high-resolution camera (with a physical switch on the back) and a bigger display.
The new camera enables Face Match (profiles are encrypted and stored on the device), letting the device know who is in the room and personalize its functionality.
Another cool feature is gesture control: you can pause music with a hand gesture instead of talking to your device.
The new model will be released later this summer for USD 229.