
Google I/O Reactions: What is New with the Google Assistant


I was very excited and honored this year to be chosen to attend the 2019 Google I/O conference at the Shoreline Amphitheater in Mountain View, California. One of the technologies I was most excited to hear about was the Google Assistant. The Google Assistant is a virtual assistant created by Google that has grown to support 19 languages in 80 countries. There are over a million Actions for the Assistant, and it is available on over a billion devices. Throughout its evolution, the Assistant has intrigued me greatly because it allows me to interact with my devices using only my voice. I am a busy mom and professional who is on the go a lot. I have many Google Home devices throughout my house and can talk to Google from every room. This has helped me in many ways to be a happier and more productive person. When I am not within range of a Google Assistant-enabled device, I find myself calling out ‘OK, Google’ in vain :].

Meeting the next generation Assistant

Enhancing speed

As I settled in for the Keynote at Google I/O, I was not disappointed by the new announcements about the Assistant. One of the biggest challenges I experience using the Google Assistant is that it needs internet connectivity to understand what I’ve said to it. This can be particularly frustrating if the request I’ve made doesn’t really require connectivity, e.g., setting a timer. Currently, processing speech for the Assistant is very complicated. It involves several machine learning models: one to map incoming sound bites into phonetic units, a second to assemble the units into words, and a third to predict the sequence of words. Together, these models take about 100 gigabytes of space and require network connectivity. Google made the groundbreaking announcement that it has been able to reduce this to half a gigabyte, small enough to store the models locally on the device. That means the Assistant can process speech even in Airplane mode, making it up to 10x faster. I was ecstatic to hear about this.

Demoing the next generation Assistant

After this exciting announcement, the Keynote segued into a demonstration of the new and faster Assistant, which was quite impressive. You can see this demonstration here. The demonstration shows the Google Assistant rapidly handling back-to-back commands, including searching for specific photos, ordering a Lyft, setting a timer, taking a photo, checking the weather and various other requests. I was impressed that “Hey, Google” only had to be said once. The Assistant was also able to navigate photos and check on a flight time while responding to a text message. The ability to multitask using the Assistant is greatly improved, and it can handle more complicated speech scenarios, such as letting the user compose and send an email. I can only imagine how different it will be to use the next generation Assistant without the need for a network round trip.

Adding more personalization

The Google Assistant will be more personalized in the future with features such as ‘Picks for You,’ which chooses recipes on a personalized basis. This utilizes a technique called ‘reference resolution,’ which allows the Assistant to understand phrases such as ‘mom’s house.’ Obviously, this would most often refer to someone’s mother’s house. However, it could also be the name of a grocery store or a restaurant. By using personalized reference resolution, the Assistant can make associations like this one.

The next generation Assistant can also set personalized reminders. As a mom, I think this will be a wonderful addition to our household: I will be able to remind my teenager of things when I am not around. Lastly, it was exciting to hear that you no longer have to say ‘Hey Google’ to stop alarms!

Enabling a new driving mode

Earlier this year, the Assistant was added to Google Maps. At I/O, it was announced that the Assistant will also work with Waze and that an enhanced ‘driving mode’ is coming. In driving mode, a dashboard will bring the most relevant activities to the forefront, displaying things such as the option to navigate to a destination for an upcoming appointment in your calendar, or podcasts you often listen to at certain times of the day during your commute. You will be able to use the Google Assistant to send texts and answer phone calls without leaving navigation.

Are you excited to try it? The next generation Assistant will appear first on the Pixel 4 which is rumored to be available in October of 2019 :].

Learning all about the Google Assistant

After hearing all the announcements in the Keynote, I was excited to see some of the new features for developers in the talks and the Google Assistant dome.

There were many great presentations on how to get started with the Google Assistant. The talks were a great overview for those who are new to the Google Assistant and broke down the different groups of individuals who may want to utilize the Assistant:

  • Content owners and web developers – Templates and markup are available to enhance search results, how-to tutorials and an FAQ feature.
  • Android app developers – App Actions and Slices.
  • Innovators in the conversational space – Conversational Actions with Interactive Canvas for building experiences on smart displays.
  • Hardware developers – The Smart Home SDK.

Note: If you’re interested in writing your own Action for the Google Assistant, I have written a tutorial on how to get started here.

Enhancing Existing Content for the Google Assistant

The talk Enhance Your Search and Assistant Presence with Structured Data went into detail for web developers about how to use structured data and announced two new types that it now supports. Structured data makes it easier for developers who have existing web content to create rich search results from that content without having to republish it across all the various platforms, which can help a developer reach a wider audience. Structured data already supported podcasts, recipes and news last year, and this year it adds support for how-to and FAQ templates. Video objects can be used with the how-to template for people who have great how-tos on YouTube, and creating the how-to template can be as simple as filling in a Google Sheet. The how-to guided experience looks great on a smart display.

This talk also demonstrated how to use the Actions on Google Simulator and Actions on Google analytics to view and test the structured markup as it is being developed. This can be a simple way to bring existing content to life in the Google Search window. This is all great information for web developers, but what about Android developers? The most exciting talk was yet to come.
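Before moving on to the Android side, here is a rough idea of what the new FAQ markup looks like. This is a minimal sketch using the standard schema.org FAQPage structure; the page, questions and answers are invented for illustration.

```html
<!-- Hypothetical FAQ page: the questions and answers are made up for this
     example, but the FAQPage/Question/Answer shape follows schema.org. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Does the app work offline?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. Your most recent content is cached on the device."
      }
    },
    {
      "@type": "Question",
      "name": "How do I reset my password?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Open Settings, choose Account, then tap Reset password."
      }
    }
  ]
}
</script>
```

Once Google has crawled markup like this, the questions and answers can surface as a rich result directly in Search.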

Utilizing the Assistant in Your Android App

The Assistant integrations available to an Android app include App Actions, Slices and Conversational Actions. Utilizing App Actions can really help expand the reach of your app. The most important reason to use App Actions is to increase user re-engagement: oftentimes, apps get buried in a long list of user-installed apps. Having an Action that lets the Assistant deep link to a specific feature in your app increases the chance of the user discovering that feature, and it makes it easier and more convenient for the user to engage with your app.

Adding Actions to an App

It doesn’t take a lot of developer effort to add Actions to an app. All a developer has to do is add an actions.xml file to the res/xml directory of the app. This file contains action blocks that represent the App Actions. Within each action block there can be one or more fulfillment mechanisms that map the action to a fulfillment intent, and the fulfillment can define parameters that are extracted from the user’s request. Google uses its own natural language processing to match requests to the appropriate actions, so the developer does not have to worry about that, and the intents can be built using the Dialogflow console.
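As a rough illustration, an actions.xml file might look something like the sketch below. The built-in intent name, URL template and parameter mapping here are assumptions chosen for the example, not taken from the talk.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/actions.xml: illustrative sketch only. The intent, URL template
     and parameter names below are assumptions made for this example. -->
<actions>
    <!-- Map the START_EXERCISE built-in intent to a deep link in the app. -->
    <action intent="actions.intent.START_EXERCISE">
        <fulfillment urlTemplate="https://example.com/run{?exerciseType}">
            <!-- Google's NLP extracts the exercise name from the user's request
                 and fills it into the URL parameter. -->
            <parameter-mapping
                intentParameter="exercise.name"
                urlParameter="exerciseType" />
        </fulfillment>
    </action>
</actions>
```

The app’s deep-link handler then receives the filled-in URL and routes the user straight to the matching feature.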

Slicing the slices

Slices are the successor to app widgets in Android. By making a small change in the actions.xml file to designate that fulfillment be carried out via a Slice, the fulfillment is shown directly in the Assistant. Slices are essentially the visual representation and enhancement of an app feature: they display rich, dynamic and interactive content.
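To give a feel for what backs a Slice on the app side, here is a minimal sketch of a SliceProvider written with the AndroidX Slice builders. The provider class, activity, drawable and text are hypothetical names invented for this example.

```kotlin
import android.app.PendingIntent
import android.content.Intent
import android.net.Uri
import androidx.core.graphics.drawable.IconCompat
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder
import androidx.slice.builders.SliceAction
import androidx.slice.builders.list
import androidx.slice.builders.row

// Hypothetical provider that exposes today's run stats as a Slice.
class RunStatsSliceProvider : SliceProvider() {

  override fun onCreateSliceProvider(): Boolean = true

  override fun onBindSlice(sliceUri: Uri): Slice? {
    val ctx = context ?: return null

    // Tapping the Slice deep links into a (hypothetical) activity in the app.
    val pendingIntent = PendingIntent.getActivity(
        ctx, 0,
        Intent(ctx, RunStatsActivity::class.java),
        PendingIntent.FLAG_UPDATE_CURRENT
    )
    val openApp = SliceAction.create(
        pendingIntent,
        IconCompat.createWithResource(ctx, R.drawable.ic_run),
        ListBuilder.ICON_IMAGE,
        "Open run stats"
    )

    // Build a single-row Slice that the Assistant can render inline.
    return list(ctx, sliceUri, ListBuilder.INFINITY) {
      row {
        setTitle("Today's run")
        setSubtitle("5.2 km · 320 calories")
        setPrimaryAction(openApp)
      }
    }
  }
}
```

The provider is also declared in the app’s manifest, like any content provider, so the Assistant can resolve the Slice URI.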

Finding ‘toothbrush’ moments

There were many talks about conversational design for good Actions. One of the terms I heard often was ‘toothbrush’ moments: features a user will reach for regularly, like brushing their teeth. The point of adding Actions to an app is not to create a voice action for every feature; rather, it is best to find those features the user will want to reach for when near a Google Assistant-enabled device. ‘Start a run’ is a good example of a toothbrush moment, as is ‘How many calories have I burned?’ This was demonstrated during the talk with the Nike Run Club app.

Conversing through Actions

One limitation of App Actions and Slices is that they are only available on devices where the app is installed. Because the Assistant runs on many devices, some of which are not even Android based, Conversational Actions exist as a universal option across all of them. Many talks focused on building Conversational Actions, including one called Designing Quality Actions for the Google Assistant.

Now that these talks had explained what was available for the Android developer, I was curious about building games with Interactive Canvas.

Building Interactive Experiences with Interactive Canvas

Interactive Canvas is a way to use HTML, CSS and JavaScript to build rich, interactive experiences for the Google Assistant. It can be used to build full-screen visuals and custom animations, and there was a talk specifically about creating games with Interactive Canvas. The Home Hub is a great target for these interactive Actions. Rich responses are used in conjunction with the URL of a web application to let the developer create an immersive experience. The developer uses Dialogflow to create the custom conversations the user will have with the Action, keeping complete control over the conversational flow. Then, using the Interactive Canvas API, the developer has pixel-level control over the display for games, where any HTML, CSS and JavaScript can be run.

Providing an SDK for the smart home

One of the biggest announcements at the Google I/O keynote was the Nest Hub Max. However, one of the lesser-known announcements that could have a huge impact on developers is the developer kit known as the Local Home SDK. This was demonstrated in the sandbox demos with a visual representation of toy trains. Basically, it allows Home devices like smart speakers and smart displays to send requests to third-party gadgets such as lights, thermostats and cameras over the local network instead of via the cloud. This would be great for those days when the internet is acting up!

Google has also made it easier to set up equipment such as GE smart lights using the Google Home app, which will streamline the process of setting up devices for the consumer. Google is releasing 16 new device types and three new device traits for developers of smart home Actions. It was also announced that there will be more details about the Google Assistant Connect platform later this year. This is the program that allows smart home appliance developers to easily add the Assistant to their devices at a low cost. Google says it has been working to develop products through this program with Anker, Leviton and Tile.

Where to go from here?

This was an exciting Google I/O for those interested in the Assistant. There were many talks for everyone concerned with the Assistant, including web developers, Android developers, hardware developers and, most important, us, the consuming public. For developers, there were many talks on how to build great Google Actions in a variety of contexts. If you are interested in checking out any of these talks or seeing a tour of the sandbox demos, please check out the links below!

Assistant-related talks from Google I/O 2019 can be found here.
A tour of the Google Assistant Sandbox demo tent can be found here.
