
Google I/O 2019: Opening Keynote Key Topics and Reactions

This year's Google I/O was amazing, so we've curated a list of all the fun things the Opening Keynote featured, and what each of them means, to share with you!


In early May, at the Shoreline Amphitheatre in Mountain View, California, an amazing land of technology and developers once again opened its gates. You probably know what we're talking about — Google I/O 2019!

This year was truly amazing for Google, as we've seen in the Google I/O 2019 Keynote. If you're unfamiliar with the I/O event, it's one of the biggest conferences Google organizes. It's created for Android lovers and developers — and anyone interested in Google applications like Google Maps, Google Photos, Google Chrome and Google Assistant, or Google products like the Pixel phone and Google Nest, previously known as Google Home.

The Opening Keynote was given by a series of Googlers, most prominently Google's CEO Sundar Pichai, who explained Google's vision and development over the past year, as well as the key areas the company is trying to cover with its numerous teams and products.

Sundar Pichai kicking off the Google I/O 2019 Keynote

Maybe you missed your chance to attend Google I/O this year. Or, if you were there, you probably attended the Opening Keynote and saw the innovative Google ideas yourself. Either way, with so many announcements, you probably haven't managed to process everything — you're not alone! :]

We’ve rounded up the insights and notes from the RayWenderlich.com team members who attended Google I/O, and we’ve curated a list of all the cool and important things the Opening Keynote featured. Let’s dive in!

Life According to Google

In the Keynote, Sundar explained that there are four aspects of life that Google is trying to improve for everyone: Knowledge, Success, Health and Happiness.

A good portion of these improvements will come with the next Android version — Android Q. Google is still keeping Android Q's final name under wraps, but Sundar and others covered the set of features coming with it. Read on to see what each aspect of life means to Google and which features have been announced to improve it.

Knowledge and Success

When we talk about Google, knowledge seems pretty obvious. Looking at its biggest products — the search engine and Android — you can see that making information universally accessible has always been the company's pursuit. Google is constantly improving the ways we access data, simplifying them and speeding them up. And this year, it revealed new ways it's working to let users access valuable information without breaking a sweat.

One of the things Google is improving, as you might have guessed, is Google Assistant. Assistant holds a strong first place among voice-and-AI-powered software, built to help users operate their phones without having to type or tap. This year, Google's shown how it's improving Assistant even further — if that even seems possible!

Improving Google Assistant

The pain point of Google Assistant was that it required an “OK, Google” or “Hey, Google” key phrase before each query. Google changed that, giving us the ability to continuously ask for help and switch between apps on the fly, all with almost no latency. It achieved this by running the computation on the device, specifically for Assistant. All of this was made possible by extraordinary efforts from the Google team to shrink the machine-learning models used for Assistant from 100 gigabytes down to only half a gigabyte. Yes, you've read that right.

With a model now that small, Google managed to put it on a dedicated chip, which will be integrated into the new Pixel phones. So, unfortunately, we won't all get the opportunity to experience this amazing innovation in AI just yet. But performance and speed improvements for Assistant are promised across all devices, nonetheless.

Another cool Assistant feature Google announced is Duplex on the web. Surely you remember seeing Duplex last year, which let Assistant call various places to order food, book haircut appointments and handle similar errands — all after you simply mentioned the date and time, or the food you wanted!

This year, Google has expanded the idea to the web, allowing Assistant to navigate to websites and completely fill out forms, such as booking a rental car, after you mention just a few details, like the car brand you want and the rental dates.

Machine Learning on the Device

Google strives to bring powerful machine learning to every device, not just through its cloud models, but also through on-device computation. The aforementioned chip, built specifically for this, will give an astounding performance boost to every bit of ML that apps run on your phone. And even if you don't acquire a new Pixel phone, there are changes coming to how Google's internal models will work in the future.

For example, the Google Lens feature on Android phones relies on machine learning to give you information about the things you scan with your camera. It does this by sending the image data to the cloud and receiving a result once the cloud model returns a prediction. That works, but with software and hardware constantly advancing, a better solution is to run all of this on the device, which is what Google has been working on.

This will not only allow a huge amount of information to be processed without an internet connection, but it's also vital to keeping users' data private. The less information you send to the cloud, the more private your experience on your phone is. So you can expect more and more things to work without an internet connection, and without sharing your information with cloud models.
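Lens itself isn't something you can call from your own apps, but you can already get a feel for on-device inference today. Here's a minimal Kotlin sketch using ML Kit's on-device image labeler, assuming you've added the firebase-ml-vision dependency and its image-labeling model to your project; the prediction runs locally, so the image never leaves the phone:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Labels an image entirely on the device; no image data is sent to the cloud.
fun labelImageOnDevice(bitmap: Bitmap) {
  val image = FirebaseVisionImage.fromBitmap(bitmap)
  val labeler = FirebaseVision.getInstance().onDeviceImageLabeler

  labeler.processImage(image)
    .addOnSuccessListener { labels ->
      // Each label carries a recognized concept and a confidence score.
      labels.forEach { label ->
        println("${label.text}: ${label.confidence}")
      }
    }
    .addOnFailureListener { error ->
      println("On-device labeling failed: $error")
    }
}
```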

Another really fun feature of on-device machine learning is federated learning. Basically, each person will be able to further train the last layer of Google's ML models privately, on their own device, without sending their information to the cloud. This, in turn, will build more precise models that cover more data and use cases. After a model has been further trained, only its updates will be uploaded to the cloud, allowing the entire network of Android users to benefit from each other. It brings to mind the Borg hive, collectively learning and adapting from the knowledge gained by even the smallest drone in the hive!
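There's no public Android API for federated learning yet, so here's only a toy Kotlin sketch of the core idea, federated averaging; every name in it is hypothetical. Each device computes a weight update from its own private data, and only those updates, never the raw data, get averaged into the shared model:

```kotlin
typealias Weights = DoubleArray

// Stand-in for a real on-device training step: nudge each weight
// toward the mean of this device's private data.
fun localUpdate(global: Weights, localData: List<Double>): Weights {
  val target = localData.average()
  return DoubleArray(global.size) { i -> (target - global[i]) * 0.1 }
}

// The server only ever sees weight updates, never the users' data.
fun federatedAverage(global: Weights, updates: List<Weights>): Weights =
  DoubleArray(global.size) { i -> global[i] + updates.map { it[i] }.average() }

fun main() {
  var global: Weights = doubleArrayOf(0.0, 0.0)
  // Three devices, each holding private data that stays local.
  val devices = listOf(listOf(1.0, 2.0), listOf(3.0), listOf(2.0, 2.5))

  repeat(5) {
    val updates = devices.map { data -> localUpdate(global, data) }
    global = federatedAverage(global, updates)
  }
  println(global.toList()) // The shared model improves from everyone's data.
}
```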

Presenting the new Android Q during the Google I/O 2019 Keynote

User Privacy

You may have noticed that a good part of the machine-learning changes focus on user privacy. This has been a huge focus for Google, as these days privacy is easily lost. Most of the data for ML will be securely processed on user devices, but there are other privacy features coming to Android with the Q update. Here are a few.

Google Maps will get an incognito mode, in which your browsing and searches won't be processed by Google or sent anywhere, so your location won't be observed. You can toggle this feature from within the application any time you want. Along the same lines, there are changes coming to location permissions and how applications get to consume your location data. When an app prompts for location permission, you'll be able to choose between allowing it to use your location only while the app is in the foreground, blocking location access entirely, or always allowing access, even when the app is in the background.

This will allow applications, and users, to differentiate uses for the location data. It will also further improve security and privacy on Android.
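To see what that three-way choice means in code, here's a minimal sketch of requesting location access on Android Q, assuming the matching <uses-permission> entries are declared in the app's manifest. Background access becomes its own permission, ACCESS_BACKGROUND_LOCATION:

```kotlin
import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import android.os.Build
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

private const val LOCATION_REQUEST_CODE = 1001

fun requestLocation(activity: Activity) {
  val permissions = mutableListOf(Manifest.permission.ACCESS_FINE_LOCATION)

  if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
    // New in Android Q: background access is a separate permission,
    // which is what triggers the "Allow all the time" option.
    permissions += Manifest.permission.ACCESS_BACKGROUND_LOCATION
  }

  val notGranted = permissions.filter {
    ContextCompat.checkSelfPermission(activity, it) !=
      PackageManager.PERMISSION_GRANTED
  }

  if (notGranted.isNotEmpty()) {
    ActivityCompat.requestPermissions(
      activity, notGranted.toTypedArray(), LOCATION_REQUEST_CODE
    )
  }
}
```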

Another really powerful feature lives in the revamped Settings app in Q, which lets you control which data is processed, how long it's cached on the device and how often it gets cleared. You can also manually delete all the data processed by Google applications if you want to. With these strides, it's really clear that Google is trying to protect users and give them control over their electronic lives! :]

Health and Happiness

Technical changes regarding performance and privacy aren't the only news from Google I/O. Building technology for everyone is extremely important to Google, so many changes regarding user wellbeing and accessibility have been announced as well. Most of these changes are coming with the tenth Android version — Android Q.

Live Captions

If you suffer from any degree of hearing impairment, live captions could be a life-changing feature in the next Android update. The idea is to add captions, just like those on YouTube videos, to any video you can watch on your phone. The problem is that most videos don't come with captions, so Google had to find a way to add them on the fly, in a way that's highly performant and usable everywhere: not just on YouTube, but also in applications like Instagram, Facebook, Reddit and so on.

Once again, the answer lies in machine learning! Just like with the previously mentioned 100GB Assistant model, Google has managed to compress a model that transcribes speech into text down to 100KB in size, making it accessible to virtually any device. On top of that, the feature will live in the sound options, allowing you to use it on any video you're watching.

It's safe to say that this opens many doors for people who need hearing assistance, since they'll be able to experience currently uncaptioned videos in a whole new way! The feature will be available to everyone through the sound menu, once you hold down the volume-down button. That also means you can use it any time you can't play a video's sound out loud, like when you're riding the train or the bus.

Live Relay

Having the ability to call people using our phones, from nearly anywhere in the world, is beautiful. But it might not be so beautiful for people who cannot hear. Just like with captions, there's room for improvement to make that everyday task more accessible.

Live Relay is just that. First, it transcribes the caller's voice for the hearing-impaired user, showing the text in a message box. Then, it lets the user type their side of the conversation during the call, and the text is read aloud to the person on the other end, mimicking a fully synchronous call. Additional help comes through smart suggestions and replies, just like with emails in the Gmail app. This keeps the conversation as quick and efficient as possible and lets both parties fully understand each other while conveying their thoughts seamlessly.

Project Euphonia

One of the most emotional things we saw at Google I/O, alongside live captions, was Project Euphonia. It's an experimental project at Google that aims to make everyone understood. People who suffer from multiple sclerosis, who've had a stroke or who have lost the ability to speak, for example, often rely on very slow and unreliable systems to communicate with their loved ones and friends. Even with those systems, it's still hard for them to convey their thoughts, so Google is exploring new ways to assist these individuals with one of the core parts of being human — speech.

Whichever form of speech impairment you might suffer from, this could be another life-changing innovation. With thorough data analysis and machine-learning models, Google has made good progress, training models that understand speech that previously wasn't intelligible to the software we use every day. A really heartwarming story featured Dimitri Kanevsky, one of Google's research scientists in this area, who demonstrated the model recognizing his speech nearly perfectly, even though speech-to-text software had struggled with his voice before, due to a speech impairment brought on by losing his hearing as an infant.

Google has also made efforts to read facial expressions and simple sounds from people who have lost the ability to speak entirely, and there's progress in building models and software that let them communicate much faster than before, typing words letter by letter on an on-screen keyboard using camera-assisted eye tracking.

This is really breathtaking work, showing how technology can truly be used for collective wellbeing, and helping those who are most in need.

Presenting new technologies for easier communication during the Google I/O 2019 Keynote

Digital Wellbeing Improvements

Speaking of wellbeing, there have been some improvements to the Digital Wellbeing application, first shown in Android Pie. So far, we've had the ability to wind down, applying a grayscale filter and Do Not Disturb mode whenever we reached the time of day we wanted to relax and start getting ready for bed. We could also see which applications took up most of our screen time, and which apps sent us the most notifications. All of this helps us use our phones in a smarter way, decreasing our total phone time so we can enjoy life more.
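Digital Wellbeing itself is a closed Google app, but those screen-time numbers come from data the platform already exposes. If you're curious, here's a minimal sketch that reads the same kind of per-app usage stats with UsageStatsManager, assuming the user has granted your app usage access in Settings:

```kotlin
import android.app.usage.UsageStatsManager
import android.content.Context
import java.util.concurrent.TimeUnit

// Prints each package's foreground time over the last 24 hours.
// Requires the special PACKAGE_USAGE_STATS permission, which the user
// grants through Settings > Special app access > Usage access.
fun printScreenTime(context: Context) {
  val manager =
    context.getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager
  val end = System.currentTimeMillis()
  val start = end - TimeUnit.DAYS.toMillis(1)

  manager.queryUsageStats(UsageStatsManager.INTERVAL_DAILY, start, end)
    .sortedByDescending { it.totalTimeInForeground }
    .forEach { stats ->
      val minutes = TimeUnit.MILLISECONDS.toMinutes(stats.totalTimeInForeground)
      println("${stats.packageName}: $minutes min")
    }
}
```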

This year, Google has introduced two new features within the Digital Wellbeing app — Focus mode and Family Link.

Focus mode allows you to disable notifications and pings from distracting applications for a while, while leaving texts from family members available. You can easily choose which apps to silence so you're able to focus.

Family Link, on the other hand, allows parents to connect to their kids' accounts and set up timers and limits for applications, so that their kids don't spend too much time on the phone. However, parents can grant five minutes of “bonus time”, just in case they fall victim to said kids' puppy eyes. :]

New Pixel Phone

Obviously, the crowd cheered for the new phones like crazy! Google announced the new Pixel phones — the Pixel 3a and Pixel 3a XL. Being Pixel phones, they're expected to have an amazing camera, clean Android and plenty of software updates. On top of that, they have sleek looks and a plethora of awesome Google Assistant services.

With all that in mind, you'd also expect these phones to be rather expensive, since that sounds like a bunch of flagship features. With the 3a series, that's not the case! The starting price is only $399, with the XL version just $80 more ($479 total). The Pixel 3a comes in three colors: Just Black, Clearly White and Purple-ish. And, to top it all off, the 3a has brought back the 3.5mm headphone jack! :]

Introducing the new Pixel 3a during the Google I/O 2019 Keynote

Given that you can trade in your older Pixel phone for a price reduction, you can get the 3a for as low as $150. Some features of the higher-end variants, like dust and water protection, aren't available, hence the lower overall cost. But you still get cool perks like free original-quality Google Photos uploads, unlimited for a few years after your purchase, and three guaranteed years of Android updates.

Where to Go From Here?

There were many awesome things announced at Google I/O, and it was super fun to attend the Opening Keynote. We hope you're just as excited as we are, and that you're dying to try everything out once it's available to the public!

If you want to relive the excitement, you can watch a recording of the Keynote here. We also recapped the excitement of the first day of Google I/O in a podcast for you to listen to.

Let us know your favorite parts from Google I/O in the discussion forum below!

All screenshots included here were taken from the Google I/O 2019 Keynote, which you can view in full here.
