Google I/O 2018 Keynote Reaction

See some of the major announcements from the first day of Google I/O 2018, with a detailed summary of the three keynotes that kicked off the conference, in case you missed them. By Joe Howard.


What’s going on with Android?

Dave Burke then came on stage to discuss Android P and how it's an important first step toward putting AI and ML at the core of the Android OS.

The ML features being brought to Android P are:

  • Adaptive Battery: using ML to optimize battery life by figuring out which apps you’re likely to use.
  • Adaptive Brightness: improving auto-brightness using ML.
  • App Actions: predicting actions you may wish to take depending on things like whether your headphones are plugged in.
  • Slices: interactive snippets of app UI, laying the groundwork with search and Google Assistant.
  • ML Kit: a new set of APIs, available through Firebase, that includes image labeling, text recognition, face detection, barcode scanning, landmark recognition, and smart reply. ML Kit is cross-platform, working on both Android and iOS (see the sketch after this list).
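
To make that concrete, here's a minimal sketch of on-device text recognition with ML Kit, assuming your project includes the firebase-ml-vision dependency and standard Firebase setup. Exact class names shifted across the early releases, and the Bitmap source here is a placeholder:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// A minimal sketch: run on-device text recognition on a Bitmap.
// Assumes the firebase-ml-vision dependency and google-services setup.
fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer
    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // Each text block holds a recognized string plus its bounding box.
            result.textBlocks.forEach { block ->
                Log.d("MLKit", "Found text: ${block.text}")
            }
        }
        .addOnFailureListener { e ->
            Log.e("MLKit", "Text recognition failed", e)
        }
}
```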

Dave then introduced Android P's new gesture-based navigation and redesigned recent apps UI, along with updated system controls like the new volume control.

Sameer Samat came on to discuss in more detail how Android fits into the idea of Digital Wellbeing. The new Android Dashboard helps you understand your usage habits: you can drill down to see what you're doing, when, and how often. An App Timer lets you set limits on individual apps. Do Not Disturb gains improvements like the new Shush mode: turn your phone face-down on a table and you'll hear no sounds or vibrations except from Starred Contacts. And a new Wind Down mode works with Google Assistant, fading your phone to grayscale to help ease you into a restful sleep.

Lastly, an Android P beta was announced, available today for Pixel phones and devices from seven other manufacturers. Many of the new Android P features aim to keep your mobile phone usage from taking over your entire life while still being meaningful and useful.

Google Maps

Jen Fitzpatrick gave demos of the new For You feature in Google Maps, which uses ML to surface trending events around you, along with a match score that uses ML to tell you how well a suggestion fits your interests.

Aparna Chennapragada then gave a pretty cool demo combining the device camera and computer vision to reimagine navigation, showing digital content as AR overlays on the real world. You instantly know where you are, while still seeing the map and staying oriented. GPS alone is not accurate enough for this, so it's augmented with what Google calls a Visual Positioning System (VPS). She also showed new Google Lens features that are integrated right inside the camera app on many devices:

  • Smart Text Selection: recognize and understand words, and copy and paste from the real world into your phone.
  • Style Match: point the camera at an item to find similar-looking things.
  • Real-time Results: powered by both on-device and cloud compute.

Self-Driving Cars

The opening keynote wrapped up with a presentation by Waymo CEO John Krafcik. He discussed an Early Rider program taking place in Phoenix, AZ.

Dmitri Dolgov from Waymo then discussed how ML in self-driving cars touches perception, prediction, decision-making, and mapping. Waymo has trained on 6M miles driven on public roads and 5B miles driven in simulation. He noted that Waymo uses TensorFlow and Google TPUs, which have made training 15x more efficient. They've now moved to using simulations to train self-driving cars for difficult weather like snow.

Developer Keynote

The Developer Keynote shifts the conference from a consumer and product focus toward a discussion of how developers will create new applications using all the new technologies from Google. It's a great event for getting a sense of which new tools will be discussed at the conference.

Jason Titus took the stage to start the Developer Keynote. He first gave a shoutout to all the GDGs and GDEs around the world, and mentioned that one key goal for the Google developer support team is to make Google AI technology available to everyone: for example, letting you drop TensorFlow models directly into your apps.

Android

Stephanie Cuthbertson then came up to detail all the latest and greatest on developing for Android. The Android developer community is growing, with the number of developers using the Android IDE almost tripling in two years. She emphasized that developer feedback drives new features, like the Kotlin support announced last year; 35% of pro developers now use Kotlin, and Google is committed to the language for the long term. Stephanie walked through the current areas of focus:

  • Innovative distribution with Android App Bundles, which optimize your application's size across 99% of devices and require almost no work from developers.
  • Faster development with Android Jetpack, which includes Architecture, UI, Foundation, and Behavior components (see more below in “What’s New in Android”), plus new features like WorkManager for deferrable background tasks and the Navigation Editor for visualizing app navigation flow (see the sketch after this list).
  • Increased engagement with App Actions and Slices, interactive mini-snippets of your app.
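
For a taste of Jetpack, here's a minimal WorkManager sketch against the stable androidx.work API; UploadWorker and its task are made up for illustration:

```kotlin
import android.content.Context
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// A hypothetical worker performing some deferrable background task,
// such as uploading logs. WorkManager guarantees the work runs even
// if the app exits or the device restarts.
class UploadWorker(context: Context, params: WorkerParameters) :
    Worker(context, params) {
    override fun doWork(): Result {
        // Do the actual background work here.
        return Result.success()
    }
}

// Enqueue the work; WorkManager picks the best scheduler for the device.
fun scheduleUpload(context: Context) {
    val request = OneTimeWorkRequestBuilder<UploadWorker>().build()
    WorkManager.getInstance(context).enqueue(request)
}
```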

Stephanie then mentioned that Android Things is now 1.0 for commercial devices, and that attendees would be receiving an Android Things developer kit!
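
For a flavor of what Android Things code looks like, here's a minimal sketch that drives an LED over GPIO using the platform's PeripheralManager. The pin name "BCM6" assumes a Raspberry Pi 3; other boards use different names:

```kotlin
import com.google.android.things.pio.Gpio
import com.google.android.things.pio.PeripheralManager

// A minimal sketch: turn on an LED from an Android Things 1.0 app.
// "BCM6" is a Raspberry Pi 3 pin name; other boards differ.
fun turnOnLed() {
    val manager = PeripheralManager.getInstance()
    val led: Gpio = manager.openGpio("BCM6")
    led.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW)
    led.value = true
    // Remember to call led.close() when you're done with the pin.
}
```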

Google Assistant

Brad Abrams discussed Google Assistant Actions. There are over 1M Actions available across many categories of devices. He described a new era of conversational computing, and mentioned Dialogflow, a library for building natural, rich conversational experiences. He said you can think of an Assistant Action as a companion experience to the main features of your app.

Web and Chrome

Tal Oppenheimer came on stage to discuss the Web platform and new features in ChromeOS. She emphasized that Google’s focus is to make the platform more powerful, but at the same time make web development easier. She discussed Google’s push on Progressive Web Apps (PWAs) that have reliable performance, push notifications, and can be added to the home screen. She discussed other Web technologies like Service Worker, WebAssembly, Lighthouse 3.0, and AMP. Tal then wrapped up by announcing that ChromeOS is gaining the ability to run full Linux desktop apps, which will eventually also include Android Studio. So ChromeOS will be a one-stop platform for consuming and developing both Web and Android apps. Sweet!

Material Theming

There was a lot of discussion prior to I/O about a potential Material Design 2.0. The final name is Material Theming, as presented by Rich Fulcher. Material Theming adds flexibility to Material Design, allowing you to express your brand and provide customized experiences. You can create a unified, adaptable design system for your app, spanning color, typography, and shape across your products.

There’s a new redline viewer for dimensions, padding, and hex color values, which comes as part of two new tools:

  • Material Theme editor, a plugin for Sketch.
  • Material Gallery, with which you can review and comment on design iterations.

There are also now open source Material Components for Android, iOS, the Web, and Flutter, all supporting Material Theming.

Progress in AI

Jia Li came on to give more developer announcements related to AI. She discussed TPU 3.0 and Google’s ongoing commitment to AI hardware. She walked through Cloud Text-to-Speech, DeepMind’s WaveNet, and Dialogflow Enterprise Edition. She discussed TensorFlow.js for the web and TensorFlow Lite for mobile and Raspberry Pi. She finished up by giving more information on two new libraries:

  • Cloud AutoML, which automates the creation of ML models: for example, recognizing images unique to your application, without writing any code.
  • ML Kit, the SDK that brings Google ML to mobile developers through Firebase, including text recognition and smart reply.

Firebase

Francis Ma discussed Firebase’s goal of helping mobile developers solve key problems across the lifecycle of an app: building better apps, improving app quality, and growing your business. He mentioned that there are 1.2M active Firebase apps every month. He discussed the following Firebase technologies:

  • Fabric + Firebase: Google has brought Crashlytics into Firebase and integrated it with Google Analytics. Firebase is not just a platform for app infrastructure; it also lets you understand and improve your app.
  • ML Kit for text recognition, image labeling, face detection, barcode scanning, and landmark recognition.

He mentioned that the ML technology works both on device and in the cloud, and that you can bring in custom TensorFlow models, too: you upload a model to Google’s cloud infrastructure and can then update it without redeploying your entire app.
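
Here's a hedged sketch of what hosting a custom model looks like, using the 2018-era firebase-ml-model-interpreter API (class names changed in later releases). The model name "my_model" is a placeholder for a TensorFlow Lite model you've uploaded in the Firebase console:

```kotlin
import com.google.firebase.ml.custom.FirebaseModelInterpreter
import com.google.firebase.ml.custom.FirebaseModelManager
import com.google.firebase.ml.custom.FirebaseModelOptions
import com.google.firebase.ml.custom.model.FirebaseCloudModelSource

// A minimal sketch: register a cloud-hosted model named "my_model"
// (a placeholder) with automatic updates enabled, then obtain an
// interpreter to run inference against it.
fun setUpCustomModel(): FirebaseModelInterpreter? {
    val cloudSource = FirebaseCloudModelSource.Builder("my_model")
        .enableModelUpdates(true) // fetch new versions without an app redeploy
        .build()
    FirebaseModelManager.getInstance().registerCloudModelSource(cloudSource)

    val options = FirebaseModelOptions.Builder()
        .setCloudModelName("my_model")
        .build()
    return FirebaseModelInterpreter.getInstance(options)
}
```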

ARCore

Nathan Martz came on to discuss ARCore, which launched as 1.0 three months ago. There are amazing apps already, like one that builds a floor plan as you walk around a home. He announced a major update today, with three incredible new features:

  • Sceneform, which makes it easy to create AR applications or add AR to apps you’ve already built. The Sceneform SDK offers an expressive API with a powerful renderer and seamless support for 3D assets (see the sketch after this list).
  • Augmented Images, which let you attach AR content and experiences to physical objects in the real world, computing their 3D position in real time.
  • Cloud Anchors for ARCore, which let multiple devices create a shared understanding of the world, available on both Android and iOS.
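
Here's a minimal Sceneform sketch of tap-to-place, assuming an ArFragment in your layout and a bundled 3D asset; R.raw.model is a placeholder resource name:

```kotlin
import com.google.ar.core.HitResult
import com.google.ar.sceneform.AnchorNode
import com.google.ar.sceneform.rendering.ModelRenderable
import com.google.ar.sceneform.ux.ArFragment
import com.google.ar.sceneform.ux.TransformableNode

// A minimal sketch: when the user taps a detected plane, anchor a 3D
// model there. R.raw.model is a placeholder for an asset in your app.
fun setUpTapToPlace(arFragment: ArFragment) {
    arFragment.setOnTapArPlaneListener { hitResult: HitResult, _, _ ->
        ModelRenderable.builder()
            .setSource(arFragment.context, R.raw.model)
            .build()
            .thenAccept { renderable ->
                // Pin the model to the tapped point in the real world.
                val anchorNode = AnchorNode(hitResult.createAnchor())
                anchorNode.setParent(arFragment.arSceneView.scene)
                // TransformableNode lets the user move, scale, and rotate it.
                val node = TransformableNode(arFragment.transformationSystem)
                node.renderable = renderable
                node.setParent(anchorNode)
            }
    }
}
```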