From May 18th to 20th, I was super lucky to have the opportunity to attend Google I/O.
This year was the 10th anniversary of Google’s annual conference, and it brought a change of venue: instead of the usual Moscone Center in San Francisco, it was outdoors at the Shoreline Amphitheatre right by the Google campus in Mountain View, CA.
The outdoor venue certainly gave the event a more unique, festival-like atmosphere than last year's, as well as a higher rate of sunburn per capita. :]
But of course, it’s not all about the venue. A ton of Google I/O 2016 announcements and reveals aimed at us Android developers were at the heart of the conference. These announcements covered all sorts of areas of Android development from Android N to Android Studio to virtual reality to Firebase.
In this article, I will take you on a quick tour of the newest and shiniest of these announced Android goodies and let you know where you can get a closer look.
Here we go!
Android N

Let’s start with the basics: the SDK. During the I/O keynote, Google announced the latest release of the newest version of the Android operating system: Android N Developer Preview 3!
A significant portion of the information presented on Android N included previously-announced features. However, DP3 also contains plenty of new features, APIs, and (of course) bug fixes. :]
One of these new APIs is the FrameMetricsListener API, which allows you to measure UI rendering performance at a per-frame granularity and collect data for a particular user interaction. While you could previously gather similar data with adb shell dumpsys gfxinfo framestats, the FrameMetricsListener API allows you to perform measurements from within the app itself and is not limited to the 120-frame window of the adb command.
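To give a flavor of the API, here is a minimal sketch of collecting per-frame metrics inside an activity (requires API 24; the activity name and logging threshold are my own, but the Window and FrameMetrics calls are the real framework API):

```java
import android.app.Activity;
import android.os.Bundle;
import android.os.Handler;
import android.os.HandlerThread;
import android.util.Log;
import android.view.FrameMetrics;
import android.view.Window;

public class MetricsActivity extends Activity {

    private HandlerThread metricsThread;
    private Window.OnFrameMetricsAvailableListener listener;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        listener = (window, frameMetrics, dropCount) -> {
            // Copy the metrics, since the passed-in instance may be reused.
            FrameMetrics copy = new FrameMetrics(frameMetrics);
            long totalNanos = copy.getMetric(FrameMetrics.TOTAL_DURATION);
            if (totalNanos > 16_000_000L) { // longer than one 60fps frame
                Log.w("FrameStats", "Janky frame: " + (totalNanos / 1_000_000) + "ms");
            }
        };

        // Deliver callbacks on a background thread to keep the UI thread free.
        metricsThread = new HandlerThread("frame-metrics");
        metricsThread.start();
        getWindow().addOnFrameMetricsAvailableListener(
                listener, new Handler(metricsThread.getLooper()));
    }

    @Override
    protected void onDestroy() {
        getWindow().removeOnFrameMetricsAvailableListener(listener);
        metricsThread.quitSafely();
        super.onDestroy();
    }
}
```

You would register the listener when the interaction you care about starts and remove it when the interaction ends; here it simply spans the activity's lifetime.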
For anyone interested in developing virtual reality applications, DP3 introduces a new VR Mode that is specifically for high-quality VR apps and provides access to an exclusive VR-specific CPU core, single buffer rendering (which speeds up display drawing), increased sensor performance, and stereo notifications. Overall, apps running in VR Mode should run with high performance as well as reduced “motion-to-photon” latency (the delay between user head movement and screen updates that can induce motion sickness).
VR Mode is not just a new Android N feature, though: it is part of a new virtual reality platform that was one of the big keynote announcements.
Virtual Reality: Daydream
This new VR platform powered by Android is called Daydream. While the previous Cardboard platform allowed any phone running Android 4.1 and above to run VR experiences, Daydream focuses on bringing high-quality, next-level VR to users.
Daydream consists of both new software and new hardware components. On the software side is Android N’s VR Mode that tunes the platform for optimal rendering and interaction. On the hardware side is a VR headset reference design, a new controller, and “Daydream-ready” smartphone specifications.
The controller, which might remind some of you Nintendo fans of a Wiimote, is small but packed with several sensors, providing users three degrees of freedom to move and gesture within VR apps. While the controller (and, really, any of the Daydream hardware) is not yet available, there is a controller emulator app available as part of the Daydream Development Kit.
The headset reference design outlines a headset geared towards long-term use, though no concrete examples are available just yet. The high-end smartphone specifications follow Daydream’s push toward higher-quality, higher-performance VR experiences, but it is important to note that Google’s VR lead, Clay Bavor, has stated that most, if not all, currently available smartphones are not “Daydream-ready”.
Daydream itself will be out in Fall 2016. However, if you are itching to start building those next-gen VR apps, you can get started with a Daydream Development Kit if you have a Nexus 6P, a second phone running KitKat or above, and a Google Cardboard (or other VR viewer).
Fair warning: You will need to install DP3 on the Nexus 6P. Also (as hinted above) the 6P is not actually “Daydream-ready”, so you might experience throttled CPU and GPU performance depending on your app’s workload.
Android Wear 2.0
A big announcement for Android developers of the Wear variety is Android Wear 2.0, a huge update and re-work of many of the Wear foundations both from a developer and a user perspective.
The most significant change in Android Wear 2.0 is the introduction of standalone apps. Apps maintain full functionality even if a user’s phone is far away or turned off, and they have direct network access to the cloud. Rather than being embedded inside phone apps, Wear apps will be delivered as independent APKs, which allows developers to update a Wear app independently of its matching phone app. This independence also means new ways for users to authenticate without a phone.
Android Wear 2.0 adds keyboard input, which recognizes both normal and gesture typing, as well as Smart Reply (Inbox’s automatic responses generated by machine learning from previously received messages).
I did not know this before hearing about Android Wear 2.0’s new Complications API, but “complications” is in fact a horology term that describes “any feature in a timepiece beyond the simple display of hours and minutes” (Wikipedia). The Complications API allows watch face developers to connect with data provider apps to display extra information.
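Here is a rough sketch of what a complication data provider might look like under the Wear 2.0 preview support library (the service name and step-count value are hypothetical, and since this is a developer preview the API surface may still change):

```java
import android.support.wearable.complications.ComplicationData;
import android.support.wearable.complications.ComplicationManager;
import android.support.wearable.complications.ComplicationProviderService;
import android.support.wearable.complications.ComplicationText;

// Hypothetical provider that feeds a short text complication, e.g. a step count.
public class StepCountProviderService extends ComplicationProviderService {

    @Override
    public void onComplicationUpdate(int complicationId, int dataType,
                                     ComplicationManager manager) {
        // Only respond to the types this provider declared support for.
        if (dataType == ComplicationData.TYPE_SHORT_TEXT) {
            ComplicationData data =
                    new ComplicationData.Builder(ComplicationData.TYPE_SHORT_TEXT)
                            .setShortText(ComplicationText.plainText("8,024"))
                            .build();
            manager.updateComplicationData(complicationId, data);
        }
    }
}
```

A watch face would then request updates from this provider and render the returned data in the complication slot it manages.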
Android Wear gets some more Material Design love with two new navigation components:
- A navigation drawer which is analogous to the phone navigation drawer, allowing users to jump to different areas within an app.
- An action drawer which lives at the bottom of the screen and provides context-specific actions.
Check out the Material Design for Wearables documentation for good practices in wearable design. For example, something I learned during one of I/O’s Wear sessions is to use darker color palettes for Wear apps because:
- Darker colors make watch screens less obtrusive in social environments.
- OLED displays use less power in rendering darker colors.
Android Wear 2.0 contains both a visual update for notifications and new expanded notifications allowing you to provide additional content and actions to your users.
Keep an eye on the Material Design for Wearables for more information on notification design and best practices, and keep in mind that as Android Wear 2.0 is still just a developer preview, these guidelines may change.
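For context, the expanded notifications build on the familiar support-library notification APIs. A minimal sketch of a notification carrying extra expanded content and an action might look like this (the strings and the PendingIntent are hypothetical):

```java
import android.app.Notification;
import android.app.PendingIntent;
import android.content.Context;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;

public class WearNotifier {

    private static final int NOTIFICATION_ID = 1;

    public static void showMessage(Context context, PendingIntent replyIntent) {
        // An action the user can trigger from the expanded notification.
        NotificationCompat.Action reply = new NotificationCompat.Action.Builder(
                android.R.drawable.ic_menu_send, "Reply", replyIntent).build();

        Notification notification = new NotificationCompat.Builder(context)
                .setSmallIcon(android.R.drawable.ic_dialog_email)
                .setContentTitle("New message")
                .setContentText("Lunch at noon?")
                // BigTextStyle supplies the additional expanded content.
                .setStyle(new NotificationCompat.BigTextStyle()
                        .bigText("Lunch at noon? I found a great taco place near Shoreline."))
                .addAction(reply)
                .build();

        NotificationManagerCompat.from(context).notify(NOTIFICATION_ID, notification);
    }
}
```

On Wear 2.0, the system renders this content in the new visual style without any Wear-specific code on your part.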
Firebase

Firebase is a Backend-as-a-Service (BaaS) and cloud services company that launched in 2012 with a realtime cloud database. Google acquired Firebase in 2014, and in the time since, Firebase has grown from 110,000 users to over 450,000.
Google I/O 2016 marked another huge leap for Firebase as it has expanded to become a unified app development platform with more features and services. Quite a few of these services are actually familiar Google services that have been integrated into and rebranded with Firebase.
At the core of the new Firebase is a new analytics solution, which gives you a single dashboard for analyzing user behavior and demographics as well as your advertising and marketing. Furthermore, the intention is that these analytics will be used in conjunction with other Firebase tools and features.
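As a taste of the analytics API, here is a minimal sketch of logging a custom event from an activity (the event and parameter names here are my own invention; the FirebaseAnalytics calls are the real API):

```java
import android.app.Activity;
import android.os.Bundle;
import com.google.firebase.analytics.FirebaseAnalytics;

public class RecapActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Log a custom "article_read" event with one string parameter.
        FirebaseAnalytics analytics = FirebaseAnalytics.getInstance(this);
        Bundle params = new Bundle();
        params.putString("article_id", "io-2016-recap");
        analytics.logEvent("article_read", params);
    }
}
```

Events like this then show up in the Firebase dashboard and can feed other features such as Remote Config and Notifications audience targeting.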
Before the announcement at Google I/O, Firebase offered a realtime database, user authentication, and hosting to developers. Now it has added the following offerings:
- Cloud Messaging: an integration and rebranding of Google Cloud Messaging to Firebase Cloud Messaging (FCM).
- Storage: backed by Google Cloud and allows developers to securely upload and download large files such as images and video.
- Remote Config: allows developers to perform “instantly-updatable” changes to app behavior through customization variables.
- Test Lab: Google Cloud Test lab integrated as Firebase Test Lab for Android.
- Crash Reporting: as the name implies, generates reports on app crashes to help in finding and fixing problems. For those of us currently using Crashlytics, it will be interesting to see how this product evolves.
- Notifications: provides a console for pushing notifications to users by leveraging FCM APIs.
- App Indexing: rebranding of Google App Indexing used to include app content in Google searches.
- Dynamic Links: “smart URLs” that allow deep linking into app content and personalizing content; they also are meant to survive if a user has to first install the application before proceeding to the link’s target.
- Invites: provides user sharing of content or referral codes via SMS or email.
- AdWords: Google’s long-running advertising service is now integrated with Firebase as well.
- AdMob: Like AdWords, AdMob, Google’s mobile advertising platform, also is now integrated with Firebase.
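As one concrete example from the list above, receiving a Cloud Messaging data payload comes down to extending FirebaseMessagingService (the service must also be registered in your manifest; the "score" key below is hypothetical):

```java
import android.util.Log;
import com.google.firebase.messaging.FirebaseMessagingService;
import com.google.firebase.messaging.RemoteMessage;

public class MyMessagingService extends FirebaseMessagingService {

    @Override
    public void onMessageReceived(RemoteMessage remoteMessage) {
        // Data payloads arrive here; notification payloads are handled by
        // the system when the app is in the background.
        String score = remoteMessage.getData().get("score");
        Log.d("FCM", "From: " + remoteMessage.getFrom() + ", score: " + score);
    }
}
```

From here you might update local state, post a notification, or sync with the realtime database.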
As you can see, Firebase has essentially become an “All the Things” application platform. Note, however, that developers can choose which of these features to include, so your app does not have to be as big as “All the Things” to take advantage of Firebase.
For the full scoop, head over to firebase.google.com.
Android Instant Apps

For me, one of the most intriguing announcements from both a UX and a development perspective is the new Android Instant Apps project. Instant Apps aims to ease the process of bringing new users into your app.
As noted in the Android Developers Blog introductory post on Instant Apps, if you think about the web, it takes a single click to take new users to a new website. On mobile platforms, your users must explicitly download your entire app to be able to experience it.
Instant Apps aims to change this and bring users more easily into your app by reducing what Google calls “install friction.” Via Instant Apps, a user could tap once on a deep link to specific content within your app and only those components necessary to display that content would be downloaded and launched.
Users could then have the option of installing your entire app from the downloaded components, or they could just finish viewing and/or interacting with your content and leave without any residue of your application.
For now, most of us developers have to wait to utilize Instant Apps, but we do know that:
- Instant Apps are still native Android apps.
- Instant Apps will be properly integrated with payment and authentication.
- Adding Instant Apps functionality is done through upgrading existing apps: it does not require a separate app.
- Instant Apps will be backwards compatible all the way back to Jelly Bean (4.1).
Google says that “it can take less than a day” to upgrade apps to Instant Apps, but that comes with a “your mileage may vary” caveat. The relative ease of upgrading seems to depend largely on the structure and modularity of your existing app. I am looking forward to seeing what else Google has to say about Instant Apps, but there is still much to figure out in terms of how Instant Apps will jibe with more complex applications.
For now we will have to wait and see, and you can keep an eye on g.co/InstantApps for further updates.
Android Studio 2.2 Preview

Okay, so a lot of the Google I/O 2016 announcements I have covered so far are still off in the far future, like Instant Apps. Some are particular to specific types of apps, like Daydream and Android Wear 2.0. However, one thing every Android developer can cheer about is the bunch of new features in the Android Studio 2.2 Preview!
A few months ago, I took a look at a slew of new features in Android Studio 2.0. Not content to rest on its laurels, the Android Tools team announced a whole bunch of new features and tools at Google I/O to take our productivity up to 11. This was easily my favorite session, and judging from the cheers in the audience, I don’t think I’m alone in that. Here’s what had us in the audience applauding:
Android Studio 2.2 introduces a whole new way of building layouts with the new Layout Editor, which includes drag-and-drop addition of widgets to a layout, a new blueprint mode that details the spacing and arrangement of widgets, a properties panel for easily changing widget properties, and the ability to edit menus and system preferences files.
To accompany the powerful new visual editor, there is also the brand new ConstraintLayout. While it may sound similar to RelativeLayout, ConstraintLayout helps you reduce the number of nested layouts that you need to implement your UI while still being high-performing and adaptive to different screen sizes and orientations. It was designed to work with the Layout Editor and its constraints inference engine, which automatically generates constraints for you using machine learning. ConstraintLayout is distributed as a support library and is backwards compatible back to API 9.
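Here is a rough sketch of a simple ConstraintLayout in XML, with two views positioned entirely by constraints rather than nesting. Since the library is an early preview, the exact attribute names may still change:

```xml
<!-- Hypothetical layout: a title with a button constrained below it. -->
<android.support.constraint.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextView
        android:id="@+id/title"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello, I/O!"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

    <Button
        android:id="@+id/action"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Tap me"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintTop_toBottomOf="@id/title" />

</android.support.constraint.ConstraintLayout>
```

In practice you would rarely write this by hand: dragging widgets in the Layout Editor generates constraints like these for you.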
A third new layout design tool is the Layout Inspector which allows you to drill down into your view hierarchy and examine the attributes of each view. Furthermore, Android Studio can now give you information on the default font size of a view as inherited from the theme of the layout.
Android Studio 2.2 has added quite a few new tools and features to improve development productivity.
- Improved code analysis and quality checks that include new lint checks and code checks for Java 8.
- A code sample browser that provides sample snippets and sample code for the symbol that you currently have selected in Android Studio.
- Permission inference and code generation to automatically request detected required permissions.
- Highlighting and removal of unused resources within Android Studio.
- New annotations, including ones for resource type checking and for keeping specific methods safe from ProGuard.
- A Firebase plugin that assists you when adding Firebase to your app.
- Improved support for C++.
- The latest updates from IntelliJ 2016.1.
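To illustrate the annotations item from the list above, here is a small sketch using the support annotations library (the class and method names are hypothetical):

```java
import android.support.annotation.DrawableRes;
import android.support.annotation.Keep;
import android.widget.ImageView;

public class IconHelper {

    // Lint will flag any caller that passes a non-drawable resource id here.
    public void setIcon(ImageView view, @DrawableRes int iconRes) {
        view.setImageResource(iconRes);
    }

    // @Keep asks ProGuard not to strip or rename this method, which is
    // handy when it is only ever invoked via reflection.
    @Keep
    public void calledViaReflection() {
        // no-op in this sketch
    }
}
```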
Regarding builds, the Tools team has made improvements to the Jack compiler, including support for annotations and reduced times for incremental builds. There is also now a merged manifest viewer that allows you to analyze the dependencies and permissions in your app across your various build variants.
Two of my favorite new Android Studio announcements were the Espresso test recorder and the APK analyzer.
The Espresso Test Recorder aims to make generating UI tests incredibly simple. When the Test Recorder is active and you are interacting with your app, Espresso test code is generated automatically based on your interactions. This test code is reusable and editable.
Getting to watch the code being generated as the Tools team moved through an app was pretty amazing and fun. The Test Recorder eliminates a whole bunch of excuses for not getting started with UI testing.
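The output resembles a hand-written Espresso test. This sketch shows the general shape of what the recorder produces (the activity, view ids, and strings are hypothetical; the Espresso and JUnit APIs are real):

```java
import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class MainActivityTest {

    @Rule
    public ActivityTestRule<MainActivity> activityRule =
            new ActivityTestRule<>(MainActivity.class);

    @Test
    public void tappingSubmitShowsConfirmation() {
        // Recorded interaction: tap the submit button...
        onView(withId(R.id.submit_button)).perform(click());
        // ...and assert that the confirmation text appears.
        onView(withText("Thanks!")).check(matches(isDisplayed()));
    }
}
```

Because the generated code is plain Espresso, you can edit, refactor, and extend it like any other test.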
By using the APK Analyzer, you can view a breakdown of an APK’s size by raw file size as well as download size. The APK Analyzer is also a great tool for keeping your app under the dex method limit, with detailed method and reference counts. You can even drill down into the app’s resources and see their compiled versions.
But wait, there’s more…
There were just so many improvements and new features that it would take a while to go through each and every one. For the full scoop, check out the What’s new in Android development tools session from Google I/O and read up on Android Studio 2.2 at the Android Developers Blog.
Where To Go From Here?
Google I/O 2016 definitely had its share of party and spectacle, but what should really get Android developers excited is the sheer number of new platforms and new tools that improve both the quality and functionality of apps as well as the productivity of developers. There really is more content than I can cover here, but luckily all of the Google I/O sessions were recorded and can be viewed on the Android Developers YouTube channel.
I hope you enjoyed this recap and are as excited as I am about all the incredible Android news from Google I/O this year!