Droidcon Boston 2018 Conference Report

Attendees and speakers descended upon Boston this week for Droidcon Boston 2018. Learn what you missed and what to watch to hone your Android app development skills. By Joe Howard.


Lunch & a Lightning Talk: Background Processing

Karl Becker, Co-Founder / CTO, Third Iron

One really fun part of Droidcon Boston is hearing talks during lunch. On the first day, Karl Becker shared lessons learned from adding notifications and background processing to an app.

Some key considerations are: platform differences, battery optimizations, notification-specific issues, and targeting the latest APIs. You also need to avoid skewing your metrics, since running the app in the background may increase your analytics session count. Determine what your analytics library does specifically, and consider sending foreground and background events to different buckets.
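The foreground/background bucketing idea might look something like the following sketch. The `Analytics` class and `trackEvent` names here are made up for illustration; a real app would wire this into whatever analytics SDK it uses.

```kotlin
// Hypothetical sketch: tag each analytics event with the app state
// so background processing doesn't inflate foreground session counts.
// `Analytics` and `trackEvent` are illustrative, not a real SDK.
enum class SessionBucket { FOREGROUND, BACKGROUND }

class Analytics {
    val events = mutableListOf<Pair<SessionBucket, String>>()

    fun trackEvent(name: String, inForeground: Boolean) {
        val bucket = if (inForeground) SessionBucket.FOREGROUND
                     else SessionBucket.BACKGROUND
        events.add(bucket to name)
    }
}

fun main() {
    val analytics = Analytics()
    analytics.trackEvent("article_synced", inForeground = false)
    analytics.trackEvent("article_opened", inForeground = true)
    // Only one event lands in the background bucket.
    println(analytics.events.count { it.first == SessionBucket.BACKGROUND }) // 1
}
```

Routing by bucket like this keeps background sync traffic from looking like real user sessions in your dashboards.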

Once they rolled out the updated app with notifications, Karl and his team saw that Android notifications get tapped three times as often as notifications in their iOS app. He pointed out that even higher multiples are reported in articles online. Some possible reasons: notifications are just a lot easier to get to on Android, and they go away quickly on iOS.

Key takeaways from Karl: background processing is easy to implement but hard to test, so schedule twice the time you expect for testing. Support SDK 21 or newer if possible. Don't expect background events to happen at specific times. And notifications will be more impactful on Android than on iOS.

Practical Kotlin with a Disco Ball

Nicholas DiPatri, Principal Engineer, Comcast Corporation

In one of the first sessions after lunch on Day 1, Nicholas gave a great walkthrough of Kotlin from a Java developer's perspective, and used a disco ball with some Bee Gees music to do so! He's built an app named RoboButton in Kotlin that uses Bluetooth beacons to control nearby things with your phone, such as the aforementioned disco ball. :]

Disco Ball

The first question answered was: what is Kotlin? It's a new language from JetBrains that compiles to bytecode for the JVM, so it's fully interoperable with Java at the bytecode level.

Next question: why use it? It’s way better than Java, fully supported by Google on Android, and Android leaders in the community use it. The only real risks with moving to Kotlin are the learning curve of a new language, and that the auto-converter in Android Studio from Java to Kotlin is imperfect.

Nicholas then gave a summary of Kotlin syntax. You declare mutable properties with var, which can be re-assigned, and read-only properties with val. Properties have synthesized accessors when used from Java. It's not quite right to call a val immutable: it just can't be re-assigned. Kotlin has type inference and compile-time null checking that help you avoid runtime null pointer exceptions, along with safe-call and "Elvis" operators and smart casts for working with nullable values. For object construction and initialization, you can combine the class declaration and constructor in one line, use named arguments, and put initialization code in an init block. There is a subtle distinction between property and field in Kotlin: fields hold state, whereas properties expose state, and Kotlin provides backing fields for properties.
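A short sketch tying several of those features together — var vs. val, the Elvis operator for nullable values, named arguments, and an init block. The `Beacon` class here is made up for illustration:

```kotlin
// Illustrative class: combines declaration and constructor in one line.
class Beacon(val id: String, var nickname: String? = null) {
    // A read-only val: it can't be re-assigned, but the list it
    // references can still change, so "immutable" isn't quite right.
    val seenAt = mutableListOf<Long>()

    init {
        // The init block holds construction-time logic.
        require(id.isNotBlank()) { "id must not be blank" }
    }
}

fun describe(beacon: Beacon): String {
    // Safe navigation plus the Elvis operator (?:) supplies a
    // default when nickname is null.
    val name = beacon.nickname ?: "unnamed"
    return "${beacon.id} ($name)"
}

fun main() {
    // Named arguments make call sites self-documenting.
    val b = Beacon(id = "disco-ball", nickname = null)
    b.nickname = "party"   // var: re-assignable
    b.seenAt.add(1234L)    // val reference, mutable contents
    println(describe(b))   // disco-ball (party)
}
```

Note how the compiler tracks `nickname` as `String?`, forcing the null case to be handled at compile time rather than crashing at runtime.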

Great places to begin learning Kotlin? The excellent documentation at kotlinlang.org, including a searchable language reference. The Kotlin style guide from Jake Wharton. And the online sandbox.

Nicholas pointed out that when moving to Kotlin, large legacy projects will be hybrid Java-Kotlin projects: legacy Java can remain, and new features can be written in Kotlin. Small projects can be completely converted, as Nicholas did with RoboButton. You can use Code > Convert Java File to Kotlin File in Android Studio, but there is one downside: revision history is lost. Also, you may get some compile errors after the conversion, so you may need to sweep through and fix them.

Nicholas wrapped up his talk by discussing idiomatic Kotlin. When using Dagger for dependency injection, inject in the init block and fix any compile errors using lateinit. Perform view injection using the Kotlin Android extensions. Use decompilation to get a better understanding of what Kotlin code is doing: create bytecode and then decompile to Java. Nicholas showed an example of discovering that the Kotlin view extensions use a cached findViewById(). Use extension functions, which allow you to make local changes to a third-party API.
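Two of those idioms can be sketched in a few lines of plain Kotlin. The extension function below "extends" String as a stand-in for a third-party API class, and `ButtonController` shows lateinit deferring initialization until an inject step runs; both names are hypothetical:

```kotlin
// Extension function: add behavior to a type you don't own,
// without subclassing. String stands in for a third-party API class.
fun String.toBeaconId(): String =
    lowercase().replace(' ', '-')

// lateinit lets a non-null var be initialized later (e.g. by Dagger
// in an init block) instead of being declared nullable.
class ButtonController {
    lateinit var beaconId: String

    fun inject() {
        // Stand-in for a DI framework populating the property.
        beaconId = "Disco Ball".toBeaconId()
    }
}

fun main() {
    val controller = ButtonController()
    controller.inject()
    println(controller.beaconId) // disco-ball
}
```

Accessing a lateinit property before it's initialized throws an exception, so it trades a compile-time null check for a loud runtime failure during development.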

TensorFlow for Android Developers

Joe Birch, Senior Android Engineer, GDE, Buffer

Like many developers, Joe Birch likes to explore new things that come out. Such was the case with Joe for TensorFlow, when a friend gave him a book on the topic. He was a little scared at first, but it turned out to be not so bad. His talk was not a deep dive into machine learning, but instead was showing how to use existing machine learning models in apps and how to re-train them. Joe used the TensorFlow library itself with Python on his dev machine for the retraining, and the TensorFlow Mobile library to use the re-trained model on a device.

Joe started with some machine learning 101: gather data; clean, prep, and manipulate it; train a model based on patterns in the data; then improve. Joe discussed the differences between unsupervised learning and supervised learning, and then described the supervised learning topics of classification and regression. Some machine learning applications are photo search, smart replies in Gmail, Google Translate, and others like YouTube video recommendations.

Then Joe began a dive into TensorFlow. It's open source from Google, and they use it in their own applications. You use it to create a computational graph: a collection of nodes that perform operations and compute values, culminating in the final predicted result. These graphs are also known as neural networks, models that can learn but that need to be taught first. The result of training is a model that can be exported, then used and re-trained.
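The computational-graph idea can be pictured with a toy sketch: nodes that compute values from their inputs, evaluated up to a final result. Real TensorFlow graphs operate on tensors and learn their parameters; this made-up example uses plain Doubles purely to illustrate the node-and-edge structure:

```kotlin
// Toy computational graph: each node computes a value from its inputs.
sealed class Node {
    abstract fun eval(): Double
}
class Const(private val value: Double) : Node() {
    override fun eval() = value
}
class Add(private val a: Node, private val b: Node) : Node() {
    override fun eval() = a.eval() + b.eval()
}
class Mul(private val a: Node, private val b: Node) : Node() {
    override fun eval() = a.eval() * b.eval()
}

fun main() {
    // (2 + 3) * 4 as a graph: values flow through operation nodes
    // until the final result emerges.
    val graph = Mul(Add(Const(2.0), Const(3.0)), Const(4.0))
    println(graph.eval()) // 20.0
}
```

Training a neural network amounts to adjusting constants like these (the weights) until the graph's final output matches the labeled data.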

For the rest of the talk, Joe detailed his re-training of an open source model that classifies images to specifically recognize images of fruit, and then the use of the re-trained model in a sample app that lets you add the recognized fruit to a shopping cart.

Joe walked through the use of the web-based TensorBoard suite of visualization tools to watch the training of your model in action. TensorBoard shows things like accuracy and loss rate. Joe listed all the steps you need to perform to retrain a model and examine the results of the re-training in TensorBoard. He started the re-training with a web site that gave him 500 images of each type of fruit at different angles. To test the re-trained model, you take images outside of the training set and run them through the model. As an example, a banana was recognized at 94% probability in a few hundred milliseconds.

Before using the re-trained model, you want to optimize it using a Python script provided by TensorFlow, and also perform quantization via another Python script. Quantization normalizes values to improve compression so that the model file is smaller in your app. Joe achieved around 70% compression, from 50MB to 16MB. You want to check the model after optimization and quantization to make sure accuracy and performance were not impacted too much for your use case.

After re-training, you have a model ready for use in an app: add a TensorFlow dependency, put the model files into the app's assets folder, create a TensorFlow inference reference, feed in data from a photo or the camera as bitmap pixels, and run inference. Then handle the confidence of the prediction by setting a threshold.
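The thresholding step at the end might be sketched like this. The label names and the 0.7 cutoff are illustrative assumptions; in a real app the confidence array would come from running inference on the model:

```kotlin
// Pick a prediction only when its confidence clears a threshold.
// Labels and the 0.7 cutoff are illustrative; real confidences come
// from running inference on the re-trained model.
fun bestPrediction(
    labels: List<String>,
    confidences: FloatArray,
    threshold: Float = 0.7f
): String? {
    // Index of the highest-confidence class, or null if empty.
    val bestIndex = confidences.indices.maxByOrNull { confidences[it] } ?: return null
    return if (confidences[bestIndex] >= threshold) labels[bestIndex] else null
}

fun main() {
    val labels = listOf("apple", "banana", "orange")
    val scores = floatArrayOf(0.03f, 0.94f, 0.03f)
    println(bestPrediction(labels, scores)) // banana
}
```

Returning null below the threshold lets the app say "not sure" instead of adding the wrong fruit to the shopping cart.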