Droidcon Boston 2018 Conference Report

Attendees and speakers descended upon Boston this week for Droidcon Boston 2018. Learn what you missed and what to watch to hone your Android app development skills. By Joe Howard.

Common Poor Coding Patterns and How to Avoid Them

Alice Yuan, Software Engineer, Pinterest

Alice's talk focused on a number of problems her team has solved in the Pinterest app. She walked through each problem in detail and then described how the team arrived at a solution.

Problem #1. Views with common components. The team initially chose inheritance to share the common components, but that led to many problems in the implementation. The solution: be deliberate with inheritance and consider composition first.
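Alice didn't share Pinterest's actual code, but a minimal Kotlin sketch of the composition approach might look like this (all names here are hypothetical):

```kotlin
import android.content.Context
import android.widget.FrameLayout
import android.widget.TextView

// Hypothetical model, for illustration only.
data class User(val name: String, val isFollowing: Boolean)

// The shared component lives in one place...
class FollowBadge(context: Context) : FrameLayout(context) {
    private val label = TextView(context).also { addView(it) }

    fun bind(user: User) {
        label.text = if (user.isFollowing) "Following" else "Follow"
    }
}

// ...and any view that needs it composes it instead of extending a base class.
class ProfileCard(context: Context) : FrameLayout(context) {
    private val badge = FollowBadge(context).also { addView(it) }

    fun bind(user: User) = badge.bind(user)
}
```

Because `ProfileCard` holds a `FollowBadge` rather than inheriting from a base view, other screens can reuse the badge without being locked into a shared class hierarchy.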

Problem #2. So many bugs with the “follow” button in the app. The app has many fragments, and the team was using an event bus library as a global event queue, which becomes confusing as the events and fragments multiply. The code looks simple, but it breaks under different lifecycles and different conditions. The follow views require tight coupling, while an event bus de-couples, so an event bus does not make sense for this scenario; it's better suited to use cases that are genuinely de-coupled. The solution was the Observer pattern: they reintroduced observers into the codebase. The key takeaway: an event bus is often misused because of its simplicity. Only use it where it makes sense.
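A bare-bones sketch of the observer approach, with hypothetical names: each interested view registers directly for follow-state changes instead of listening on a global bus.

```kotlin
// Hypothetical observer interface for follow-state changes.
interface FollowStateObserver {
    fun onFollowChanged(userId: String, following: Boolean)
}

class FollowStateNotifier {
    private val observers = mutableListOf<FollowStateObserver>()

    fun register(observer: FollowStateObserver) { observers += observer }

    // Unregister in the matching lifecycle callback to avoid leaks.
    fun unregister(observer: FollowStateObserver) { observers -= observer }

    fun setFollowing(userId: String, following: Boolean) {
        // Every registered observer is notified explicitly; unlike an event
        // bus, the coupling between publisher and subscribers is visible.
        observers.forEach { it.onFollowChanged(userId, following) }
    }
}
```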

Problem #3. Do we need to send events to maintain data consistency? Why do we even need to send events? The team was caching models on a per-Fragment basis, which leads to data inconsistencies across the app, and they had model-dependent logic in the view layer. There are many other ways to introduce data inconsistency, too: global static variables, utils classes, and singletons. The solution: have a central place to handle storing and retrieving models using the Repository pattern. A repository checks the memory and disk caches, and makes network calls if needed for the freshest model. You can also use the Repository pattern with RxJava, which acts like an observable on steroids, offering more than a simple listener pattern. The key takeaway: build a central way to fetch and store models.
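Here's a rough Kotlin sketch of that layering, with hypothetical cache and API interfaces: the repository is the single place that decides where a model comes from.

```kotlin
data class Board(val id: String, val title: String)

// Hypothetical data sources; real ones would be asynchronous.
interface MemoryCache { fun get(id: String): Board?; fun put(board: Board) }
interface DiskCache { fun get(id: String): Board?; fun put(board: Board) }
interface BoardApi { fun fetch(id: String): Board }

class BoardRepository(
    private val memory: MemoryCache,
    private val disk: DiskCache,
    private val api: BoardApi
) {
    // Memory first, then disk, then network; each miss back-fills the
    // faster caches so the whole app sees one consistent copy.
    fun getBoard(id: String): Board =
        memory.get(id)
            ?: disk.get(id)?.also { memory.put(it) }
            ?: api.fetch(id).also { disk.put(it); memory.put(it) }
}
```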

Problem #4. Why is writing unit tests so difficult? Laziness: it's a lot of work to write unit tests. Also, a typical fragment has too much logic, including business logic, and you only want to unit test the business logic, while things like mocking and Robolectric can be painful to use. The solution: separate concerns using a pattern like MVVM or MVP, so classes can communicate without knowing each other's internals. Alice gave an example using MVP. Loose coupling is preferred here to make the business logic more testable; it also makes the code much cleaner and more understandable and increases the re-usability of the codebase. The key takeaway: unit testing is easier when you separate concerns. Consider MVP/MVVM/MVI, and use interfaces to abstract; you can then easily mock the interfaces with Mockito.
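For instance, a tiny MVP sketch (hypothetical names): the presenter holds the business logic and only knows the view through an interface, so a plain JVM test can mock the view with Mockito.

```kotlin
// The view interface is the seam that makes the presenter testable.
interface FollowView {
    fun showFollowing()
    fun showError(message: String)
}

class FollowPresenter(private val view: FollowView) {
    // Pure business logic: no Android classes, no Robolectric required.
    fun onFollowResult(succeeded: Boolean) {
        if (succeeded) view.showFollowing() else view.showError("Try again")
    }
}

// In a unit test:
//   val view = mock(FollowView::class.java)
//   FollowPresenter(view).onFollowResult(true)
//   verify(view).showFollowing()
```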

The overall key takeaway from Alice's talk: stay aware, so you can tell whether you're making any of these mistakes.

Lunch & a Lightning Talk: Reactive Architecture

Dan Leonardis, Director of Engineering, Viacom & LEO LLC

The second day's lunch talk was about Reactive Architecture. Dan chose to give all of his examples in Java for maximum impact.

Dan explained that Reactive Architecture is simple but not easy. The goals are to make a system responsive (with background threads), resilient (with error scenarios built-in), elastic (easy to change), and message driven (with well-defined data types). It provides an asynchronous paradigm built on streams of information.

Dan walked through a history lesson on MVC, MVP, and MVVM. MVC was never meant to be a system architecture, just a pattern for UI. MVP has essentially been killed off by three nails in its coffin: Android data binding, Rx, and ViewModel from Android Architecture Components, which helps with lifecycle issues and survives Activity recreation on rotation.

Dan then emphasized that the whole point of reactive architecture is to funnel asynchronous events to update the UI. Button clicks, scrolls, and UI filters are all events. You flatMap them into specific event types that carry the data you need from the UI, and use merge to combine them into one stream. You then use Transformers to translate events into actions for testability. See Dan Lew's blog post.

The basic flow: a UI event -> a transformer -> split the stream into event types with the publish operator -> transform each branch into an action -> merge back into one stream -> end up with an observable stream of actions. Next comes a Result, and you then use transformers to get back to the UI, too.
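A condensed sketch of that flow using RxJava 2 from Kotlin (the event and action types here are made up for illustration):

```kotlin
import io.reactivex.Observable
import io.reactivex.ObservableTransformer

// Hypothetical UI events and the actions they translate into.
sealed class UiEvent {
    data class Click(val itemId: Int) : UiEvent()
    data class Scroll(val position: Int) : UiEvent()
}

sealed class Action {
    data class LoadDetail(val itemId: Int) : Action()
    data class LoadMore(val position: Int) : Action()
}

// publish splits the shared stream by event type; each branch maps to an
// action; merge rejoins the branches into one stream of actions.
val eventsToActions = ObservableTransformer<UiEvent, Action> { events ->
    events.publish { shared ->
        Observable.merge(
            shared.ofType(UiEvent.Click::class.java)
                .map<Action> { Action.LoadDetail(it.itemId) },
            shared.ofType(UiEvent.Scroll::class.java)
                .map<Action> { Action.LoadMore(it.position) }
        )
    }
}

// Usage: uiEvents.compose(eventsToActions) yields the observable stream of
// actions, which a further transformer would turn into Results for the UI.
```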

Dan then gave a deep dive into a simple app and walked through code examples to see a Reactive Architecture in action.

ARCore for Android Developers

Kunal Khona, Android | AR Developer, Wayfair

Kunal started with a fun virtual version of himself, shown on a projected device, that delivered his introduction. His talk then introduced us to augmented reality and ARCore on Android, and showed some code examples written in C# with Unity.

Pokemon Go is the best example so far of AR affecting the real world. AR is an illusion that adds annotations to the real, physical world. It lets you escape the boundaries of a 2D screen; after all, we've been wired for thousands of years to interact with the world in 3D.

Kunal then contrasted VR and AR. VR is a virtual environment with a virtual experience; AR is the real environment with a virtual experience. In VR, the screen becomes your world, but in AR, the world becomes your screen: AR uses your device as a window into the augmented world. Mobile phones are the most common devices available, so you have to do AR on mobile if you want to reach scale.

Kunal gave some history of AR on Android. Tango used a wide-angle camera and a depth sensor. It could use motion tracking, area learning, and depth sensing to map indoor spaces with high accuracy. But it was only supported on a few phones, and Tango is dead as of March 2018.

ARCore replaces Tango. ARCore went to 1.0 stable in February 2018. It works without special hardware, with motion tracking technology using the phone camera to identify interesting points. It tracks the position of the device as it moves and builds an understanding of the real world. It’s scalable across the Android ecosystem, and currently works on a wide range of devices running Android 7.0 and above, about 100 million devices total. More phones will be supported over time. You can even run ARCore apps in the Android emulator in Android Studio 3.1 beta, with a simulated environment.

The fundamental concepts of ARCore are motion tracking, environmental understanding, and light estimation. Motion tracking creates feature points and uses them to determine changes in location. Environmental understanding begins with clusters of feature points that appear to lie on common horizontal surfaces, which are made available as planes. Light estimation lends an increased sense of realism by lighting a virtual object under the same conditions as the environment. Kunal showed a great example of a virtual lion getting scared when the lights turn off.
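As a rough illustration using ARCore's Android SDK (Kunal's demos used C# with Unity, so this Kotlin sketch is an approximation), reading the environmental understanding and light estimate each frame looks something like this:

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.LightEstimate
import com.google.ar.core.Plane
import com.google.ar.core.Session

// Assumes a configured, resumed Session; rendering is omitted.
fun onDrawFrame(session: Session, frame: Frame) {
    // Environmental understanding: feature points clustered into planes,
    // which could be rendered as placement targets.
    val planes = session.getAllTrackables(Plane::class.java)

    // Light estimation: match virtual lighting to the real scene.
    val estimate: LightEstimate = frame.lightEstimate
    if (estimate.state == LightEstimate.State.VALID) {
        val intensity = estimate.pixelIntensity
        // Feed intensity into the virtual object's shader so it dims
        // when the room does, like the lion in Kunal's demo.
    }
}
```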

ARCore does have limitations. ARCore does not use a depth sensor. It has difficulty with flat surfaces without texture. And it cannot remember session information once a session is closed.

Kunal discussed using ARCore at Wayfair. How does a consumer know furniture will look good in a room? He showed an awesome demo of placing a couch in a room. A consumer can hit a cart button and purchase the couch. Kunal described many of the possible ARCore applications: shopping, games, education, entertainment, visualizers, and information/news.

The remainder of the talk was a code example using Unity to abstract away complex ARCore and OpenGL concepts. Kunal showed motion tracking with an ARCore Session, tracking attachment points and detecting planes, and placing an object with hit testing, which transforms a 2D touch into a 3D position. He described using anchors to deal with ARCore's changing understanding of the environment as the device moves around.
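Again in ARCore's Android SDK rather than Unity (a sketch, not Wayfair's actual code), hit testing and anchoring look roughly like this:

```kotlin
import android.view.MotionEvent
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane

// Transform a 2D tap into a 3D placement, anchored so the object stays
// put as ARCore refines its understanding of the environment.
fun placeObjectAt(frame: Frame, tap: MotionEvent): Anchor? {
    for (hit in frame.hitTest(tap)) {
        val trackable = hit.trackable
        // Only accept hits that land inside a detected plane's polygon.
        if (trackable is Plane && trackable.isPoseInPolygon(hit.hitPose)) {
            // The anchor pins the virtual couch to this real-world pose.
            return hit.createAnchor()
        }
    }
    return null
}
```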