One of the new features in Android P is the support for display cutouts. A display cutout (a.k.a. notch) is a small cut-out portion of the screen that displays no UI. Generally, notches are at the top edge of the screen and hold the cameras and sensors. The very first device I discovered featuring a cutout was the Essential Phone. Then came the iPhone X and its large notch. More recently, we’ve seen this “disease” spread to a lot of other Android OEMs…
As you may have understood, I’m not a fan of cutouts in general. First, from a developer point of view, because they require quite a lot of work to deal with properly: we’ve always been used to rectangle-shaped windows1. Secondly, from a UI designer point of view, because they clutter the UI and make it directional. Finally, from a user point of view, because they are neither intelligible nor visually attractive.
Cutouts look like a temporary solution to technical impossibilities. I don’t consider them viable in the long term and can’t wait to see full edge-to-edge screens. But they help in the meantime. As a consequence, we need to deal with them, and that’s probably why Android P brings support for them. I took some time to look at the framework additions in Android P preview 1 and started to have fun with notches rather than complain about their introduction. So let’s have fun with cutouts!
The first concept I thought about was to trick the user about the actual purpose and origin of the cutout. In other words, we want to fully embrace the cutout shape in the app UI. Indeed, it would let the user think the cutout is actually part of the design rather than part of the device. Here is what it looks like:
The effect here consists of 2 main steps:
The ability to render your app content under the status bar has been around since the KitKat/Lollipop era so I won’t spend much time explaining how to do it. I strongly recommend you look at the View documentation and in particular everything related to “system UI visibility”. Chris Banes’s terrific presentation “Becoming a master window fitter” might also be extremely helpful. The only difference in Android P is the introduction of a new layoutInDisplayCutoutMode field on WindowManager.LayoutParams. This field defines how the Window is laid out when there is a display cutout. As we always want our content to be laid out under the cutout area, we simply need to initialize the mode in our Activity’s onCreate():
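The original listing did not survive extraction; based on the API described above, the initialization could look like this (a sketch, shown with the DP2 flag name):

```kotlin
// In Activity.onCreate(), after setContentView().
// LAYOUT_IN_DISPLAY_CUTOUT_MODE_ALWAYS is the DP1 name of this flag;
// DP2 renamed it to LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES.
val attributes = window.attributes
attributes.layoutInDisplayCutoutMode =
    WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES
window.attributes = attributes
```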
DP1 vs DP2: The flag LAYOUT_IN_DISPLAY_CUTOUT_MODE_ALWAYS has been renamed to LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES.
In order to draw the blue background, we need to know about the shape of the cutout. Android P preview 1 exposes a new DisplayCutout getDisplayCutout() method on WindowInsets. The DisplayCutout class provides developers with several interesting dimensions such as the safe area and the cutout region. In particular, calling getBounds() returns the Region defining the cutout. Since a Region also exposes its path definition through getBoundaryPath(), we can easily retrieve the display cutout outline stroke in a cutoutPath property:
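The code block was lost to formatting; here is a sketch of how the cutoutPath property could be populated from the window insets (DP1 API, as described above):

```kotlin
// Inside a custom View: grab the cutout outline when insets are dispatched.
private var cutoutPath: Path? = null

override fun onApplyWindowInsets(insets: WindowInsets): WindowInsets {
    // getDisplayCutout() -> getBounds() -> getBoundaryPath(), via Kotlin properties.
    cutoutPath = insets.displayCutout?.bounds?.boundaryPath
    return super.onApplyWindowInsets(insets)
}
```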
DP1 vs DP2: Region getBounds() has unfortunately been removed and a new List&lt;Rect&gt; getBoundingRects() has been added. In other words, there is no way to get the actual shape of the cutout in DP2, which makes this effect impossible :(. You can only get the bounding rectangle. On the other hand, DP2 supports multiple cutout areas.
You can then easily build the background path of your content View in its onSizeChanged callback and draw it. Most of the trick consists of using Path.Op:
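The original snippet is missing; one plausible construction (the exact geometry depends on the desired design) subtracts the cutout shape from the full content rectangle with Path.Op.DIFFERENCE:

```kotlin
private val backgroundPath = Path()
private val backgroundPaint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
    color = Color.BLUE
}

override fun onSizeChanged(w: Int, h: Int, oldw: Int, oldh: Int) {
    super.onSizeChanged(w, h, oldw, oldh)
    backgroundPath.reset()
    backgroundPath.addRect(0f, 0f, w.toFloat(), h.toFloat(), Path.Direction.CW)
    // Carve the cutout shape out of the background so it visually embraces the notch.
    cutoutPath?.let { backgroundPath.op(it, Path.Op.DIFFERENCE) }
}

override fun onDraw(canvas: Canvas) {
    canvas.drawPath(backgroundPath, backgroundPaint)
}
```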
While this looks pretty cool, a lot of improvements can be made. Indeed, the effect would look better if we took into account several device-specific particularities. First, some recent devices feature rounded corners. Tweaking the shape of the background accordingly would make the effect really shine. Similarly, this entire demo relies on the assumption that the cutout is always black. But we could imagine a device with a white cutout, and we would have to deal with it.
As you may have noticed, really leveraging the cutout is quite painful and requires actual hardware information about the device (screen corner radius, cutout color, etc.). Unfortunately, as far as I know, the Android framework doesn’t expose such information, which is rather OEM-specific.
The other concept I thought about was to create a progress indicator. Users generally expect an indicator to show their current position in content that is larger than the entire screen. The old, but still perfectly accurate, concept of “scroll bars” responds to that issue. This UI effect consists of modifying the concept a bit to simply follow the horizontal contour of the cutout area instead of a straight vertical line.
Even though the concept looks pretty simple from a UI point of view, implementing it was actually pretty tricky. Here are the main steps we need to complete in order to implement it:
The very first step of the implementation is to build the path that will support the progress indicator. In a nutshell, we need to compute an open path knowing the contour of the cutout area. The figure below shows the required transition from the cutout path (retrieved via getBounds() as explained above) to the wanted final path:
At first, I thought doing so would be easy using path ops. Unfortunately, path ops only work with closed contours. For example, if you try to add a vertical line to a horizontal one, you will get an empty path rather than a cross. Similarly, taking the difference between a disc and a line crossing it won’t split the disc into two distinct parts. Clearly, path ops were not the solution.
The second option was to analyse the path and only keep a subset of it. A nice way to do this would be to read the drawing commands, or verbs, of the Path (lineTo, cubicTo, etc.) and only keep the needed ones, potentially changing their direction. Unfortunately, Path is a pretty opaque type: it lets you add drawing commands to an existing Path but there is no way to retrieve them.
The final option I went for was to build the Path entirely on my own. The Android framework offers a handy approximate() method to flatten the Path into a series of segments. The API is pretty low-level (the result is an array of 3-float components per point) and not really Kotlin/Java friendly, but you can use Path.flatten from Android KTX to get a list of more comprehensible PathSegments. Here is the code I quickly wrote to extract only the wanted points of the path:
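The listing was stripped; a sketch of such an extraction using Android KTX’s Path.flatten could look like this (the y-threshold filtering criterion is my assumption):

```kotlin
import androidx.core.graphics.flatten

// Keep only the approximated points that belong to the cutout contour,
// ordered from left to right so a path can be rebuilt from them.
fun extractCutoutPoints(cutoutPath: Path, maxY: Float): List<PointF> =
    cutoutPath.flatten(error = 0.5f)
        .flatMap { listOf(it.start, it.end) }
        .filter { it.y <= maxY }
        .distinct()
        .sortedBy { it.x }
```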
Once you get all the approximated points, you can loop over them to build a Path made of multiple segments:
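The original loop was stripped; rebuilding it is straightforward (a sketch):

```kotlin
// Connect the ordered points with straight segments: moveTo the first
// point, then lineTo each subsequent one.
fun buildIndicatorPath(points: List<PointF>): Path = Path().apply {
    points.forEachIndexed { index, point ->
        if (index == 0) moveTo(point.x, point.y) else lineTo(point.x, point.y)
    }
}
```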
When running the code above you might notice some artifacts on the rounded portions of the cutout area. These glitches, shown below, generally appear in Skia (the rendering engine dealing with paths on Android) when drawing large strokes on a non-smooth path (i.e. not continuously differentiable) or on zero-length contours2.
In order to work around these glitches, I used several algorithms to build a Path mostly made of Bézier curves rather than segments. I started by implementing the Ramer-Douglas-Peucker algorithm. The main purpose of the algorithm is to find a similar curve with fewer points, reducing the complexity of the path. The counterpart is obviously a loss of precision. In practice, running RDP (with an epsilon of 1) on my set of 186 points reduced it to 26 points. Because it helped get rid of the zero-length segments, it almost removed the glitches entirely:
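Ramer-Douglas-Peucker is a standard algorithm; a compact recursive version (my re-implementation, not the article’s exact code) looks like this:

```kotlin
import android.graphics.PointF
import kotlin.math.abs
import kotlin.math.hypot

fun rdp(points: List<PointF>, epsilon: Float): List<PointF> {
    if (points.size < 3) return points
    val first = points.first()
    val last = points.last()
    // Find the point farthest from the segment [first, last].
    var maxDistance = 0f
    var maxIndex = 0
    for (i in 1 until points.lastIndex) {
        val d = perpendicularDistance(points[i], first, last)
        if (d > maxDistance) { maxDistance = d; maxIndex = i }
    }
    return if (maxDistance > epsilon) {
        // Keep the farthest point and simplify both halves recursively.
        rdp(points.subList(0, maxIndex + 1), epsilon).dropLast(1) +
                rdp(points.subList(maxIndex, points.size), epsilon)
    } else {
        listOf(first, last)
    }
}

private fun perpendicularDistance(p: PointF, a: PointF, b: PointF): Float {
    val dx = b.x - a.x
    val dy = b.y - a.y
    val length = hypot(dx, dy)
    return if (length == 0f) hypot(p.x - a.x, p.y - a.y)
    else abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / length
}
```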
The second technique I used to smooth the stroke was to compute a series of Bézier paths. While this is a pretty common problem in computer graphics, I didn’t know much about it. After some research on the web I found this paper dealing with cubic spline curves. Put simply, this paper explains how to compute a cubic Bézier curve (2 points + 2 control points) for each segment of the original path. After implementing the algorithm and using it to build the final path, I ended up with a pretty solid result. The stroke now appears smooth on curved portions of the shape:
If you read this article a while ago, you probably already know how to compute the current progress based on the content scroll offset. Rather than computing the actual height in pixels of the content and the position in pixels of the visible window, the idea is to rely on a framework feature. Indeed, most scrolling containers on Android (implicitly or explicitly) implement ScrollingView. The methods provided by this interface expose 3 different dimension-less values: the scroll offset, the scroll extent (the size of the visible window) and the scroll range (the size of the entire content).

The current progress (between 0 and 1) can then be computed with the formula: progress = offset / (range - extent). When using a RecyclerView as our scrolling container, we end up with the following code:
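The listing above was lost; with a RecyclerView, the progress computation could be wired like this (cutoutProgressView is a hypothetical custom view exposing a progress property):

```kotlin
recyclerView.addOnScrollListener(object : RecyclerView.OnScrollListener() {
    override fun onScrolled(rv: RecyclerView, dx: Int, dy: Int) {
        // progress = offset / (range - extent), clamped to [0, 1] by the view.
        val offset = rv.computeVerticalScrollOffset().toFloat()
        val extent = rv.computeVerticalScrollExtent()
        val range = rv.computeVerticalScrollRange()
        cutoutProgressView.progress = offset / (range - extent)
    }
})
```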
In order to draw the current progress, we need to draw a portion of the stroke. This can be done using the phase parameter of DashPathEffect. Romain Guy explained in detail how to do it in this article, but here is the simplified code:
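Here is a sketch of the dash-phase trick (indicatorPath and paint are assumed to be fields of the view):

```kotlin
var progress = 0f
    set(value) {
        field = value.coerceIn(0f, 1f)
        val length = PathMeasure(indicatorPath, false).length
        // A single dash as long as the path, followed by a gap of the same
        // length. The phase hides the part of the dash not "reached" yet:
        // at progress 0 the whole path falls in the gap, at 1 it is fully drawn.
        paint.pathEffect = DashPathEffect(
            floatArrayOf(length, length),
            length * (1f - field)
        )
        invalidate()
    }
```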
And we’re done! Whenever the RecyclerView scrolls, we refresh the progress indicator’s current progress value, forcing it to be redrawn at the given progression.
While these implementations rely on a lot of assumptions that may be wrong in the future (cutout color, cutout position, etc.), it was nice to see what kind of visual effects could be implemented around the display cutout. It was also a great opportunity to discover how UIs can deal with notches and what it implies for both developers and users. I really don’t like display cutouts as a user. But I had fun with cutouts writing this article! If you do too, feel free to show me your tricks on Twitter: @cyrilmottier
1: To be honest, this is not the first time developers have had to deal with weird screen shapes. The Moto 360 “flat tire” and most of the recent Android Wear devices featuring round screens are great examples.
2: The stroke not being anti-aliased on its top edge has nothing to do with the way we render the stroke. Indeed, the stroke is actually twice as large as the visible portion and hence renders way beyond the screen edge. This is actually due to the cutout shape. Indeed, when making a display cutout, hardware manufacturers have only two options: remove a pixel or keep it. The result is similar on rounded-corner screens and that’s why I’ve seen some anti-aliasing being done on the software side to smooth curves or corners a bit.
We’re now in 2017 and mobile apps are bigger than ever. According to Sensor Tower’s analysis of App Intelligence1, the total space required by the top 10 most installed iPhone apps in the U.S. has grown from 164 MB in May 2013 to about 1.9 GB in May 2017, a 12x increase in just four years. Unfortunately, this analysis only focuses on iPhone apps but, from my experience, Android apps have also increased in size over the past four years.
One might say this increase in app size is actually completely normal. Indeed, a lot of things changed in the past 4 years and users’ expectations increased too: devices have higher densities, apps bundle more features and provide richer experiences, etc. As a consequence, the question is not “Is the increase in file size normal?” but rather “Is the increase in file size smaller or larger than it should be?”. In other words, is the increase worth the value? The answer to that question is rather complex and subjective.
A lot of things have changed in 4 years to reduce the amount of data transmitted over the air when delivering applications as well as the disk space occupied on the device.
- Resource shrinking (shrinkResources)
- Resource configuration filtering (resConfigs)
Today, the biggest issue when looking at the size of an app is generally the large number of dependencies your application relies on. Indeed, dependencies come with resources (images, sounds, etc.), native code (.so) and so on. In particular, support libraries like support-v4 or appcompat are now bundled in almost all apps on the Play Store and come with tons of resources or code you might not be using. Even worse, this code gets duplicated in all apps.
Android developers are so used to support libraries that they probably don’t even remember how large an Android application should be. Here is what you get when creating two helloworld apps (one with appcompat and one without it) and building them with minification and resource shrinking enabled:
The helloworld binary built with appcompat is around 621KB while the one without appcompat is only 3KB (yep you read it correctly… only 3KB). One could expect both Proguard and the shrinker to get rid of the unused code and resources. Unfortunately, this is not possible. Both of these tools look at the code dependency graph at compile-time but cannot know exactly whether or not a snippet of code will be executed at runtime.
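For reference, enabling both shrinkers is a two-line change in the build script (shown here with the Gradle Kotlin DSL; the article’s projects likely used the equivalent Groovy DSL):

```kotlin
android {
    buildTypes {
        getByName("release") {
            isMinifyEnabled = true      // ProGuard code shrinking
            isShrinkResources = true    // unused resource removal
        }
    }
}
```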
Support libraries are first-class citizens in Android development and nobody can imagine developing an Android app today without them. As a consequence, getting rid of them is obviously not a solution. A possible solution would be to have something very similar to what has been done recently with downloadable fonts: download the required libraries (at the appropriate version) at the system level and share them between installed applications.
It looks like Google is thinking about such a concept as pointed in a talk about Instant Apps at Google I/O:
We’re focused on features that can enable additional binary size reduction […] Allowing commonly used libraries like appcompat to be shared between Instant Apps
Obviously, only Instant Apps are mentioned in this talk but I don’t see a reason why it couldn’t apply to installable Android apps as well.
Some other recent changes in the Android ecosystem might act as some of the first steps towards such a system-wide library sharing:

- Google now distributes its libraries through its own Maven repository (google() in the repositories DSL of your build.gradle files). This could act as the central repository from which the system would download missing libraries. That would also suggest only “Google-provided” libraries (e.g. support libraries, Play Services, etc.) could be eligible for library sharing.
- A &lt;meta-data /&gt; block in your app manifests indicating the versions of the libraries your application is based on. This could clearly inform the system which libraries should be pre-fetched at installation time.

Google hasn’t shared documentation on this yet so everything I described here might not exist. Knowing how drastically this would reduce application binary sizes, I’m really looking forward to seeing whether or not system-wide library sharing will become a reality.
1: You can learn more about this analysis reading this blog post: The Size of iPhone’s Top Apps Has Increased by 1,000% in Four Years
2: I really encourage you to discover more about how to slim down your app size watching this Google I/O session.
As an icon designer, creating an adaptive icon can be quite cumbersome. You have to deal with several constraints but it’s difficult to get a clear view of the actual look of your icons. In a recent tweet of mine, I mentioned a pretty handy web tool that you can use to get a quick preview of your launcher icons: adapticon.tooo.io
Although this tool is really nice for previewing the possible animations of your icons, it requires you to upload the images to a server and copy-paste their URLs into the appropriate fields, making the design process quite lengthy and boring. In order to quickly look at some possible renderings, I recently created an Affinity Designer template:
The template makes use of a new feature in Affinity Designer 1.5: Symbols. The left-most icon – labeled “Master” – is based on a symbol which is then duplicated to get different shapes. Any changes made to the master symbol will be directly replicated to the other copies. Symbols are marked with a solid orange border on the left.
As I always do when releasing graphic assets, I ensured the .afdesign respects a certain hygiene: made only of vector-based elements, sensibly layered, named and grouped, etc. The file has been created with Affinity Designer 1.5.5. Also note the following resources are licensed under the CC BY 3.0:
I really hope you’ll find this .afdesign useful in the process of adapting your icon to the new Android O adaptive icons.
Although search is usually one of the main entry points in a mobile app, I regret to see that a lot of mobile apps don’t implement search in a comprehensive way. Specifically, I often end up having search results presented to me but don’t always understand why. In this article, I will introduce you to search result highlights, explain why they are extremely helpful from a user point of view and show how simple it is to implement them in an Android app.
Let’s first look at the two screenshots from two common apps: Todoist (a very handy and well-designed to-do list app I use on a daily basis) on the left and the built-in Contacts app on the right. In both cases, screenshots are taken while the application is showing search results.
If you are familiar with designing mobile apps search experiences, you probably noticed the main difference between the two screenshots: search results highlights. Indeed, Todoist displays items with no styling at all while Contacts uses bold to better indicate the position of the matching search terms. This difference might seem visually minimal but highlighting search terms is actually a simple and efficient way to enhance user experience in search screens.
Search term highlights are hugely important in search experiences as they provide a lot of additional information to users:
As explained previously, highlighting search results helps users better understand search and filters in your app. But it doesn’t help improve the filtering algorithm itself. It’s up to your app to provide accurate filtering. A search result is considered accurate from a user standpoint when it clearly relates back to the query. As a consequence, most search implementations are based on a simple technique: they provide results containing 100% of the search query. Let’s consider the following list:
- camembert de normandie
- menonita
- …
What do you expect when searching for “no”? What about when typing “ta” as a query? Depending on the search strategy you may end up with different results. When matching characters, for instance, “no” matches both “camembert de [no]rmandie” and “me[no]nita” while you only get “camembert de [no]rmandie” when matching words. The word-based strategy even gives no results for “ta”. The right strategy obviously depends on the context in which your app is used. However, in general, a word-based strategy (i.e. matching the beginning of words) is the best option as it matches the user’s mental model. Indeed, when searching for a term, users tend to use “words” as the atomic text component. For example, I really don’t expect “Cyril” to be shown when I type “ri” in a search input.
Another important aspect of search is the ability to provide search results that do not exactly match the search terms. This behavior is generally called fault tolerance. For instance, one might expect “San Francisco” to be displayed when typing “San Franscisco” (notice the extra ’s’) as a query. In theory, implementing great fault tolerance involves integrating distance-based computations such as the Jaro-Winkler or Levenshtein distances. In practice, using such distances is not easy and might end up displaying inaccurate results.
However, there is a simple solution you can use to avoid user frustration while still preserving accurate results: character normalization. The first simple normalization you can ensure is case-insensitive matching. For instance, when querying “SaN FranCiscO”, you expect “San Francisco” as a result.
Another trickier character normalization is dealing with accented characters. This is not always obvious for English-speaking people but accents are all around in lots of languages. The idea is to make non-accented queries match accented data. For instance, querying “ceci” should match “[Céci]le”. At the time of writing, Todoist (version 11.2.4) does not handle accented characters, sometimes making searches painful.
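Both normalizations are easy to implement on the JVM with java.text.Normalizer: decompose accented characters (NFD) then strip the combining diacritical marks. This is a common recipe, not the article’s or Todoist’s code:

```kotlin
import java.text.Normalizer

fun CharSequence.normalized(): String =
    Normalizer.normalize(this, Normalizer.Form.NFD)
        .replace(Regex("\\p{InCombiningDiacriticalMarks}+"), "")
        .lowercase()

// With this, "Céci".normalized() == "ceci", so the query "ceci" can match "Cécile".
```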
Obviously, notions detailed above mainly focus on text-based search results. Some applications offer search on photo or video which makes the highlighting more difficult. In all cases, the quicker the user guesses the relationship between a search term and the results, the better.
Prior to deep diving into the code, let’s first determine what we need to achieve in order to implement search term highlighting. The code can be split into three distinct parts: styling a portion of text, finding the portion of text to style, and applying the highlight to items in a list.
The screencast below shows the app we want to achieve. It will just display a list of cheeses and provide a text input in order to filter them based on a query.
The first thing we need to do is find a way to display a styled version of a text. TextView obviously offers attributes like android:textColor or android:textStyle but these apply to the entire character sequence rather than a sub-portion of it. Fortunately, Android has offered, since day one, an interface representing a character sequence whose portions can be “tagged” with styles: Spanned. The android.text.style package contains a set of classes representing some common text styles: weight, color, size, etc.
I suppose most mobile developers are already familiar with Spanneds1. In particular, this is the feature that is used internally to render basic HTML content in a TextView: the HTML content is parsed, generating a Spanned object that is passed to the TextView. If you want to discover more about Spanned, CharacterStyle, etc., I encourage you to read the official documentation as well as Flavien Laurent’s explanatory article. Although it was written a while ago, the API hasn’t changed much since then, so everything still applies today.
The code below uses an indexOfQuery method (described later) in order to tag the first portion of text matching wordPrefix. Note that the method returns a CharSequence because this is the super-type of both String and Spanned. Put simply, the method returns a freshly created and tagged SpannableString if wordPrefix is found. If not, it returns the input text as is.
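The original listing was lost to formatting; a Kotlin sketch of the method just described could be:

```kotlin
fun apply(text: CharSequence, wordPrefix: CharSequence): CharSequence {
    val index = indexOfQuery(text, wordPrefix)
    return if (index >= 0) {
        // Tag only the matching portion with a bold style.
        SpannableString(text).apply {
            setSpan(
                StyleSpan(Typeface.BOLD),
                index,
                index + wordPrefix.length,
                Spanned.SPAN_EXCLUSIVE_EXCLUSIVE
            )
        }
    } else {
        text
    }
}
```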
In theory, determining the parts to highlight should be done using the exact same technique used to perform the filtering. Indeed, doing so ensures consistent results and enforces a single computation point. Unfortunately, in practice, this is generally difficult or impossible to do.
Let’s take an example to better understand the gap between filtering and highlighting. Imagine an application performing the filtering by querying a remote server. This app would GET /items?query=&lt;query&gt; and most APIs would respond with a (potentially empty) list of items. In this example there is a clean separation between the entity doing the filtering (the server) and the entity doing the highlighting (the mobile app). Another example would be an application querying a local database with a simple LIKE clause, getting a Cursor and displaying those results on screen. Even though the filtering and highlighting happen in the same entity, they both need to be considered independently. Indeed, the filtering is generally managed by the app’s backend while the highlighting is handled by the UI part.
The perfect solution would be to receive items with tagging information associated to it. For instance, in case of a query “franc”, we could imagine a response as follows:
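The original JSON example was stripped; a hypothetical payload shape (field names are illustrative) could be:

```json
{
  "items": [
    {
      "name": "San Francisco",
      "highlight": { "start": 4, "length": 5 }
    }
  ]
}
```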
In practice, this makes the API cumbersome1 and, more importantly, it only pushes the problem of determining the portion to highlight back to the server. For instance, when querying the database with a LIKE clause, you don’t get the index of the first occurrence by default. Secondly, it forces the backend to compute the index of the search term for all results, which may not be necessary in long lists whose items are not all displayed (do not forget items are lazily displayed in most mobile list-based scrolling containers).
In a nutshell, a great highlighting mechanism should match the behavior of the filtering and remain completely distinct from the filtering. This is not so simple. Fortunately, highlighting is not as important as filtering. As explained previously, it enhances user experience but it doesn’t need to work 100% of the time or be completely exact. If it works 99% of the time and does nothing for the remaining 1% it is still a huge enhancement to your search results.
The second part of the implementation consists of determining the portion of text that needs to be highlighted. In other words, we need to find the index of the character at which the highlight needs to start as well as its length. A simple solution is to use String.indexOf(String). In practice, you may want to tweak the behavior a little bit: use a word-based strategy, accept CharSequence, etc. Implementing your own method is generally the best option:
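The original 37-line listing did not survive; here is a compact re-implementation of the word-based idea (case-insensitive matching on the beginning of words; the accent normalization described earlier could be plugged in as well):

```kotlin
fun indexOfQuery(text: CharSequence?, query: CharSequence?): Int {
    if (text.isNullOrEmpty() || query.isNullOrEmpty()) return -1
    val t = text.toString().lowercase()
    val q = query.toString().lowercase()
    var index = 0
    while (index + q.length <= t.length) {
        // Only match at the beginning of a word.
        if (t.startsWith(q, index)) return index
        // Jump to the start of the next word.
        val nextSpace = t.indexOf(' ', index)
        if (nextSpace == -1) return -1
        index = nextSpace + 1
    }
    return -1
}
```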
Applying highlights to TextViews

Because highlighting text is a feature that can be reused, a good option is obviously to extract this behavior into a dedicated class. In order to do this, we can create a QueryHighlighter class that contains the methods and fields given above. QueryHighlighter#apply returns the CharSequence to set on the TextView. To make QueryHighlighter usage even simpler with TextView, we can add a utility method setText:
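The stripped snippet probably looked something like this (a sketch):

```kotlin
// Convenience method on QueryHighlighter: highlight and set in one call.
fun setText(view: TextView, text: CharSequence, query: CharSequence) {
    view.text = apply(text, query)
}
```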
Using QueryHighlighter in an Adapter, for instance, is now dead simple:
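The adapter listing is missing; a minimal RecyclerView.Adapter sketch using QueryHighlighter might look like this (class and member names beyond QueryHighlighter#setText are illustrative):

```kotlin
class CheeseAdapter(
    private val queryHighlighter: QueryHighlighter
) : RecyclerView.Adapter<CheeseAdapter.ViewHolder>() {

    private var cheeses: List<String> = emptyList()
    private var query: CharSequence = ""

    class ViewHolder(val textView: TextView) : RecyclerView.ViewHolder(textView)

    fun update(cheeses: List<String>, query: CharSequence) {
        this.cheeses = cheeses
        this.query = query
        notifyDataSetChanged()
    }

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int) =
        ViewHolder(LayoutInflater.from(parent.context)
            .inflate(android.R.layout.simple_list_item_1, parent, false) as TextView)

    override fun onBindViewHolder(holder: ViewHolder, position: Int) {
        // Highlight the first occurrence of the query in the item label.
        queryHighlighter.setText(holder.textView, cheeses[position], query)
    }

    override fun getItemCount() = cheeses.size
}
```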
I’ve published the code of this sample application on my GitHub. You can browse it to get a better overview. Note that there are some differences between the code shown here and the code available on GitHub. Here are some extra notes:
- QueryHighlighter has several settable fields. Changing them afterwards (in a thread-safe manner, of course) is not a problem either. As a result, I favored exposing setters over exposing several constructors. Because a QueryHighlighter is generally instantiated once and never modified, I also implemented a fluent API on the setters. All setters return the same instance so that calls can be chained:
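The chained-call example was stripped; usage could look like this (setter and constant names are illustrative):

```kotlin
val queryHighlighter = QueryHighlighter()
    .setQueryNormalizer(QueryNormalizer.FOR_SEARCH)
    .setHighlightStyle(StyleSpan(Typeface.BOLD))
```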
- QueryHighlighter only highlights the first occurrence of the query. This is done on purpose because I consider highlighting all matching terms would be information overload, and displaying only the first occurrence is sufficient. If you want to highlight all occurrences, you obviously can. Please note that doing so would require you to create a new interface acting as a highlight style provider instead of reusing a single style. Indeed, we can’t reuse the unique style set on the QueryHighlighter because Android doesn’t support attaching the same Span more than once to the same Spanned.

Highlighting terms in search results on mobile requires only a few lines of code. Not implementing this feature would be a shame when you know how much it enhances user experience. Do not miss a feature with such a great work-to-value ratio.
An Android app is usually only a single part of a larger product. Indeed, a product is usually made of several independent entities such as a website, one or several mobile apps, etc. In this talk, we will learn how to increase app engagement and tear down the walls between your website and your apps. You will also discover how you can give your users the most integrated mobile experience possible with features such as Related Apps Banner, Smart Lock for Passwords and more… In a nutshell, this talk is all about driving users to your mobile app and making your product successful.
I already anticipate your question: “Was the talk recorded?” Yes, it was. Unfortunately, I presented it in French so I don’t think it will help most of you. I will surely give the talk once again and I will make sure I do it in English so that I can share the video globally.
The launch screen is a user’s first experience of your application and, hence, should be designed with great care. In this talk, we will deep dive into the concept of launch screens, discover how to measure, debug & optimise them efficiently, and learn how to implement them correctly. In other words, this talk is all about spending ±45 minutes discussing screens that are displayed for less than 5 seconds.
As far as I know, the talk has been recorded but here are some resources you may find useful while waiting for the video:
I would like to complete this post by thanking all of the organizers, speakers & attendees from mDevCamp. I had a really great moment there.
This post can be considered as a quick app clinic on the Android Clock app. App clinics are generally dedicated to third-party apps but, after all, there is absolutely no reason they can’t be performed on Google apps… I also think showing and explaining the few details that could have been better is a great way to learn and improve. Demonstrating a UI/UX guideline using both good and bad examples is how most human interface guidelines are built. Material Design guidelines use this method a lot. Also keep in mind guidelines and reviews are not definitive rules and are, by definition, subject to discussion.
Prior to starting with the list of notes I made about the Clock app, I think it is important to point out the reviewed version of the app. The package manager reports 3.0.3. As far as I can tell, this is the latest one currently available.
From a global point of view, the Clock application is clearly a well-polished application. It doesn’t crash, runs smooth animations, features a beautiful material design, has some nice unique details (animated icons on tab change, hour-of-day based background color), etc. Most of the notes listed below can actually be considered little details. But there are no little details. Details make your product. They are part of your design and literally bring your app to life. Understanding and fixing all of these tiny details helps both in making your app more pleasant to use and in making it stand out from the other apps on the Google Play Store.
There are no little details. Details make your product. They are part of your design and literally bring your app to life.
Interactions with mobile devices are mainly based on touch-screens. Because the touch-screen is the only thing between users’ fingers and your application, you need to make sure actions are properly intercepted. Smaller touch targets are harder for users to hit than larger ones. Always make sure your touchable areas are large enough to be easily tapped.
It is generally considered that a touch target should be at least 48x48dp. Although these requirements make sense in most situations, it doesn’t mean you can’t make targets larger. Using large touch targets is even encouraged whenever possible. In the “Timer” section of the Clock app, both the “Delete timer” and “Add new timer” buttons clearly lack touchable width, leading to potential no-op taps. Enlarging the touchable areas makes the buttons more accessible while preserving the current layout and design of the screen.
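On Android, a common way to enlarge a touchable area without changing the visual layout is TouchDelegate (a generic sketch; extraPx is whatever extra padding is needed):

```kotlin
val parent = button.parent as View
// Wait until the parent has been laid out so the hit rect is valid.
parent.post {
    val rect = Rect()
    button.getHitRect(rect)
    rect.inset(-extraPx, -extraPx)  // grow the hit rect on every side
    parent.touchDelegate = TouchDelegate(rect, button)
}
```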
Input feedback is an extremely important part of UX design. It basically consists of informing users that their input/action is being tracked and processed by the application. Just like there is a reaction to any force in the real world, there must be feedback to any action in UIs. When a button is pressed, its appearance changes to reflect the pressed state. When a list is pulled down to be refreshed, a visual indicator appears to notify that loading is in progress. When a tap occurs on the top edge of the screen, the notification tray slides down quickly to indicate its presence.
Just like there is a reaction to any force in the real world, there must be feedback to any action in UIs.
Quite logically, feedback only makes sense when a counterpart action is about to be performed by the application. Reacting to a user’s input but doing nothing in response increases frustration and reduces UI comprehension. In other words, a UI should be completely transparent to the user’s input if the area is not interactive.
Most mobile applications are made of several screens. Screens can be reached thanks to navigation patterns, leading to a complex screen hierarchy. This is especially true when the app displays a lot of content. In order not to lose the user when switching from one screen to another, it is important to show the purpose of each screen. This is a key point in UI/UX design that is mainly solved by adding a title to each of your screens. In some cases, using screen titles may also help users better understand the overall navigation pattern of your app.
Preserving the context of each application is both essential and difficult to do on mobile apps. Indeed, mobile screens are generally small and don’t leave a lot of room to add titles. By default, the Toolbar (R.I.P. ActionBar) is the perfect place to put the title. If you want to preserve as much space as possible on screen, do not hesitate to use some smart scrolling techniques to hide the Toolbar when the content is being scrolled (e.g. Google Play Store).
As described earlier, feedback is obvious when performing a direct interaction with the UI. However, it is clearly not limited to that. Another great feedback you can implement is “state feedback”. Although the expression seems quite abstract, it simply consists of informing the user about the current state the app/screen is in. The most common states are: “content”, “loading”, “error” and “empty”.
There are plenty of ways to visually display state feedback. Empty states are generally displayed where the content would normally have been displayed. Error states may also be displayed in-layout or using widgets such as Toasts or snackbars. Finally, loading states are generally displayed outside of the content area as they may occur at the same time as another state. Indeed, the loading state is not exclusive of the content state: an app may be both displaying content (from a local database) and loading data from the network.
The “app as a platform” vision is often discussed on social networks. I’m personally convinced an application should never create its own visual language but rather extend the language of the platform the app is running on1. The approach mainly consists of using the platform visual language as a starting point and building your brand and style on top of it.
Embracing the platform visual language & navigation patterns […] reduces the cognitive load and enhances the comprehensiveness of the UI.
Embracing the platform visual language & navigation patterns has several advantages. First, it obviously reduces the amount of work third-party apps require to get nice user interfaces. Secondly, it reduces the cognitive load and enhances the comprehensiveness of the UI. In other words, the user has to make little or no effort to understand your application because it looks and behaves just like the other apps on the device.
We have explained previously how important feedback is. Another important rule to follow when displaying feedback is to make sure it is displayed in a logical way. Basically, you have to make sure feedback is given at the correct point in time (i.e. synchronously with the user gesture) and space (i.e. at the location of the interaction). Doing so reinforces the impression of responsiveness and accuracy of your app.
The Android Clock application is rather good at displaying responsive feedback. However, there are some cases where the visual feedback appears at the wrong location. Sometimes it even does “over-feedback”.
Consistency is an important guideline when it comes to designing an application. It obviously makes your code easier to maintain as most snippets are based on the same logic/values/processes/etc. From a UI point of view, consistency is a great way to build a coherent and immersive UI. In fact, consistency reassures users and helps them dive deeper into your application’s brand and style.
Consistency has a bunch of facets: colors, font sizes, font styles, button appearances, etc. There are several techniques to ensure your UI takes the form of a coherent and integrated app. I personally always create a small set of base values (colors, spacings, grid sizes, font sizes) and a set of styles (text appearances & widget styles) based on these values. Most designers consider the technique a painful constraint. I do too… but it is clearly a welcome, positive constraint in the long term. After all, mobile is all about creating amazing experiences out of a set of constraints.
Doing an app clinic is an interesting exercise both for the reviewer and the developer. From a reviewer point of view, it is a great way to get to know an application and quickly learn about UI & UX patterns. Because there are no definitive answers to what is wrong or right in UI/UX design, doing app clinics regularly helps better weigh the pros and cons of all solutions. From a developer point of view, an app clinic is a great way to take a step back from the mammoth amount of work done on an app. Thanks to external feedback, you can better discover what you missed in the code, UI, UX, etc. Obviously, as the maintainer of the app, you will always have the final say on whether or not to tweak and modify your app to reflect the reviewers’ notes.
Note: Prior to diving deep into this article, let’s start with a quick disclaimer. I think it is mandatory to mention that the strategies, processes and other methodologies described below are far from ideal. Just like there is no perfect answer in UX or development, there is no perfect way to deal with large projects. In other words, this article has no intention of forcing you to switch to new methodologies and should be read as simple feedback on how the team manages projects like the Capitaine Train for Android apps.
Methodologies have no meaning when taken outside of their context of usage. Thus, I believe an introduction to the Android team at Capitaine Train is mandatory. The team was born in March 2013 when I joined Capitaine Train to lead the Android applications. It took me quite some time to get used to the extremely complex European train ecosystem. We rapidly decided to grow the team and I was joined by Mathieu Calba in November 2013 and more recently by Flavien Laurent in October 2014. If you are (really) good at counting you will have noticed the Android team at Capitaine Train is a group of 3! Being such a relatively small team is an important point to keep in mind to better understand our processes.
I’m really proud to be part of the Capitaine Train team but I’m even prouder of the Capitaine Train for Android team I built. Mathieu and Flavien amaze me every single day and it’s always a pleasure to work with them. All of the members are talented developers who are extremely focused on the product, do not hesitate to fight for what they believe in and have a clear understanding of what good UI/UX is (which is not that common among developers…). In other words, each member of the team is extremely independent when it comes to designing a new feature from the ground up. Most of their time is spent working on Android-related features. It means we are all mostly working with the Android framework but may also spend some time on other platforms (Ruby on Rails for instance) to implement features that specifically relate to Android (Google Now is a great example).
From a product point of view, Android at Capitaine Train can be summed up into two different applications. The most important one, the handheld, is adapted to both tablets and phones. The smaller and most recent one targets wearable devices. The minimum supported version of Android is Android 4.0 (API 14) and the app bundles 4 different languages: English, French, German and Italian.
Capitaine Train targets worldwide customers. However, our offer is clearly focused on European trains. As a consequence, most of our audience is based in Europe and lives in 2 to 3 different timezones. Having to deal with such a relatively small span of timezones is quite helpful, especially when announcing new stuff, as communication is synchronous (in a way, a given moment in the day is approximately the same moment of the day for all of our users).
Regarding download numbers, we usually don’t communicate about them. However, at the time of writing, the Google Play Store publicly indicates the application has been downloaded between 50,000 and 100,000 times.
Talking about how we deal with code at Capitaine Train could be the subject of an entire article. Instead of deep diving into our design, development and/or testing techniques I will only give a surface overview of our processes.
The entire Android code base is managed with Git. I don’t think it is necessary to present Git in this article as it is well known to be one of the best source control management systems out there. We use it extensively at Capitaine Train as all of our review techniques are based on this amazing development tool.
We are also using the git flow model on top of Git. This model ensures a coherent and understandable commit tree. Put simply, development is done on the dev branch. When a teammate needs to work on a new feature, a new “feature branch” is created starting from dev. It’s up to the teammate in charge of the feature to create the branch. The “feature branch” is rebased on dev until it is finally merged. Releasing the application is synonymous with merging dev into master using the --no-ff option. This option ensures the merge is always represented by a commit. Finally, the commit is tagged with the application version code. In other terms, master should only contain commits that refer to a public version of the application.
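Assuming a fresh, hypothetical repository, the branch-and-release choreography described above boils down to something like this (branch, author and feature names are illustrative):

```shell
set -e
# Hypothetical repository demonstrating the flow
git init -q demo && cd demo
git config user.email "bob@example.com" && git config user.name "Bob"
git commit -q --allow-empty -m "Initial commit"
git branch -M master

# Development happens on dev; features get their own branch off dev
git checkout -q -b dev
git checkout -q -b feature/awesome
git commit -q --allow-empty -m "Implement awesome feature"

# The feature branch is merged back into dev once reviewed
git checkout -q dev
git merge -q feature/awesome

# Releasing: merge dev into master with --no-ff so a merge commit always
# materialises the release, then tag it with the application version code
git checkout -q master
git merge -q --no-ff -m "Release 702" dev
git tag 702
```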
Prior being merged […] the code […] is always read and validated by at least two members of the team.
Because the Android team is quite small, all of the features are always managed by a single member of the crew (let’s say Bob). Bob is entirely responsible for the development of the feature: from design to release. Once the feature is considered mature and polished, it is submitted as a “Merge Request” to another member of the Android team (Alice). The review process is done thanks to a tool called GitLab, which can be seen as a GitHub clone. Alice is responsible for reviewing the code. Feedback can be extremely heterogeneous. For instance, it is common to have reviews ranging from “You should use this method instead” to “I would have used a different text color and text size” or “I won’t merge this if it makes the APK heavier than 5MB!”. Reviews with a set of alternatives are usually highly appreciated compared to a simple “No!”.
One of the particularities of the Capitaine Train for Android team is that each member is responsible for QA. Indeed, there is no QA team in the company1 and both Bob and Alice have to make sure there are no regressions and that the code works perfectly. It basically means the code review does not only consist of reading code. Alice also has to test the implemented feature in all possible conditions.
The review process ends once Bob and Alice are both okay with the feature in general (code, design, introduced changes, API, etc.). As a consequence, prior to being merged into dev, the code from the Capitaine Train for Android applications is always read and validated by at least two members of the team. This is actually the case for all projects at Capitaine Train. Larger companies also rely on a similar process but require at least two “+1”s from reviewers. We clearly can’t afford to do that in such a small team.
The entire project is built on top of Gradle. The main advantage of the new Android Gradle-based build system lies in the fact that it fits both development and packaging purposes. When developing a feature, we all use Android Studio (which also uses Gradle under the hood) and when it comes to packaging, the command line is used. Build packaging is done thanks to our continuous integration environment based on Jenkins. Jenkins currently manages two different Android projects:
- android-dev, which builds what’s currently on the dev branch
- android-master, which corresponds to the master branch

A new build of these projects gets triggered when a new commit is pushed to the main remote repository (origin). The main difference is that each project points to a different branch with different compilation options. Indeed, contrary to android-dev, android-master is optimised and obfuscated thanks to Proguard.
Because we had to create a lot of screenshots (3 form factors, 4 languages, 6 screenshots: 3x4x6 = 72 screenshots) we recently added a new home-made tool to the packaging process: automatic screenshots (thanks Flavien). The new utility takes care of taking screenshots in all of the necessary configurations and cleans them up by unifying the status bar. At the time of writing, this tool is not integrated right into our Jenkins build stream but this is definitely something that will be done in the future.
When it comes to publishing on the Google Play Store, everything is done manually. Obviously, the new Google Play Publishing APIs could be used but we prefer to keep control over the releases for now. Because of our “lengthy” release life cycles, I’m convinced this is not an issue with our current way of dealing with releases.
Versioning an Android application is a mandatory process. Indeed, the Google Play Store uses application version code in order to detect new versions of the application. The only requirement from the Google Play Store is to make sure the application version code is incremented monotonically.
Instead of […] trying to determine whether a version is major, minor or patch, each new release containing at least one new user-visible feature is considered major.
The Capitaine Train for Android application doesn’t use the traditional major.minor.patch versioning (aka semantic versioning). Indeed, because we wanted as little friction as possible, we came up with a simpler versioning model based on two version numbers: major and minor. The application version code is computed from these numbers thanks to the following formula:
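The formula can be reconstructed from the version codes quoted elsewhere in this article (ANUBIS = 101, BRATAC = 201, FRAISER_MR1 = 602, GEORGES_MR1 = 702). A minimal sketch in Java, where the class and method names are mine:

```java
public class VersionCodes {

    // versionCode = major * 100 + minor, as suggested by the examples
    // in this article: ANUBIS (101), BRATAC (201), FRAISER_MR1 (602).
    static int computeVersionCode(int major, int minor) {
        return major * 100 + minor;
    }

    public static void main(String[] args) {
        System.out.println(computeVersionCode(6, 2)); // FRAISER_MR1
    }
}
```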
The main idea behind this versioning strategy is to have little or no friction in the release process. Instead of scratching our heads trying to determine whether a version is major, minor or patch, each new release containing at least one new user-visible feature is considered major. Bug fixes and patches are always considered minor versions. This works particularly well in conjunction with the release schedule described below.
From an external user point of view, only the major version is important. The version name of the application is always the major version, regardless of the minor version. The main reason behind this naming strategy is that minor versions are supposed to be completely transparent to the user as they do not contain any user-facing features. Using only the major number in the version name makes it simpler to remember and more recognisable. If you really want to know the exact version code of the application, you can open the “Settings” screen in the Capitaine Train for Android app. It shows the version name (also displayed in the system Settings app) + version code.
Development is fun but you can make it even more fun. All of our public builds are actually named internally (just like regular Android releases). As a huge fan of Stargate SG-1, I named each major release after characters of the show: ANUBIS (101), BRATAC (201), CARTER (301), etc. Minor releases are named using the _MR<x> suffix where <x> is the minor version number minus one: FRAISER_MR1 (602), GEORGES_MR1 (702).
The Android Wear app versions also follow the same versioning pattern and are named according to characters from Pixar movies: ANDY, BUZZ, COLETTE, etc.2 Even though the wear app is tied to the handheld app from a release point of view (the wear APK is packaged inside the handheld APK and published simultaneously on the Play Store), we decided to use a distinct versioning.
Having fun internally is obviously not the only purpose of maintaining such a list of version codes. It helps us easily change the execution behaviour depending on the current version of the application. Let’s say you had a serious issue when storing some data on disk in a given version of the application: you probably want to check whether the current app version is greater than the version stored alongside the data on disk to know if it’s time to migrate the data to the new format.
Capitaine Train users may have noticed the team is extremely focused on the product. We don’t release new features until they are definitely ready for production. Our high quality standards prevent us from publicly releasing unpolished features. This is one of the reasons there are no strict deadlines on the project.
Instead of sticking to hard deadlines, the […] Android app follows the release train software release schedule.
Instead of sticking to hard deadlines, the Capitaine Train for Android app follows the “release train” software release schedule. Release trains are time-based release schedules: a release does not wait for features or bug fixes but happens (as strictly as possible) on time. Put simply, each new version of the app can be considered as a train that leaves and arrives on time. If a feature is ready by the planned time of departure, it jumps into the release train. If it is not, the feature has to wait for the next release train. Release trains enforce discipline in introducing features, give predictability, and allow more regular releases. And yes! Similarities between the naming of the methodology and the company name are just a coincidence :-).
The release train only applies to major releases of the Capitaine Train for Android app. Minor releases are done on a completely different timeline. Because minor versions are usually hot-fixes for blocking or crashing issues, they are released as soon as possible regardless of the release train schedule. This usually only happens during the beta testing phase of the app, as discussed later in this article.
New major versions of the Capitaine Train Android application are released following a recurring pattern that repeats itself every 6 weeks. Why 6 in particular? To be honest, there is no complex maths behind this figure. It is purely empirical and comes from my experience as an Android app designer and developer. Here is the rationale:
To be honest, I don’t think there is a perfect release life cycle length. The 6-week pattern works particularly well at Capitaine Train because it is halfway between our users’ expectations and our own.
The diagram above describes our release life cycle. As explained earlier, each version v(n) is prepared over 7 weeks. Because successive life cycles overlap, a new version is released every 6 weeks. The planning during a release life cycle is quite flexible and up to the engineer. However, it usually comes down to:
Another interesting point is that the Capitaine Train for Android apps are always released on the exact same day of the week: Tuesday. More precisely: Tuesday morning. Tuesday has several advantages over the other days of the week. Most of these reasons are common to most software projects but some others are rather personal:
I previously mentioned a beta phase in the Capitaine Train for Android release life cycle. At Capitaine Train, beta testing is done via the beta channel of the Google Play Store. The beta is private but people can ask to join. We are pretty picky when it comes to adding new beta testers to the pool. Indeed, a beta tester has to be both extremely active (users travelling by train at least once a week) and trustworthy (we don’t want him/her to communicate about features to be released soon). Beta testers have an entire week (Week 6) to test and report important bugs.
Users report crashes through the Google Play […] only once every 25 crashes
Because crash reporting on the Google Play Store requires users to approve sending the crash info, we added an additional crash reporter: Crashlytics. This is extremely important as users tend not to report crashes. For example, for the current production version, there is a 25x difference between Crashlytics and Google Play reports. In other words, users report crashes through the Google Play feedback dialog only once every 25 crashes. Crashlytics helps us get notified about important fatal crashes. We can also prioritise crashes based on the number of occurrences and the type of devices.
The Google Play Store provides a really great feature called “staged rollouts”. Staged rollouts consist of making new versions of your application available to only a subset of your entire user base. For instance, it allows you to release a new build to only 10% of your audience. This is particularly great for testing new features or reducing the load on your servers. We experimented with this feature during the first versions of Capitaine Train for Android. Because we are quite confident in our release life cycle, we now use it only sparingly (in case the code contains some ground-breaking changes). As a consequence, 95% of our releases are done in a single full release (100%).
I hope I have explained in detail how the Android team at Capitaine Train works. I tried to be as precise as possible but may have forgotten some important points. Do not hesitate to leave a comment below and I will try to answer questions regarding missing points. Once again, do not forget this article describes methodologies that are dedicated to and apply nicely to the Android team at Capitaine Train. Always keep in mind all contexts are different. Think about the context in which you are working prior to changing and adapting the way you work on your projects.
Thanks to @Mathieu_Calba for proofreading this post.
JONAS and DORY…

Using the grid principle generally requires developers to add some extra padding/margin/spacing (choose the name that best fits your style…) between elements. Indeed, adding spacing between elements helps maintain a clear separation between blocks while preserving a high level of readability in your UI. All Android developers are familiar with these concepts and most cases are actually solved by using framework features such as padding and/or margin on Views. In order to clearly isolate the logic from the UI, this is generally done in the XML definition of the interface. While this works particularly well when the UI is quite static, it may be harder to manage dynamic UIs where elements get hidden/shown on demand. This article gives you some tips and tricks to better manage dynamic grid-based UIs.
Let’s create a simple layout as an example. We create a horizontal bar of three Buttons that appears below a static View (the application logo for instance). The rendering of the following layout is given in the image below:
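A minimal sketch of such a layout could look like this (the IDs are the ones referenced in this article; strings, drawables and dimensions are placeholder assumptions):

```xml
<!-- Sketch: a static logo above a horizontal bar of three buttons.
     IDs follow the article; string/drawable resources are assumed. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical">

    <ImageView
        android:id="@+id/logo"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:src="@drawable/app_logo" />

    <LinearLayout
        android:id="@+id/buttons_container"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="horizontal">

        <Button
            android:id="@+id/btn_first"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:text="@string/btn_first" />

        <Button
            android:id="@+id/btn_second"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:text="@string/btn_second" />

        <Button
            android:id="@+id/btn_third"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:text="@string/btn_third" />
    </LinearLayout>
</LinearLayout>
```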
The user interface shown in the previous screenshot clearly relies on a notion of grid. However, it seriously lacks spacing between elements to let the user clearly distinguish the independent entities in the UI. Let’s fix that by simply adding android:layout_marginTop="@dimen/spacing_medium" to the LinearLayout id-ed @id/buttons_container and android:layout_marginRight="@dimen/spacing_medium" to the Buttons @id/btn_first and @id/btn_second.
The UI above looks particularly great: it looks nice, it is readable, etc. Unfortunately, things go bad when dynamically hiding some Views in the layout. Indeed, let’s imagine the feature normally activated by a click on @id/btn_third requires some capabilities that are not available on the device (for instance Google Play Services). The best way not to clutter the UI is to change the visibility of the third Button to View.GONE.
As expected, @id/btn_third is not displayed anymore, but the right edge of @id/btn_second is no longer aligned with the right edge of the application icon. The main reason for this problem is that the margin technique works well only as long as it sticks to the assumption made at the beginning: each View with a right/top margin has a neighbour View on its right/top. Hiding some Views in the bar breaks this assumption.
An obvious trick to deal with this issue would be to manually change the margins of elements in the Java code. It is worth noting this is a really bad solution. Another way would be to use a layout that automatically deals with element spacing. GridLayout is one of them, for instance. Unfortunately, this layout is kind of a pain in the ass to use and doesn’t let you specify a specific margin between elements (only the default margin is available).
Actually, the LinearLayout already manages a notion of spacing between elements. This feature is quite unknown as it is pretty well hidden in the framework, but it works like magic. The trick consists of using a Drawable with an intrinsic width/height as the LinearLayout’s divider:
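Such a spacer Drawable can be a fully transparent shape whose intrinsic size carries the spacing. A sketch (the file name res/drawable/spacer_medium.xml is an assumption):

```xml
<!-- res/drawable/spacer_medium.xml (name assumed): an invisible shape
     whose intrinsic size provides the spacing between elements. -->
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="rectangle">

    <solid android:color="@android:color/transparent" />

    <size
        android:width="@dimen/spacing_medium"
        android:height="@dimen/spacing_medium" />
</shape>
```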
You can now use this newly created Drawable as a spacer between elements by setting it as the divider of the LinearLayout:
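With android:showDividers="middle", the LinearLayout only draws the divider between visible children, which is exactly what solves the View.GONE misalignment. A sketch (the drawable name spacer_medium is an assumption):

```xml
<!-- The buttons container, now spaced through the divider instead of
     per-Button margins; both attributes require API 11+. -->
<LinearLayout
    android:id="@+id/buttons_container"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="horizontal"
    android:divider="@drawable/spacer_medium"
    android:showDividers="middle">

    <!-- btn_first, btn_second and btn_third declared as before,
         without any android:layout_marginRight attribute -->

</LinearLayout>
```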
The Android framework contains a bunch of features that can be used and tweaked to fulfil a slightly different purpose than the one initially intended. The notion of Drawable is usually part of these tricks. Make sure you fully understand the concept of Drawable as it may sometimes help you simplify your code.
It was really a pleasure to be part of this great event. As a quick note, the end of 2014 will be quite busy as I will be speaking about various Android-related subjects (mostly Android Wear) at Droidcon UK, DevFest Nantes and Devoxx Antwerp. If you are attending these conferences, I will be glad to chat with you.
Publishing light-weight applications on the Play Store is a good practice every developer should focus on when designing an application. Why? First, because it is synonymous with a simple, maintainable and future-proof code base. Secondly, because developers would generally prefer staying below the Play Store’s current 50MB APK limit rather than dealing with download extension files. Finally, because we live in a world of constraints: limited bandwidth, limited disk space, etc. The smaller the APK, the faster the download, the faster the installation, the lesser the frustration and, most importantly, the better the ratings.
In many (not to say all) cases, size growth is necessary in order to fulfill customer requirements and expectations. However, I am convinced that the weight of an APK generally grows at a faster pace than users’ expectations. As a matter of fact, I believe most apps on the Play Store weigh twice or more what they could and should. In this article, I would like to discuss some techniques/rules you can use/follow to reduce the file size of your APKs, making both your co-workers and users happy.
Prior to looking at some cool ways to reduce the size of our apps, it is mandatory to first understand the actual APK file format. Put simply, an APK is an archive file containing several files in a compressed fashion. As a developer, you can easily look at the content of an APK just by unzipping it with the unzip command. Here is what you usually get when executing unzip <your_apk_name>.apk1:
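The exact contents vary from app to app, but a typical listing looks something like this (paths are illustrative):

```
AndroidManifest.xml
classes.dex
resources.arsc
assets/...
lib/armeabi-v7a/...
res/drawable-hdpi/...
res/layout/...
META-INF/MANIFEST.MF
META-INF/CERT.SF
META-INF/CERT.RSA
```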
Most of the directories and files shown above should look familiar to developers. They mostly reflect the project structure observed during the design & development process: /assets, /lib, /res, AndroidManifest.xml. Some others are quite exotic at first sight. In practice, classes.dex contains the dex-compiled version of your Java code while resources.arsc includes precompiled resources, e.g. binary XML (values, XML drawables, etc.).
Because an APK is a simple archive file, it has two different sizes: the compressed file size and the uncompressed one. While both sizes are important, I will mainly focus on the compressed size in this article. In fact, a good rule of thumb is to consider the uncompressed size to be roughly proportional to the compressed one: the smaller the APK, the smaller the uncompressed version.
Reducing the file size of an APK can be done with several techniques. Because each app is different, there is no absolute rule to put an APK on diet. Nevertheless, an APK consists of 3 significant components we can easily act on:
The tips and tricks below all consist of minimizing the amount of space used per component, reducing the overall APK size in the process.
It probably seems obvious but having good coding hygiene is the first step to reducing the size of your APKs. Know your code like the back of your hand. Get rid of all unused dependency libraries. Make it better day after day. Clean it continuously. Focusing on keeping a clean and up-to-date code base is generally a great way to produce small APKs that only contain what is strictly essential to the app.

Maintaining an unpolluted code base is generally easier when starting a project from scratch. The older the project is, the harder it is. In fact, projects with a large historical background usually have to deal with dead and/or almost useless code snippets. Fortunately, some development tools are here to help you do the laundry…
Proguard is an extremely powerful tool that obfuscates, optimizes and shrinks your code at compile time. One of its main features for reducing APK size is tree-shaking. Proguard basically goes through all of your code paths to detect the snippets of code that are unused. All the unreachable (i.e. unnecessary) code is then stripped out from the final APK, potentially radically reducing its size. Proguard also renames your fields, classes and interfaces, making the code as light-weight as possible.
As you may have understood, Proguard is extremely helpful and efficient. But with great power come great consequences. A lot of developers consider Proguard an annoying development tool because, by default, it breaks apps heavily relying on reflection. It’s up to developers to configure Proguard and tell it which classes, fields, etc. can be processed or not.
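For instance, a Proguard configuration keeping reflection-accessed classes might contain rules like these (the package names are hypothetical):

```
# proguard-rules.pro
# Keep model classes instantiated through reflection (hypothetical package)
-keep class com.example.app.model.** { *; }

# Keep the names of fields that are (de)serialized via reflection
-keepclassmembers class com.example.app.api.** {
    <fields>;
}
```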
Proguard works on the Java side. Unfortunately, it doesn’t work on the resources side. As a consequence, if an image my_image in res/drawable is not used, Proguard only strips its reference in the R class but keeps the associated image in place.
Lint is a static code analyzer that helps you detect all unused resources with a simple call to ./gradlew lint. It generates an HTML report and gives you the exhaustive list of resources that look unused under the “UnusedResources: Unused resources” section. It is safe to remove these resources as long as you don’t access them through reflection in your code.
Lint analyzes resources (i.e. files under the /res directory) but skips assets (i.e. files under the /assets directory). Indeed, assets are accessed through their name rather than a Java or XML reference. As a consequence, Lint cannot determine whether or not an asset is used in the project. It is up to the developer to keep the /assets folder clean and free of unused files.
Android supports a very large set of devices at its core. In fact, Android has been designed to support devices regardless of their configuration: screen density, screen shape, screen size, etc. As of Android 4.4, the framework natively supports various densities: ldpi, mdpi, tvdpi, hdpi, xhdpi, xxhdpi and xxxhdpi. Android supporting all these densities doesn’t mean you have to export your assets in each one of them.
Don’t be afraid of not bundling some densities into your application if you know they will be used by a small number of people. I personally only support hdpi, xhdpi and xxhdpi2 in my apps. This is not an issue for devices with other densities because Android automatically computes missing resources by scaling an existing one.
The reasoning behind my hdpi/xhdpi/xxhdpi rule is simple. First, it covers more than 80% of my users. Secondly, xxxhdpi exists mostly to make Android future-proof, but the future is not now (even if it’s coming very quickly…). Finally, I actually don’t care about low-res densities such as mdpi or ldpi: no matter how hard I work on these densities, the result will look as poor as letting Android scale down the hdpi variant.
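To illustrate the scaling fallback with a bit of arithmetic (my own example, not framework code), Android derives a missing density bucket by scaling an existing resource by the ratio of the dpi values:

```java
// Density buckets map to dpi values: mdpi=160, hdpi=240, xhdpi=320, xxhdpi=480.
// If only an xxhdpi asset is bundled, an mdpi device scales it down by 160/480.
float scale = 160f / 480f;                 // target dpi / source dpi
int scaledSize = Math.round(144 * scale);  // a 144px xxhdpi icon becomes 48px
```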
On the same note, bundling a single variant of an image in drawable-nodpi can also save you space. You can afford to do that if scaling artifacts are acceptable to you, or if the image is displayed only rarely throughout the app.
Android development often relies on external libraries such as the Android Support Library, Google Play Services, the Facebook SDK, etc. All of these libraries come with resources that are not necessarily useful to your application. For instance, Google Play Services comes with translations for languages your own application doesn’t even support. It also bundles mdpi resources I don’t want to support in my application.
Starting with Android Gradle Plugin 0.7, you can pass information about the configurations your application deals with to the build system. This is done thanks to the resConfig and resConfigs flavor and default config options, which prevent aapt from packaging resources that don’t match the configurations the app actually handles.
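The original snippet did not survive formatting, so here is a minimal reconstruction of what such a configuration typically looks like (the locale and density values are illustrative, not the author’s exact ones):

```groovy
android {
    defaultConfig {
        // Only package the configurations the app actually handles
        resConfigs "en", "fr"                 // supported languages
        resConfigs "hdpi", "xhdpi", "xxhdpi"  // supported densities
    }
}
```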
aapt comes with a lossless image compression step: for instance, a true-color PNG that does not require more than 256 colors may be converted to an 8-bit PNG with a color palette. While this may reduce the size of your resources, it shouldn’t prevent you from also pre-processing your PNGs with a lossy optimizer. A quick Google search yields several tools such as pngquant, ImageAlpha or ImageOptim. Just pick the one that best fits your designer’s workflow and requirements and use it!
A special type of Android-only images can also be minimized: 9-patches. As far as I know, no tools have been specifically created for this. However, this can be done fairly easily just by asking your designer to reduce the stretchable and content areas to a minimum. In addition to optimizing the asset weight, it will also make the assets maintenance way easier in the long term.
Android is generally about Java, but there are some rare cases where applications need to rely on native code. Just like you should be opinionated about resources, you should be too when it comes to native code. Sticking to the armeabi and x86 architectures is usually enough in the current Android ecosystem. Here is an excellent article about native library weight reduction.
Reusing things is probably one of the first important optimizations you learn when starting to develop for mobile. In a ListView or a RecyclerView, reusing views helps you keep scrolling smooth. But reusing can also help you reduce the final size of your APK. For instance, Android provides several utilities to re-color an asset: either the new android:tint and android:tintMode attributes on Android L, or the good old ColorFilter on all versions.
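As a sketch (the view id and color resource are hypothetical), re-using a single asset with a tint in a layout might look like:

```xml
<!-- Re-colors the shared ic_arrow_expand asset at render time,
     avoiding a second, pre-colored copy of the bitmap. -->
<ImageView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/ic_arrow_expand"
    android:tint="@color/accent" />
```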
You can also avoid packaging resources that are only a rotated equivalent of another resource. Let’s say you have 2 images named ic_arrow_expand and ic_arrow_collapse:
You can easily get rid of ic_arrow_collapse by creating a RotateDrawable based on ic_arrow_expand. This technique also reduces the amount of time your designer needs to maintain and export the collapsed asset variant.
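The original XML was lost in formatting; a plausible sketch of such a drawable (res/drawable/ic_arrow_collapse.xml, attribute values assumed) could be:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Renders ic_arrow_expand rotated by 180°, replacing a dedicated asset -->
<rotate xmlns:android="http://schemas.android.com/apk/res/android"
    android:drawable="@drawable/ic_arrow_expand"
    android:fromDegrees="180"
    android:toDegrees="180"
    android:pivotX="50%"
    android:pivotY="50%" />
```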
In some cases, rendering graphics directly from Java code can be of great benefit. One of the best examples of mammoth weight savings is frame-by-frame animations. I’ve been struggling with Android Wear development recently and had a look at the Android wearable support library. Just like the regular Android support library, the wearable variant contains several utility classes for dealing with wearable devices.
Unfortunately, after building a very basic “Hello World” example, I noticed the resulting APK was more than 1.5MB. After a quick investigation into wearable-support.aar, I discovered the library bundles 2 frame-by-frame animations in 3 different densities: a “success” animation (31 frames) and an “open on phone” animation (54 frames).
The frame-by-frame success animation is built with a simple AnimationDrawable defined in an XML file.
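The listing did not survive formatting; here is an abridged sketch of what such an animation-list looks like (the two named frames follow the text below, the other frame indices are illustrative):

```xml
<animation-list xmlns:android="http://schemas.android.com/apk/res/android"
    android:oneshot="true">
    <item android:drawable="@drawable/generic_confirmation_00156" android:duration="33" />
    <item android:drawable="@drawable/generic_confirmation_00157" android:duration="33" />
    <!-- … one <item> per bundled frame … -->
    <item android:drawable="@drawable/generic_confirmation_00175" android:duration="333" />
    <item android:drawable="@drawable/generic_confirmation_00185" android:duration="33" />
    <!-- … -->
</animation-list>
```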
The good point (I’m being sarcastic, of course) is that each frame is displayed for 33ms, making the animation run at 30fps. Having a frame every 16ms would have made the library twice as large… It gets really funny when you keep digging into the code. The generic_confirmation_00175 frame is displayed for a duration of 333ms and is followed by generic_confirmation_00185. This is a nice optimization that saves 9 nearly identical frames (176 to 184 included) from being bundled into the application. Unfortunately, I was totally disappointed to see that wearable-support.aar actually contains all 9 of these completely unused and useless frames, in 3 densities.3
Doing this animation in code obviously requires development time. However, it may dramatically reduce the amount of assets in your APK while maintaining a smooth animation running at 60fps. At the time of writing, Android doesn’t provide an easy tool to render such animations, but I really hope Google is working on a new lightweight real-time rendering system to animate all of these tiny details material design is so fond of. An “Adobe After Effects to VectorDrawable” designer tool or equivalent would help a lot.
All of the techniques described above mainly target the app/library developers side. Could we go further if we had total control over the distribution chain? I guess we could but that would mainly involve some work server-side or more specifically Play Store-side. For instance, we could imagine a Play Store packaging system that bundles only the native libraries required for the target device.
On a similar note, we could imagine packaging only the resources matching the configuration of the target device. Unfortunately, that would completely break one of the most important capabilities of Android: configuration hot-swapping. Indeed, Android has always been designed to deal with live configuration changes (language, orientation, etc.). For instance, removing resources that do not match the target screen density would be a great benefit; unfortunately, Android apps must be able to deal with a screen density change on the fly. Even if we could imagine deprecating this capability, we would still have to deal with drawables defined for a density other than the target one, as well as those with more than a single density qualifier (orientation, smallest width, etc.).
Server-side APK packaging looks extremely powerful. But it is also very risky because the final APK delivered to the user would be different from the one sent to the Play Store. Delivering an APK with missing resources/assets would simply break apps.
Designing is all about getting the best out of a set of constraints. The weight of an APK file is clearly one of these constraints. Don’t be afraid of sacrificing a little on one aspect of your application to make another better in some way. For instance, do not hesitate to reduce the quality of the UI rendering if it reduces the size of the APK and makes the app smoother. 99% of your users won’t even notice the quality drop, while they will notice the app is lightweight and smooth. After all, your application is judged as a whole, not as a sum of separate aspects.
Thanks to Frank Harper for reading drafts of this.
1 The .aar library extension is a pretty similar archive, the only difference being that the files are stored in a regular non-compiled jar/xml form. Resources and Java code are actually compiled at the very moment the Android application using them is built.
2 There is just one optional exception to this rule: the launcher icon. The new Google experience launcher relies on the density “above” the current screen density to render the icon on the launcher. Thus, I always bundle an xxxhdpi version of this icon.
3 I personally consider this a huge flaw in the Android wearable support library and decided not to use it. I couldn’t afford adding a 1.5MB Android Wear app to my 3.5MB Android app (especially knowing it is sent even to devices that probably don’t have a connected Android Wear device). As a solution, I re-implemented the few interesting utilities on my own.
Working on full-frame screenshots is usually enough in the design and development phases of a mobile app. But when it comes to marketing, communication and promotion, device-frame screenshots should be favored over full-frame ones because they give life to your product. Indeed, device-frame screenshots have the advantage of bringing your application to the real world by associating it with the objects/devices it will be running on.
Because I recently had to give some Android Wear-related presentations, I wanted to have a nice and simple way to integrate my screenshots into actual device frames. I had a look on the Internet but was quite disappointed about the resources currently available2. As a consequence, I made my own device frame and would like to share it with you so that you can use it in your presentations or simply when promoting your Android Wear app.
I have also worked on reproducing the two color variants of the LGE G Watch you can find out there: Black Titan and White Gold. This can be particularly helpful when using several watches at the same time, or simply to choose the model that best fits your background (light over dark vs dark over light):
As I always do when releasing graphic assets, I made sure the PSD respects a certain hygiene: made only of vector-based elements, sensibly layered, named and grouped, etc. The PSD uses an LGE G Watch form factor (280x280 pixels, hdpi, etc.) and has been created with Photoshop CS6, but should work properly with all recent versions of CS. Also note the following resources are licensed under CC BY 3.0:
I really hope you’ll find this PSD useful to create stunning applications and amazing presentations.
To be honest, I agree with them. I have been claiming for a long time that Google should ditch Java for another language. The thing is, I think Google has been working on a replacement programming language for just as long as Apple did - in secret - on Swift (i.e. since 2010). So, what is this modern programming language? Some people think Go would be a great fit; I personally think Dart is more appropriate. I read and learned a lot about Dart during my vacation. From my point of view, Dart is better for Android than Go for many reasons: it is more mature, it is VM-based (just like Java), it better fits the Google ecosystem, and it has an extremely easy learning curve while remaining a simple language. It’s clearly time for Google to make Dart the future of Android and, more globally, the future of the company itself.
I love Java. I really do. But Java is getting old, old enough to retire. Even though Java 8 can be considered one of the biggest evolutions in programming language history, Java still carries many drawbacks, limitations and problems. Most of these issues have been there from day one and will continue to exist due to the backward-compatible nature of the language. On the other side, Dart has been created from the ground up with a simple idea in mind: fix the common and recurrent development problems. Dart solves many issues in the programming flow and helps developers create insanely powerful and fluent APIs. Here is a short list of some basic but modern features of the language:
No primitive types. In Dart, everything is an Object. Even bool, the equivalent of Java’s boolean, is an Object. A pure object-oriented programming language should be all about Objects; Java’s primitive types are just an implementation detail.
Way less verbose syntax. Creating a public constant known at compile time can be done using the const keyword: no more public static final. The public and private keywords are not part of the language. The visibility of a variable/method/class is based on its name: everything is public by default, and if the entity name starts with an underscore, it is private.
Named and factory constructors. Java requires constructors to be named after the class, say Rectangle(). If you create both a Rectangle(int left, int top, int right, int bottom) and a Rectangle(int left, int top, int width, int height), you end up with a compile-time error because Java uses parameter types to distinguish constructors. One way to solve this problem is to create a static factory method. Dart fixes it by allowing you to create named (and optionally factory) constructors.
Modern parameters passing: Dart supports positional and named parameters. They can also be optional and have default values.
And more: mixins, implicit interfaces, isolates (a simple concurrency model), etc.
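As an illustration (my own snippet, not taken from the Dart documentation), several of these features combine naturally in a few lines:

```dart
class Rectangle {
  final int left, top, width, height;

  // const: a compile-time constant constructor, no "public static final" dance.
  const Rectangle(this.left, this.top, this.width, this.height);

  // Named constructor: no collision with the default one despite
  // having the same parameter types.
  Rectangle.fromLTRB(int left, int top, int right, int bottom)
      : this(left, top, right - left, bottom - top);

  // Leading underscore: library-private, no "private" keyword needed.
  int get _area => width * height;
}
```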
Google and Oracle have been fighting about the use of Java on Android for a long time. The Google vs Oracle trial has probably been one of the most important trials in the recent history of computing. It looks like we are currently in an era of peace between these two mammoth companies… or maybe it is just a cold war. Needless to say, it is way too dangerous for Google to continue to rely on programming languages managed by competitors. They clearly can’t afford to remain vulnerable to threats of lawsuits.
In order to move forward, Google has to have complete control over the programming languages it uses. For instance, Java 8 has been a bit of a holy grail for a long time. We have heard a lot about it in the past, and a lot of features and enhancements have been postponed. Now Java 8 is here, with some modern features like closures. Chances are we won’t have them on Android for a long time…
By controlling the language it uses, Google can maintain and evolve the language seamlessly on all platforms (mobile, web, server) whenever necessary. Apple did, and still does, that perfectly with Objective-C and Swift. For instance, they introduced closures in Objective-C 3 years ago just by making the language evolve. More recently, ARC and literals were introduced. Apple controls the language: they can make it evolve, and they do so when appropriate. Google is in the same position with Dart, as stated on the Dart website:
Dart is an open-source project with contributors from Google and elsewhere.
In theory, this quote indicates everybody can participate in the language by accessing the source code or submitting patches and enhancements. In practice, just like with the Android Open Source Project (AOSP), only Google controls Dart, because it is the only company with enough resources to maintain the project and make it move forward. Personally, I am totally okay with Google controlling the language as long as it remains open source and they listen to other contributors.
By introducing Dart on Android, Google would fill the only remaining gap in the Google development ecosystem. Dart already lets developers create applications for the web and servers. Porting Dart to Android would be the final stone in the house. Indeed, it would make Dart the only programming language that runs on all major platforms: mobile, web and servers. One step closer to the “write once, run everywhere” motto all software companies want to achieve.
Finally, Dart is an opportunity to simplify Google’s offering of development languages. Indeed, Google has been working with many languages in the past: C++, Python, Java, JavaScript, etc. Dart could be the crossroads where all these languages combine, making Google products/services SDKs even more consistent and coherent.
Dart is awesome and has been out there for several years now. Unfortunately, it hasn’t yet reached the critical mass required to be considered an inevitable programming language in web development. This is mainly because web developers don’t seem to think Dart brings enough improvements to make switching worthwhile. Making Dart the default language on Android would be the best way to push the language to the next level and finally make it a first-class citizen in the landscape of programming languages.
Asking developers to use a brand new language is always a difficult move at first. However, it is way more motivating when you know the language you are about to learn can be used on other platforms. As an Android developer, I would be very happy to use Dart on Android and be able to easily create a small web site whenever I want.
Pushing Dart to Android obviously implies Google will have to work hard to solve issues such as performance, compatibility or interoperability. How to make sure Dart runs as efficiently as Java on constrained devices when it was originally developed for desktop web browsers? How to make sure new Dart-based apps run on a majority of devices: by embedding a Dart VM only on Android 4.5+1 devices, or by transpiling apps to dex with a dart2dex utility? How to let people use Java APIs from Dart and vice versa? All of these questions are difficult to solve, but that is where Google excels: finding smart solutions to overcome issues. If, just like me, you think Android should switch to Dart, you can star issue #19266 on the Dart bug tracker. Google I/O is around the corner and I’m waiting for the June 25th keynote to hear Google reveal everything about the future of Android. Google, it’s time to be brave. Let’s start a new journey by deprecating Java and unveiling Dart as the new programming language for Android apps.
In the past few months, I have been working on developing an Android application from the ground up. This app, named after the company, Capitaine Train, can be downloaded from the Google Play Store. Capitaine Train - which can literally be translated as “Captain Train” - is a 3-year-old startup born from a simple truth: getting train tickets in Europe was a pain in the ass. We, at Capitaine Train, aim to revolutionize the way people travel all around Europe by simplifying the overall train experience. The release of the Android application clearly represented an important step forward in this direction.
Trying to revolutionize the train experience in Europe is not easy. It requires us to achieve a tremendous amount of work: getting to know the various carriers, learning about the document/reservation requirements for each of them, integrating their price/time tables, binding our servers to their systems, etc. From a user point of view all of this is the hidden, but vital, part of the iceberg. Indeed, a travel need or desire starts from a simple search request: From where? To where? When? Who? Although these questions are simple, the search step is extremely important in the booking process. This is where the trip actually begins after all! We designed the Android app keeping this essential idea in mind by simplifying every bit of the process. In this article, I would like to tell you the story behind the implementation of the search experience in the Android app and how we used animations to enrich the user experience.
When I arrived at Capitaine Train to work on the Android application, I started looking at all of the ongoing UI-based projects. Some, such as the iOS app, were private but shaping up rapidly. Others, the web app for instance, were already public and rather well appreciated by our users. My main job, at that time, was to imagine an Android application that would make users feel they were using the best Android app out there to book train tickets. The app had to reflect both the Capitaine Train essence and the Android look ‘n feel. Because the web app was the only public app at the time, I obviously based most of my drafts on top of it. Here is what the search form looks like on capitainetrain.com1:
While the two-pane (search form + options) design works perfectly on desktop, we rapidly faced an issue on mobile: we did not have enough space to put both the form and the options panes on the same screen. Because mobile screens are small, we had no choice but to fall back on a master/detail pattern of some kind. Two well-known and simple options were available to us: the master/detail pattern and the edition dialogs pattern. But we were not satisfied by either. Indeed, dialogs completely break the user flow and would have been extremely annoying when filling at least 4 fields in the form (i.e. 4 dialogs). On the other hand, opening a fullscreen “option” Activity for each field edition would have lost the user in an extremely complex screen hierarchy and app structure. I seriously thought none of these patterns were effective, nor a good fit for the Capitaine Train Android app.
We definitely wanted to replicate the simplicity and obviousness of the desktop search, and we finally ended up with a nice approach. Rather than opening a modal screen for each edited form field, we managed to merge the form pane and the options pane into a single screen. By default, the application displays a search form with all of the available fields. Tapping a field switches the screen to an “edit mode” where the edited field remains visible on top while the rest of the form disappears to reveal the options available for the field. The video below shows an entire search flow use case:
The user flow demonstrated above works very nicely because of the transitions we designed. Indeed, none of this would have been usable without them2. Adding transitions into your application is the best way to enrich user experience by making your users understand the consequences of their actions. As Newton said, to every action there is a reaction: transitions explain what is between two UI states. They also reduce the impression of “stacking screens” when navigating from one screen to another. It makes the user feel the application is made of a single screen where UI elements animate to show and/or dismiss some parts of the app. In other words, transitions break barriers and transform app navigation into a natural flow.
Transitions are generally quick and barely noticeable. In order to better understand, create and/or reverse-engineer them, it is interesting to slow them down. If you are in control of the application’s code, you can obviously switch all animation durations to greater values. If you’re not, you can screencast the application and watch the resulting video frame by frame or in slow motion. Fortunately, Android comes with another extremely useful technique: a developer option called “Animator duration scale”. As its name states, this option scales all animation durations system-wide by the chosen factor.
In order to better understand what is happening when transitioning between the search form and the date/time edition mode, let’s use the aforementioned technique. The screencast below shows what the transition looks like at a 10x scale:
Looking at the slowed-down video, we can examine the edition mode transition in detail. More specifically, you may have noticed the final transition is actually divided into several sub-animations played in parallel with the exact same timing properties (duration, interpolator, etc.):
Composed together, the previously described sub-animations create the search form to edition mode transition. The counterpart transition (i.e. edition mode to search form) is not described here as it mainly consists of reversing the animations: unfocus, fadeInToTop, slideOutToBottom and unstickFrom.
Before deep-diving into the implementation details, it is important to point out that Capitaine Train for Android is compatible with Android 4.0+. I personally chose this minimum requirement in order to have full access to the ActionBar features as well as the new property-based animation framework. I obviously could have targeted a lower API level, but this would have implied multiple code paths (ActionBarCompat vs built-in ActionBar) and the use of support libraries (ActionBarCompat, NineOldAndroids, etc.). I clearly thought we couldn’t match our minimum quality requirements while targeting pre-4.0 Android releases. Finally, targeting older releases of Android wouldn’t have helped us reach our rather “tech-familiar” clients. As a side note, at the time of writing, more than 50% of our install base runs the latest version of Android (4.4), while the official Android dashboard indicates only 8.5%.
Implementing the entire search form flow was a nice challenge. Indeed, we wanted the application to run as well as possible on every device. Thus, we had to deal with a mammoth amount of screen sizes, densities and orientations. While this is generally not a problem at all on Android, it may start to become one when you create a fairly complex design. We mainly solved these issues by using a ScrollView as the root ViewGroup, using orientation-dependent field heights and developing orientation-dependent layouts (for instance, the date/time picker looks different in landscape).
From a developer point of view, the Capitaine Train search form is part of a quite complex Activity: the HomeActivity. HomeActivity is clearly the first and main screen of the application. It is where 80% of our trip information can be found. HomeActivity is built on top of a ViewPager featuring 3 Fragment-based pages: SearchFragment, CartFragment and TicketsFragment. Each of these Fragments is represented by a tab in the UI.
As you can easily understand, SearchFragment is where most of the code lies. SearchFragment is made of a fairly complex View hierarchy that can be reduced to a simple skeleton.
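The original 41-line layout did not survive formatting; here is a simplified skeleton consistent with the description below (attributes trimmed, ids taken from the text):

```xml
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <FrameLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content">

        <!-- The search form: origin, destination, dates, passengers, … -->
        <LinearLayout
            android:id="@+id/normal_mode_container"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:orientation="vertical" />

        <!-- Container the field-dependent options pane is added to -->
        <FrameLayout
            android:id="@+id/edit_mode_container"
            android:layout_width="match_parent"
            android:layout_height="wrap_content" />

    </FrameLayout>
</ScrollView>
```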
Basically, SearchFragment is made of two distinct layouts. The first one, @id/normal_mode_container, is the actual search form as you see it when opening the application, while the second one, @id/edit_mode_container, is a simple container the field-dependent options pane is added to.
Now that we know what the layout looks like, let’s finally focus on how the overall transition is performed. Whenever a field is tapped, SearchFragment adds (or replaces) a new Fragment in @id/edit_mode_container, switches the ActionBar to an ActionMode and starts animating to the “edition mode” using the animations described earlier. The newly added Fragment depends on the edit mode the user is entering: SuggestionsFragment, DateTimePickerFragment or PassengersFragment. Just like we can put Views inside ViewGroups, we can put Fragments inside other Fragments. Nested Fragments have been introduced in JellyBean MR2 and are a great way of making sure your code is safely modularized and maintainable3. Although nested Fragments are API 17+, they have been back-ported to API 4 and are available through the support library.
Animating the search form UI elements is done thanks to the property-based animation framework introduced in Android 3.0. Because we wanted a simple and fluent API, we used ViewPropertyAnimator. ViewPropertyAnimator lets you run optimized animations of select properties on View objects. However, ViewPropertyAnimator was not enough in some cases. Indeed, we sometimes had to manually compute the translation distance. For instance, the “focus” animation requires computing the distance from the tapped field’s top to the root container’s top. If the focused field were a direct child of the container, we could have used the getTop() method. Unfortunately, this was not always the case. Fortunately, the framework comes with some handy methods to offset View coordinates into an ancestor’s coordinate system. The trick consists of retrieving the View’s drawing rectangle (i.e. in its parent’s coordinate system) with View#getDrawingRect(Rect) and translating it into the ancestor’s coordinate system with ViewGroup#offsetDescendantRectToMyCoords(View, Rect). Note that you can decide whether to animate or not: animation-less transitions are used when restoring the UI state after a configuration change.
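The original listing was lost in formatting; here is a hypothetical reconstruction of the described “focus” technique (variable and constant names are assumed):

```java
// Offset the tapped field's drawing rect into the root container's
// coordinate system, then translate the field to the top of the container.
final Rect rect = new Rect();
field.getDrawingRect(rect);
container.offsetDescendantRectToMyCoords(field, rect);

if (animated) {
    field.animate()
         .translationY(-rect.top)
         .setDuration(ANIMATION_DURATION)
         .setInterpolator(INTERPOLATOR);
} else {
    // Animation-less path, e.g. when restoring state after a rotation
    field.setTranslationY(-rect.top);
}
```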
The fadeOutToBottom animation translates the View by half the height of @id/edit_mode_container. Note that precomputing this “half height” requires the entire View hierarchy to be laid out. In order to do so, Capitaine Train for Android relies on OnLayoutChangeListener and its onLayoutChange method.
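Again the listing is lost; a sketch of such a listener (the field name is assumed) could be:

```java
// Capture half the container height once it has been laid out.
editModeContainer.addOnLayoutChangeListener(new View.OnLayoutChangeListener() {
    @Override
    public void onLayoutChange(View v, int left, int top, int right, int bottom,
            int oldLeft, int oldTop, int oldRight, int oldBottom) {
        mEditModeHalfHeight = (bottom - top) / 2;
    }
});
```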
Animating the edition panel in is done with the slideInToTop animation.
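A hypothetical sketch of such an animation (names assumed, based on the half-height described above):

```java
// Slide the edition panel in from half its height below its final position.
editModeContainer.setTranslationY(mEditModeHalfHeight);
editModeContainer.animate()
        .translationY(0f)
        .setDuration(ANIMATION_DURATION)
        .setInterpolator(INTERPOLATOR);
```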
Finally, the stickTo animation consists of translating a gray bar according to the focused field’s bottom edge.
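A hypothetical sketch of stickTo (names assumed), using the same rect-offsetting trick as the focus animation:

```java
// Align the gray bar with the bottom edge of the focused field.
final Rect rect = new Rect();
focusedField.getDrawingRect(rect);
container.offsetDescendantRectToMyCoords(focusedField, rect);

grayBar.animate()
       .translationY(rect.bottom - grayBar.getTop())
       .setDuration(ANIMATION_DURATION)
       .setInterpolator(INTERPOLATOR);
```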
I have not explained how Capitaine Train for Android relies on ActionMode to switch the ActionBar to a contextual ActionBar. Doing so is fairly straightforward: you only have to rely on the ActionBar APIs. ActionModes are used extensively in SearchFragment in order to display a title and some optional actions that either describe or relate to the displayed options pane. For instance, when selecting passengers, the ActionBar displays a “Passengers” title and gives the user the opportunity to create new passengers.
When everything was finally working I started to take a closer look at how smooth the animations were. While animations were running almost correctly on a Nexus 5 running KitKat, I wasn’t satisfied at all when I switched to a plain old Galaxy Nexus running Android 4.3. Depending on the device, animations were sometimes janky, sometimes only lagging once, sometimes not janky at all. Investigating the code, I managed to tweak the animations a little and get an almost jank-free transition.
As described earlier, the search form transitions rely heavily on alpha animations. When switching from the normal mode to the edit mode, the edition pane fades in while some search form fields fade out at the same time. Because the system can’t directly draw alpha-animated elements on screen, it uses an offscreen buffer to render the frame and then draws that buffer on screen with the alpha value of the current interpolation. This offscreen rendering mechanism is a mandatory (at least 95% of the time; the other 5% are addressed by the View#hasOverlappingRendering() method) and expensive process.
In order to avoid offscreen rendering on each animation frame, you can enable hardware layers on the animated View hierarchy for the duration of the animation. Enabling hardware layers basically asks the system to render the View hierarchy into an offscreen layer that can be considered as a rasterized bitmap copy of the actual View. With hardware layers on, all subsequent View property changes (translation, alpha, scale, etc.) are forwarded directly to the layer itself rather than invalidating the whole View and redrawing it.
Due to the offscreen rendering phase, hardware layers are generally enabled only for the time frame of the animation. Indeed, keeping hardware layers on while a View invalidates itself requires the system to redraw its backing layer entirely before compositing it on screen. To prevent such a performance drop, we created a special AnimatorListenerAdapter.
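A minimal sketch of such a listener (class and field names assumed): it enables a hardware layer when the animation starts and restores the previous layer type when the animation ends.

```java
// Sketch only, not the actual Capitaine Train code: enables a hardware
// layer for the duration of an animation and restores the default
// layer type afterwards.
import android.animation.Animator;
import android.animation.AnimatorListenerAdapter;
import android.view.View;

public class LayerEnablingAnimatorListener extends AnimatorListenerAdapter {

    private final View mView;
    private int mLayerTypeToRestore;

    public LayerEnablingAnimatorListener(View view) {
        mView = view;
    }

    @Override
    public void onAnimationStart(Animator animation) {
        // Snapshot the View into an offscreen layer once; subsequent
        // property changes only re-composite the layer.
        mLayerTypeToRestore = mView.getLayerType();
        mView.setLayerType(View.LAYER_TYPE_HARDWARE, null);
    }

    @Override
    public void onAnimationEnd(Animator animation) {
        // Release the layer so later invalidations stay cheap.
        mView.setLayerType(mLayerTypeToRestore, null);
    }
}
```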
The LayerEnablingAnimatorListener is simply set as a listener to the ViewPropertyAnimators described above by calling setListener(AnimatorListener).
The early alpha (internal-only) releases of Capitaine Train were based on a calendar library from Square called TimeSquare. Although TimeSquare nicely fit our needs, it was also completely screwing our transitions up. Indeed, TimeSquare’s CalendarPickerView is a ListView made of several CalendarGridViews (months) containing several CalendarRowViews (weeks), in turn composed of several CalendarCellViews (days). Because of this complex View hierarchy, we were sometimes displaying more than 400 Views at once. Inflating such a huge amount of Views requires a lot of time we didn’t have. The first time the SuggestionsFragment was displayed, inflation took around 300ms on my Nexus 5, completely wasting the 333ms-long transition.
The trick here was simply to flatten the View hierarchy. We completely dropped TimeSquare and designed a calendar from scratch. The current CalendarView implementation is also based on a ListView, but one where each MonthView draws directly on the Canvas (i.e. a single View renders a complete month).
SearchFragment allows users to set 5 different search properties. Nested Fragments are all added to the FragmentManager in SearchFragment’s onCreate. As discussed earlier, inflating a View hierarchy can slow down the renderer while it waits for completion. We minimized this issue by simply reusing Fragments whenever possible. As a consequence, “From” and “To” both use the same instance of SuggestionsFragment, and “Depart” and “Return” both rely on the same instance of DateTimePickerFragment. In addition to reducing inflation pauses on the UI thread, it also reduced memory consumption.
Being somewhat of a maniac, I don’t consider the current public release of Capitaine Train perfect. I spent a lot of time tweaking the application prior to the initial release but couldn’t do everything I had in mind. Lack of time and startup reality just struck me. As an engineer, I simply did the best I could with the various constraints I had (time, design, code quality, performance, etc.). Here are some of the improvements I still have in mind to make things a little bit smoother:
- All nested Fragments are added in the SearchFragment onCreate method. When starting an edition mode, we show the corresponding Fragment. Internally, the system switches the Fragment’s visibility from GONE to VISIBLE. Because all nested Fragments use a ListView, a bunch of View inflation happens the first time a Fragment is shown. In fact, a ListView populates itself only after it has been laid out. We could force the ListView to inflate its items as soon as the field is touched by using MotionEvent.ACTION_DOWN instead of MotionEvent.ACTION_UP. This could save us the amount of time between these two events (around 40 to 60ms).
- SearchFragment makes extensive use of ViewPropertyAnimator. When transitioning to the edition mode, a bunch of ViewPropertyAnimators are started and run in parallel. We could prevent the animation system from managing all animations independently and instead use a single ValueAnimator of our own.

With the introduction of the new property-based animation framework and Fragments in Android 3.0, the framework provides developers with all the necessary tools to create wonderful and meaningful UIs while still keeping a maintainable and modularized code base. Animating Fragments is generally a single ViewPropertyAnimator API call away and may drastically improve the way users understand your application. Designing an application is not only about creating a nice static design. It is also about moving graphical elements in a way that is meaningful to users. Transitions both give life to an application and enrich the user experience.
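The single-ValueAnimator idea from the list above can be sketched as follows. This is an assumption about how it could be done, not shipped code; editionPane, searchField and slideDistance are hypothetical names:

```java
// Sketch only: drives several view properties from one ValueAnimator
// instead of starting one ViewPropertyAnimator per View.
ValueAnimator animator = ValueAnimator.ofFloat(0f, 1f);
animator.setDuration(333);
animator.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
    @Override
    public void onAnimationUpdate(ValueAnimator animation) {
        float fraction = (Float) animation.getAnimatedValue();
        // A single callback updates all animated Views at once.
        editionPane.setAlpha(fraction);
        searchField.setAlpha(1f - fraction);
        searchField.setTranslationY(-fraction * slideDistance);
    }
});
animator.start();
```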
1: Feel free to register and have fun with the Capitaine Train web application. Just like the Android app, it is available in English, French, German and Italian.
2: The best way to understand the importance of transitions is to disable them temporarily. You can do so by disabling animations system-wide in the developer settings. Open the Settings application, go to “Developer options” and set the “Animator duration scale” to “Animation off”. Note that it may be required to restart the application so that the setting takes effect.
3: Since their introduction, Fragments have been overwhelmingly used. They have also been overwhelmingly criticized for their complexity. Their lifecycle is extremely complex, they are quite verbose, they have several “modes” (created via code or via XML inflation), etc. Nested Fragments have been even more criticized. The purpose of this article is not to tell you how to develop your own application. Fragments and nested Fragments are complex indeed, but once you control and master them, you can start enjoying them. Using them is a great way to create independent portions of code inside your application.
Me not posting on my personal blog doesn’t mean I’m not active anymore. Even though I’m not publishing new articles here, I usually keep sharing some of my thoughts on UI, UX, mobile development – or more generally on all of the topics I love discussing – on some other web media. In this article, I would like to present the web media I appreciate and use regularly.
The main reason behind me using Twitter or Google+ rather than this blog is time. Indeed, blogging usually requires a lot of time if you want your article to be as perfect as you want it to be. I can assure you, I have spent way more time on writing, correcting and publishing some articles than I would have by publishing them on Twitter and/or Google+. To be honest, I also consider the resulting posts/tweets are way less professional or polished than they would have been on this blog.
Twitter is clearly the social network I love the most. First, it’s extremely simple to use. Secondly, it forces people to sum up their ideas because of the 140 characters limit. Finally, it lets people interact with other fellow Twitter users very easily. I regularly tweet and, if you do too, you may be interested in following me at @cyrilmottier. Here is an abstract of some “popular” tweets I published in the past few months:
- Charset.forName(String) and have a look at StandardCharsets (API 19) for common charsets #AndroidDev
- ListView#add[Header|Footer]View whenever you want! Call order is not important anymore. #AndroidDev
- // STOPSHIP comments are warned. #AndroidDev

Google’s social network is also a web medium I enjoy. It has one major (dis)advantage (the ‘dis’ addition depends on the point of view) over Twitter: it has no character limit. I usually use Google+ rather than Twitter when I want to talk about a topic that cannot fit in a 140-character tweet. Unfortunately, Google+ is clearly not dedicated to technical posts. The best evidence of that is how terribly code renders in Google+. As a consequence, I consider Google+ an intermediate medium between Twitter and my personal blog: it is nice to discuss some thoughts very quickly, but it is not polished enough to fulfill my requirements about the content I publish.
If you are on Google+ or are willing to create an account, you may find some UI/UX thoughts I shared on my +CyrilMottier account. For instance, I recently started a series of posts entitled “Android app polishing”. These posts give some insights on how we polished some parts of the Capitaine Train Android application:
Below is a list of some other older Google+ posts that may also interest Android developers:
I think I’ve given enough links in this post to demonstrate you can also follow me on some other web media than my personal blog. I know it’s not easy to keep up on things when they are not aggregated in a single place. However, I’m convinced most of my tweets and G+ posts would have no real place on this blog. Moreover, Twitter and Google+ are all about sharing present content. They inevitably bury the past by not providing an easy-to-browse history. Because of that, I have always preferred, and will always prefer, the blog medium. Believe me, chances are I will continue posting new articles here!
The release of Google Maps Android API v2 has been a huge step forward regarding map rendering capabilities on Android. The new framework has suffered, and still suffers, from some unpleasant drawbacks1. From a designer point of view, it has one main advantage over the previous framework: it is bundled with a bunch of default resources. In other words, Google Maps Android API v2 helps developers with no particular design skills to create nice-looking maps and improves design consistency across applications.
Even though the Google Maps Android API fulfills developers’ needs 90% of the time, there are still some cases where you might want to create your own graphic resources while still respecting the visual style of maps on Android. For instance, Google Maps Android API v2 does not allow developers to set the accent color of the “My Location”, “Zoom in” and “Zoom out” buttons when pressed. Thanks to this PSD, you can easily replicate the overall appearance and behavior of the original MapView controls. Hence, you can easily ensure your app’s pressed color is used throughout the entire UI.
Personally, I used this updated version of the PSD to create mockups for a future and groundbreaking version of an application of mine called AVélov. This resource helped me produce polished and realistic mockups I can directly use as a starting point for early-stage user testing.
As for the first version, I ensured the PSD respects a certain hygiene: made only of vector-based elements, sensibly layered, named and grouped, etc.2 The PSD uses a Nexus 4 form factor (1280x768 pixels, xhdpi, etc.) and has been created with Photoshop CS6 but should work properly with all recent versions of CS. Also note the following resources are licensed under the CC BY 3.0:
I really hope you’ll find this PSD useful to create stunning MapView-based applications.
Developing awesome apps requires energy, passion and commitment. But I also believe great applications come from great development environments. Indeed, I have always thought applications are images of the tools developers are using. A fast emulator means faster testing which, in turn, means more polished applications. In the same way, intuitive and user-friendly development tools instill in developers a sense of UI/UX design they can reflect in their products.
Put simply, I am convinced the quality of the apps we are creating is a direct consequence of the quality of the dev tools we are using (and vice versa). I have always been disappointed by how rustic and raw Android dev tools were, but it appears Google is now making a clear turn towards polished and productive dev tools. In addition to that, I am thrilled to see companies such as Genymobile help the entire ecosystem move in this direction.
The name Genymotion is probably completely unknown to you. But I assure you this won’t be the case for long. You will quickly learn to remember its name once you try it out. In a nutshell, Genymotion is a feature-complete replacement for the default Android emulator (which has become unusable due to its serious lack of performance with the latest versions of Android) that can be downloaded from the Genymotion website. The solution is based on VirtualBox and hence consists in virtualizing an Android device rather than emulating it.
I don’t consider myself a virtualization/emulation expert, but it looks like emulation is not an option anymore. As of today, the iOS SDK offers a simulator (apps are compiled to target the host architecture) and the latest Windows Phone SDK is bundled with a Windows Phone emulator which is actually a virtual machine. I have the feeling virtualization is the best solution to solve both the performance issues inherent to emulators and the “binary differences” simulators suffer from.
If you have already tried Genymotion, you already know the key difference between the default emulator, a hardware device and Genymotion: speed! Genymotion is extremely fast and makes Android development a pleasure. It relieves you from the burden of switching from your workstation to a hardware device for testing. Thanks to this amazing piece of software, everything happens on your own workstation.
In order to demonstrate how speedy Genymotion is, I ran a small project of mine with a few instrumentation tests on both my hardware device (a Nexus 4) and an instance of Genymotion running on my MacBook Pro. The outputs are just self-explanatory, running the instrumentation tests on Genymotion is at least 10 times faster than on a Nexus 4:
I first tried Genymotion when it was still known under the name AndroVM. At that time, the product was clearly a tool with great potential. But its lack of polish and its difficult setup made it a no-go in most Android development environments. The latest version of the software clearly demonstrates Genymobile (the company behind Genymotion) decided to push Genymotion to the next level.
After a nice new demo by the Genymotion team at Droidcon France, I gave it another try and, after a single hour playing with it and testing it, I decided to start using it every day when developing. Thanks to Genymotion, my personal hardware device is now almost only necessary for real-life/final development phase testing.
Speed is one of the most important aspects of Genymotion. However, it also offers some other nice features:
Let’s be clear: having a fast virtual device doesn’t mean you don’t have to test on real devices. It is a good opportunity to prototype and polish your applications quickly. However, real devices are still the best way to ensure everything runs smoothly in real life.
Genymotion still suffers from some minor bugs/missing features and from what I consider a disappointing UI1 (it’s still a beta after all). But its amazing speed, true potential and ease of integration into the development environment make it an obvious choice for your development toolkit. I personally know some of the people on the Genymotion team and I am confident about the future of the software. They work hard on improving a tool that is probably the biggest step forward in Android emulation/virtualization/simulation since Android was revealed in 2008.
About 5 years ago, I started developing my first Android app: a school project. At that time, the framework was only available as an early look and I chose Eclipse as my main IDE for two main reasons:
About 6 months ago, I switched to IntelliJ and more recently (since Google I/O 2013) to Android Studio. The reason behind the switch is pretty logical. Indeed, Eclipse was starting to be a pain in the ass for most of my projects. It was mainly too slow and crashed often. After almost 5 years using Eclipse, switching to IntelliJ/Android Studio was not easy, but it wasn’t painful either. The learning curve is quite gentle: it only took me a week to get used to the new shortcuts, the new look and feel and some of the main features of the IDE.
In addition to being fast and stable, Android Studio has several advantages over Eclipse you can discover either by downloading and playing with the software or by watching the Google I/O keynote and the “What’s new in Android development tools” session Tor Norbye & Xavier Ducrohet did.
At the time of writing, Android Studio is an “I/O preview” in version 0.1.6. While the version number may not suggest a final product, Android Studio stands on the shoulders of a giant. If you are not fond of Eclipse and are looking for something new, you should definitely give Android Studio a try.
At Google I/O, I wanted to learn about all the new improvements related to the Android development tools. I obviously attended all of the talks related to the new Gradle-based build system. Even though the Gradle plugin is still in development (0.4.3 at the time of writing), I already strongly encourage people to use it in their projects.
I recently switched all of my projects to the Gradle plugin. The main reason is that the new build system finally deals with all of the features an Android developer may need. Moreover, Android Studio works best with Gradle. Here again, the plugin is not perfect of course, but I am sure the Android tools team is working hard on polishing this new build system.
Here are some of the features I appreciate the most:
If you want to send feedback, report bugs, ask for help, etc., I encourage you to go to the adt-dev Google group. Chances are high that people like Xavier Ducrohet (a Google engineer on the Android tools team) will help you (provided you post complete and precise questions, of course). Here are some important links that helped me switch to the new Gradle-based build system:
All of the tools described in this post make the Android development environment more efficient and productive than ever. They are all works in progress, but they already demonstrate Android has a bright future ahead of it. New Google-powered tools such as Android Studio and the new Gradle build system, in addition to Genymobile’s insanely powerful Genymotion, are excellent initiatives to future-proof Android development tools.
If you are still thinking about what to do, here is my piece of advice: do not hesitate, stop thinking, revolutionize your Android development environment right now by switching to these awesome new tools and help them become even better than they already are.
While being understandable and self-explanatory, I seriously think Genymotion’s UI could be way simpler. The product is already awesome from a functional/feature point of view. Reducing friction due to the current ineffective UI would make the product truly revolutionary and ground-breaking.
Some APIs and/or frameworks such as Google Maps Android API v2 require an API key based on the key used to sign the APK. Sharing keys between developers at the project level may relieve you from the burden of managing several API keys.
ActionBar. Although I mentioned some of the effect’s possible applications, I never had time to effectively add an ActionBar animation to one of my own apps, nor did I see an application on the Play Store taking advantage of it.
While at Google I/O last week, I finally found an application using the ActionBar animation technique. Let’s be honest, it literally blew my mind the first time I saw it. I fell in love with this nice, subtle and yet extremely useful animated effect, probably more than with the entire app itself! I am pretty sure you know the application I am talking about as it was presented during the Google I/O keynote. You have also probably recently received an update of it: Play Music!
The latest update of Play Music (v5.0) has been completely redesigned and features a brand new artist/album detail screen. If you open such a detail screen, you’ll notice the ActionBar is initially invisible and overlaps a large image describing the artist/album. Once you start scrolling down (if possible), the ActionBar fades in gradually. The ActionBar turns completely opaque when the large image has been scrolled out of the screen.
Here are two main advantages of this ActionBar animation:
Polish the UI: animations synchronized on an element you’re interacting with are generally appreciated by users because they make the UI feel natural and reactive to their actions. The fading animation is a direct consequence of the per-pixel scrolling state and not a launched-once animation.
Take advantage of the screen real estate: while still preserving the UX of the platform, this pattern lets the user primarily focus on the content rather than the controls. Used in addition to a nicely designed screen, it can be a game changer for your app’s interface.
In this article, I will deep dive into the details of implementing the technique described in “ActionBar on the Move” to create an effect similar to the one used in the Play Music app.
In order to better understand the goal we are targeting, you can have a look at the screenshots below or alternatively download the sample application.
As you can easily notice, in order to reproduce such an effect, the ActionBar must overlap the content of the screen. This can easily be done using the android:windowActionBarOverlay XML attribute in the themes the Activity uses.
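A sketch of such a theme (theme and style names are assumptions, not the actual sample names):

```xml
<!-- res/values/themes.xml — sketch only, names assumed -->
<resources>
    <style name="Theme.MyApp" parent="@android:style/Theme.Holo.Light">
        <!-- Makes the ActionBar overlap the Activity content -->
        <item name="android:windowActionBarOverlay">true</item>
        <item name="android:actionBarStyle">@style/ActionBar.MyApp</item>
    </style>
</resources>
```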
Pretty logically, the style of the ActionBar is defined in values/styles.xml.
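A hedged sketch of that style (the name and the exact background drawable are assumptions; the important part is that the background can be made transparent and have its alpha animated at runtime):

```xml
<!-- res/values/styles.xml — sketch only, names assumed -->
<resources>
    <style name="ActionBar.MyApp" parent="@android:style/Widget.Holo.Light.ActionBar">
        <!-- An initially transparent background whose alpha is
             animated from the scroll listener at runtime -->
        <item name="android:background">@drawable/ab_background</item>
    </style>
</resources>
```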
Finally, we can use these themes to style our Activity.
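Applying the theme boils down to a manifest entry like the following (activity and theme names assumed):

```xml
<!-- AndroidManifest.xml (fragment) — sketch only, names assumed -->
<activity
    android:name=".MainActivity"
    android:theme="@style/Theme.MyApp" />
```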
Note that by using themes/styles we remove all potential flickering issues at startup (see Android App Launching Made Gorgeous for more information).
As explained previously, the ActionBar fading is synchronized on the per-pixel scrolling state of the scrolling container. In this example, we’ll simply use a ScrollView as the scrolling container. One of the major drawbacks of this container is that you can’t register a listener to be notified when the scroll has changed. This can easily be done by creating a NotifyingScrollView extending the original ScrollView.
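A sketch of such a subclass (the listener interface name is an assumption): it simply overrides onScrollChanged(int, int, int, int) and forwards the values to an optional listener.

```java
// Sketch only: a ScrollView that exposes scroll changes to a listener.
import android.content.Context;
import android.util.AttributeSet;
import android.widget.ScrollView;

public class NotifyingScrollView extends ScrollView {

    public interface OnScrollChangedListener {
        void onScrollChanged(ScrollView who, int l, int t, int oldl, int oldt);
    }

    private OnScrollChangedListener mOnScrollChangedListener;

    public NotifyingScrollView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public void setOnScrollChangedListener(OnScrollChangedListener listener) {
        mOnScrollChangedListener = listener;
    }

    @Override
    protected void onScrollChanged(int l, int t, int oldl, int oldt) {
        super.onScrollChanged(l, t, oldl, oldt);
        if (mOnScrollChangedListener != null) {
            mOnScrollChangedListener.onScrollChanged(this, l, t, oldl, oldt);
        }
    }
}
```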
Then, we can use this new scrolling container in an XML layout.
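A minimal sketch of such a layout (ids, dimensions and the package of NotifyingScrollView are assumptions): the header image sits at the top of the scrolling content, overlapped by the transparent ActionBar.

```xml
<!-- Sketch only, names assumed -->
<com.example.NotifyingScrollView
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/scroll_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="vertical">

        <ImageView
            android:id="@+id/image_header"
            android:layout_width="match_parent"
            android:layout_height="@dimen/header_height"
            android:scaleType="centerCrop"
            android:src="@drawable/header" />

        <!-- The actual screen content goes here -->

    </LinearLayout>

</com.example.NotifyingScrollView>
```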
Now that most of the boilerplate is ready, we can plug all of these components together. The ActionBar algorithm is rather simple and only consists in computing the alpha depending on the current per-pixel scrolling state of the NotifyingScrollView. Note that the effective scrolled distance must be clamped to [0, image_height - actionbar_height] in order to avoid weird values that may occur, mainly because of the default over-scroll behavior of scrolling containers on Android.
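The essence of that computation can be sketched as a small, framework-free helper (the class and method names are mine): the scrolled distance is clamped to [0, headerHeight - actionBarHeight] and mapped linearly to an alpha in [0, 255].

```java
public class ActionBarAlphaHelper {

    // Maps the raw scroll position to an alpha value in [0, 255]:
    // 0 (fully transparent) at the top of the content, 255 (fully
    // opaque) once the header image has scrolled under the ActionBar.
    public static int alphaForScroll(int scrollY, int headerHeight, int actionBarHeight) {
        int maxScroll = headerHeight - actionBarHeight;
        // Clamping discards the weird values produced by over-scroll.
        int clamped = Math.max(0, Math.min(scrollY, maxScroll));
        return (int) (255f * clamped / maxScroll);
    }
}
```

From the scroll listener, the resulting value would then be applied to the ActionBar background with something like setAlpha(int) on the background Drawable.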
As described in “ActionBar on the Move”, the snippet of code above doesn’t work on pre-JELLY_BEAN_MR1 devices. Indeed, the ActionBar isn’t invalidating itself when required because it isn’t registering itself as the Drawable’s callback. You can work around this issue simply by attaching a Callback to the background Drawable in the onCreate(Bundle) method.
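A hedged sketch of that workaround (field names assumed; the drawable is the one used as the ActionBar background):

```java
// Sketch only: on pre-JELLY_BEAN_MR1 devices the ActionBar does not
// register itself as its background Drawable's callback, so alpha
// changes never trigger an invalidation. Registering our own callback
// works around the issue.
private final Drawable.Callback mDrawableCallback = new Drawable.Callback() {
    @Override
    public void invalidateDrawable(Drawable who) {
        // Re-setting the background forces the ActionBar to redraw.
        getActionBar().setBackgroundDrawable(who);
    }

    @Override
    public void scheduleDrawable(Drawable who, Runnable what, long when) {
    }

    @Override
    public void unscheduleDrawable(Drawable who, Runnable what) {
    }
};

// In onCreate(Bundle):
// if (Build.VERSION.SDK_INT < Build.VERSION_CODES.JELLY_BEAN_MR1) {
//     mActionBarBackgroundDrawable.setCallback(mDrawableCallback);
// }
```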
You can already run the code “as is”. Although the result looks like the animation used in Play Music, we can still continue to tweak it to make it better.
Having a transparent ActionBar may lead to design issues because you generally don’t know the background it will be displayed on top of. For instance, you may end up with a transparent ActionBar displaying white text on top of a white description image. Needless to say, this makes the ActionBar invisible and useless.
The easiest way to avoid such a problem consists in modifying the image to make it a little bit darker at the top. Thus, in a worst-case scenario (i.e. a white image) we would have a grey area on top of the image keeping the ActionBar content (title, icons, buttons, etc.) visible.
A simple way to do that is to overlay a translucent dark-to-transparent gradient on top of the image. This can be done in XML only, with a simple gradient Drawable.
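A sketch of such a Drawable (file name and exact colors assumed; angle 270 places the start color at the top):

```xml
<!-- res/drawable/gradient_header.xml — sketch only: a translucent
     black-to-transparent vertical gradient -->
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="rectangle">
    <gradient
        android:angle="270"
        android:startColor="#66000000"
        android:endColor="@android:color/transparent" />
</shape>
```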
The gradient is overlaid using a wrapping FrameLayout.
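The wrapping can be sketched as follows (ids, dimensions and drawable names assumed): the FrameLayout stacks a gradient-backed View on top of the header image.

```xml
<!-- Sketch only, names assumed -->
<FrameLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="@dimen/header_height">

    <ImageView
        android:id="@+id/image_header"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:scaleType="centerCrop"
        android:src="@drawable/header" />

    <View
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:background="@drawable/gradient_header" />

</FrameLayout>
```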
In Gingerbread (API 9), Android introduced a brand new way to notify the user that a scrollable container is being scrolled beyond its content bounds. It introduced the notion of EdgeEffect (publicly available in the API starting with API 14) and enabled over-scroll. While this is not a problem in general, it can be pretty annoying when one of the edges of your scrollable content differs from the background color.
You can reproduce it by simply flinging the ScrollView rapidly to the top: you’ll notice some white color (the background color) appears at the top of the screen because the image is scrolling beyond the bounds. I personally consider this a UI glitch and usually prefer disabling it in these rare cases.
One could imagine the best way to avoid over-scroll is to use View#setOverScrollMode(int) to change the mode to View#OVER_SCROLL_NEVER. Although it works, it also removes the edge effect, which can be visually disturbing1. A simpler way is to modify the NotifyingScrollView to force the maximum over-scroll values to zero when necessary.
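In NotifyingScrollView, this can be sketched by overriding overScrollBy(…) and passing zero as the maximum vertical over-scroll distance, which keeps the edge effect but suppresses the over-scroll offset:

```java
// Sketch only: forces the maximum vertical over-scroll distance to
// zero while leaving the edge effect untouched.
@Override
protected boolean overScrollBy(int deltaX, int deltaY, int scrollX, int scrollY,
        int scrollRangeX, int scrollRangeY, int maxOverScrollX, int maxOverScrollY,
        boolean isTouchEvent) {
    return super.overScrollBy(deltaX, deltaY, scrollX, scrollY,
            scrollRangeX, scrollRangeY, maxOverScrollX,
            0 /* no vertical over-scroll */, isTouchEvent);
}
```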
I seriously don’t know if the team behind the Play Music application decided to implement the behavior based on my article. But it appears they brilliantly used the technique to both polish and emphasize the UI. It is clearly an awesome pattern to use whenever you need to design a screen whose content is self-explanatory and more important than the ActionBar content itself.
(Un)fortunately - the addition/removal of the ‘un’ obviously depends on the point of view - Google released a radically different and new version of the library in December 2012. In addition to this release, they announced the deprecation of the first version of the API as of March 20131. Let’s be honest, at first I was pretty annoyed by this new release because it turned almost all of my work into a waste of time. On the other hand, I was quite happy to notice the new API was really close - functionally and API-wise - to what I did on my own with Polaris.
Back in December I gave my point of view about the new Google Maps Android API v2. After almost 6 months, the library has been updated only once (as part of Google Play Services, it was supposed to be updated very often…) and is not a great base for building libraries on top of (all of the classes are final and hence cannot be extended).
With the release of AVélov 1.2, a lot of people were interested in the animated clustering algorithm I developed. Several companies asked me for the app’s source code and I was quite frustrated not being able to deliver a true library but only sample code. That was true until I decided to find a way around Google’s locked-down library. I finally managed to bypass the ‘final’ limitation fairly easily: I entirely wrapped the original Google library into my own library, Polaris v2.
The main purpose of Polaris v2 is to act as the root component for creating library projects around the Google Maps Android API v2. Although I originally developed it for a commercial library providing animated clustering, I extracted the essence of it and kept some basic features. As a consequence, Polaris v2 aims to fix some of the most frustrating bugs of the original library and provide additional features.
For now, the code is mostly a wrapper but I’m releasing and open-sourcing the code so that the community can contribute to it and enhance it with some awesome new features and fixes. The current release includes just a few improvements (see the README file on GitHub for more information). Hopefully, some of the changes introduced in Polaris v2 will be backported into Google Maps Android API v2…
Using Polaris v2 in your projects is quite simple. The API exposed by Polaris v2 is a super-set of what the original Google Maps Android API v2 exposes. As a result, you only need to switch all of the imports from the Google Maps Android API v2 (com.google.android.gms.maps.*) to Polaris v2 (com.cyrilmottier.polaris2.maps.*).
Finally, you can start interacting with the underlying GoogleMap by calling getPolarisMap() (instead of getMap()) on your SupportMapFragment, MapFragment or MapView.
While the original project at the root of Polaris was more featureful, this release is the perfect way for developers to start adding some missing features to the original Google-provided mapping library. Today, Polaris v2 is just a wrapper around the Google Maps Android API v2 but could easily become a must-use library in the future. The project is just waiting for the community to help it grow.
3 or 4 months ago I received an email from my host telling me they were dropping support for my old version of PHP (5.2). The version of Wordpress used for my android.cyrilmottier.com blog was so old it didn’t support PHP 5.4 and couldn’t even upgrade itself to a PHP 5.4-compliant version… Not being a server/backend guy, I decided to go for a complete redesign of both the guts and the look of cyrilmottier.com.
The new version of the website now uses Octopress. Put simply, Octopress is a static website generator. Some of the main advantages of Octopress over Wordpress are that it requires no SQL and no PHP, is responsive by default, deals perfectly with code snippets and relieves me from the pain of updating stuff I don’t know well. The only piece of software Octopress requires is … Apache.
The cyrilmottier.com domain was previously redirecting to a landing page made of tons of redirections to subdomains. Whilst it keeps clear sections in your website, it is hard to maintain and requires time I don’t have (nor want to spend on it). As a result, I decided to go for something way simpler:
android.cyrilmottier.com is now cyrilmottier.com
In the process, I removed several subdomains and redirections I considered outdated and useless:
Aside from removing a bunch of subdomains, I also created a theme from scratch. Once again, the idea behind the ‘Carrot’ theme (don’t ask me why I decided to name it that way) is to make things simple and remove all distracting and useless info, just like the Android Holo style does. I really think I managed to focus on the content by getting rid of, or minimizing, all secondary information.
As a mobile UI/UX engineer and designer, I wanted to finally have a website that looked like what I would have done if my website were an Android app. You can now notice the new website is layout and density1 responsive. As a result, it renders like magic on phones as well as on tablets and desktops.
Static websites don’t let you create dynamic algorithms… After all that’s what “static” means :). In order to keep features such as comments or search I used a plain old design pattern in computer science: delegation. Starting from now, comments are managed by Disqus2, and search is done by the best search-engine in the world: Google.
Meaningless URLs are now over. When creating android.cyrilmottier.com I wanted to have short URLs to blog posts. The only purpose of this was to avoid the use of URL shorteners on social networks such as Twitter. However, using short URLs is far from great for search engine optimization (SEO). Now that Twitter automatically shortens URLs, I’ve decided to switch to a more standard URL model:
http://cyrilmottier.com/<year>/<month>/<day>/<post-slug>.html
The previous version of android.cyrilmottier.com was great for accessing new content but it wasn’t easy to browse the entire content of the blog. The new version now includes an “Archives” page you can use to easily browse old posts.
I have spent quite a lot of time redesigning this domain, but it was worth it. I learned tons of new stuff and I loved it. From a user point of view, everything should work seamlessly. However, as I said, I’m not a server administrator nor a backend guy, so please contact me if you think something is wrong.
One of the redirections that scares me the most is the RSS feed (R.I.P. Google Reader…). The URL (from http://android.cyrilmottier.com/?feed=rss2 to http://cyrilmottier.com/atom.xml) as well as the format (from RSS2 to ATOM) have changed, so please make sure your RSS reader now points to http://cyrilmottier.com/atom.xml.
I’m only talking about the theme. Indeed, images from posts are not density-responsive for one single reason: neither Octopress nor HTML handles it correctly.
Disqus comments are not activated nor migrated for now but I plan to do it as soon as possible.