Bump’s Journey into Delightful Experiences on Android

This article was initially published on the official Android Developers blog. As the original author, I am republishing it here for readability and long-term availability.


At amo, we are redefining what it means to build social applications. Our mission is to create a new kind of social company, one that prioritizes high-quality, thoughtfully designed mobile experiences. One of our flagship applications, Bump, puts your friends on the map — whether you’re checking in on your crew or making moves to meet up.

Our app leverages multiplatform technologies for its foundation. At the core lies a shared Rust-based library that powers all of our iOS and Android apps. This library, managed by our backend engineers, is responsible for persistence and networking, and it exposes its APIs as Kotlin Flows. In addition to making everything reactive and realtime-enabled by default, this integrates effortlessly with Jetpack Compose, the technology we use to build our UI. This architecture ensures a consistent, high-performance experience across platforms. It also lets mobile engineers spend more time on the user experience, focusing on crafting innovative and immersive interactions.
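
To make this concrete, here is a minimal sketch, with hypothetical names (FriendsRepository, FriendLocation, FriendsSummary), of how a Flow exposed by such a shared library plugs directly into Compose:

import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.lifecycle.compose.collectAsStateWithLifecycle
import kotlinx.coroutines.flow.Flow

// Hypothetical surface of the shared Rust-backed library.
interface FriendsRepository {
    fun friendLocations(): Flow<List<FriendLocation>>
}

data class FriendLocation(val userId: String, val lat: Double, val lng: Double)

@Composable
fun FriendsSummary(repository: FriendsRepository) {
    // Every update pushed by the shared core becomes Compose state,
    // so the UI is reactive and realtime-enabled by default.
    val friends by repository.friendLocations()
        .collectAsStateWithLifecycle(initialValue = emptyList())
    Text("${friends.size} friends sharing their location")
}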

In this post, we will explore how we leverage the Android SDK, Jetpack Compose, the Kotlin programming language, and Google Play Services to build unique, delightful experiences in Bump. Our goal is to break mental barriers and show what’s truly possible on Android — often in just a few lines of code. We want to inspire engineers to think beyond conventional UI paradigms and explore new ways to create magical moments for users. By the end of this article, you’ll have a deeper understanding of how to harness Android’s capabilities to build experiences that feel like magic.

At amo, we ship, learn, and iterate rapidly across our feature set. That means that the design of some of the features highlighted in this article has already changed, or will do so in the coming weeks and months.

Designing for the senses vs designing for the eyes

Great touch-based UX isn’t just about flashy visuals. It’s about delivering meaningful feedback through graphics, haptics, sounds, and more. Said differently, it is about designing for all the senses, not just for the eyes. We take this very seriously when designing applications and always consider each of these dimensions.

One example is our in-app notification center. The notification center is a visual entry point, accessible from anywhere in the app, that shows all of your notifications from the entire amo suite of apps. It can be moved anywhere on the screen, and its style changes regularly thanks to in-residence or external artists. But styling doesn’t stop at the visual level; we also style it at the audio level: when it is dragged around, a short, repeating sound plays. Remember to turn the sound on for the full experience:

To make it fun and joyful, we pushed this even further and let the user be a DJ. The volume, speed, and pitch of the audio change depending on where the user drags it: it is a “be your own DJ” moment. The implementation of this experience can be split into two parts: the first deals with the audio, and the second handles the rendering of the entry point and its interactions (dragging, tapping, etc.).

Let’s first dive into the code handling the audio. It consists of a Composable requiring a URL pointing to the music, a flag indicating whether it should play (true only while dragging), and a two-dimensional offset: the Y axis controls the playback speed, while the X axis controls the pitch and volume.

@Composable
fun GalaxyGateAccessPointMusicPlayer(
    musicUrl: String,
    isActive: Boolean,
    offset: Offset,
) {
    val audioPlayer = rememberAudioPlayer(
        uri = Uri.parse(musicUrl),
    )
    LaunchedEffect(audioPlayer, isActive) {
        if (isActive) {
            audioPlayer.play(isLooped = true)
        } else {
            audioPlayer.pause()
        }
    }

    SideEffect {
        // The vertical position drives the playback speed; the horizontal position drives the pitch.
        audioPlayer.setSpeedPitch(
            speed = 0.75f + offset.y * 0.5f,
            pitch = offset.x + 0.5f
        )
        // The horizontal position also cross-fades the two volume channels: centered keeps
        // both at full volume, while dragging toward an edge mutes one of them.
        audioPlayer.setVolume(
            (1.0f - ((offset.x - 0.5f) * 2f).coerceIn(0f, 1f)),
            (1.0f - ((0.5f - offset.x) * 2f).coerceIn(0f, 1f)),
        )
    }
}

@Composable
fun rememberAudioPlayer(
    uri: Uri,
): AudioPlayer {
    val context = LocalContext.current
    val lifecycle = LocalLifecycleOwner.current.lifecycle
    return remember(context, lifecycle, uri) {
        DefaultAudioPlayer(
            context = context,
            lifecycle = lifecycle,
            uri = uri,
        )
    }
}

DefaultAudioPlayer is just an in-house wrapper around ExoPlayer (provided by Jetpack Media3) that deals with initialization, lifecycle management, fading when starting/stopping music, and so on. It exposes two methods, setSpeedPitch and setVolume, which delegate to the underlying ExoPlayer.
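
For readers who have never wrapped ExoPlayer, here is a minimal sketch, not amo’s actual DefaultAudioPlayer, of how such delegation could look with Media3 (the hypothetical SimpleAudioPlayer simplifies the per-channel volume handling to a single master volume):

import android.content.Context
import android.net.Uri
import androidx.media3.common.MediaItem
import androidx.media3.common.PlaybackParameters
import androidx.media3.common.Player
import androidx.media3.exoplayer.ExoPlayer

class SimpleAudioPlayer(context: Context, uri: Uri) {
    private val player = ExoPlayer.Builder(context).build().apply {
        setMediaItem(MediaItem.fromUri(uri))
        prepare()
    }

    fun play(isLooped: Boolean) {
        player.repeatMode = if (isLooped) Player.REPEAT_MODE_ONE else Player.REPEAT_MODE_OFF
        player.play()
    }

    fun pause() = player.pause()

    fun setSpeedPitch(speed: Float, pitch: Float) {
        // PlaybackParameters adjusts tempo and pitch independently of each other.
        player.playbackParameters = PlaybackParameters(speed, pitch)
    }

    fun setVolume(left: Float, right: Float) {
        // Simplification: ExoPlayer exposes a single master volume, so we
        // approximate the two-channel cross-fade with the louder of the two.
        player.volume = maxOf(left, right)
    }

    fun release() = player.release()
}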

By combining gesture readings with audio pitch, speed, and volume, we added delight and surprise where users least expected it.

Highly Unique Experiences with Animations and Shaders

We named our application “Bump” as a nod to its core feature: people close to each other can “bump” their phones together. If they are not registered as friends on the app, bumping automatically sends a friend request. If they already are, a mesmerizing animation plays. We also notify mutual friends that the two of them “bumped” and invite them to join.

This Bump feature is central to the app’s experience. It stands out in its interaction, functionality, and the unique value it provides. To express its importance, we wanted the feature to have a distinctive visual appeal. Here’s a glimpse of how it currently looks in the app:

There is a lot going on in this video, but it boils down to three animations: the “wave” animation when the device detects a local bump/shake, the animation showing the two friends bumping, and finally a “ring pulsing” animation. While the second animation is plain Compose, the other two are custom. Creating such custom effects involved venturing into what is often considered “unknown territory” in Android development: custom shaders. While daunting at first, it is actually quite accessible and unlocks immense creative potential for truly unique experiences.

Simply put, shaders are small, highly parallelizable programs: each one runs once per pixel, every frame. This might sound intense, but it is precisely where GPUs excel. Since Android 13, shaders are first-class citizens thanks to AGSL and RuntimeShader, for both Views and Compose.
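
For illustration only, here is roughly what that first-class path looks like on API 33+: an AGSL shader wrapped in a RuntimeShader and applied to a composable through a RenderEffect (the WaveEffect composable and the shader itself are hypothetical). As explained next, this is not the route Bump takes.

import android.graphics.RenderEffect
import android.graphics.RuntimeShader
import androidx.compose.foundation.layout.Box
import androidx.compose.runtime.Composable
import androidx.compose.runtime.remember
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.asComposeRenderEffect
import androidx.compose.ui.graphics.graphicsLayer

// AGSL: runs once per pixel, displacing the sampling position with a sine wave.
private const val WAVE_SHADER = """
    uniform shader content;
    uniform float time;
    half4 main(float2 coord) {
        float2 displaced = coord + float2(10.0 * sin(coord.y / 40.0 + time), 0.0);
        return content.eval(displaced);
    }
"""

// Requires API 33+ (Android 13).
@Composable
fun WaveEffect(time: Float, modifier: Modifier = Modifier, content: @Composable () -> Unit) {
    val shader = remember { RuntimeShader(WAVE_SHADER) }
    Box(
        modifier = modifier.graphicsLayer {
            shader.setFloatUniform("time", time)
            renderEffect = RenderEffect
                .createRuntimeShaderEffect(shader, "content")
                .asComposeRenderEffect()
        }
    ) {
        content()
    }
}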

Since our app requires a minimum of API 30 (Android 11), we opted for a more traditional approach using a custom OpenGL renderer.

We extract a Bitmap of the view we want to apply the effect to, pass it to the OpenGL renderer, and run the shader. While this method ensures backward compatibility, its main drawback is that it operates on a snapshot of the view hierarchy throughout the animation. As a result, any changes occurring on the view during the animation aren’t reflected on screen until the animation concludes. Keen observers might notice a slight glitch at the end of the animation when the screenshot is removed and normal rendering resumes.
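
A simplified sketch of that snapshot-and-upload step (using androidx.core’s drawToBitmap; the hypothetical snapshotForEffect and uploadAsTexture helpers omit the actual renderer setup):

import android.graphics.Bitmap
import android.opengl.GLES20
import android.opengl.GLUtils
import android.view.View
import androidx.core.view.drawToBitmap

// Freeze the target view into a Bitmap; the GL effect operates on this snapshot,
// which is why UI changes during the animation only show up once it ends.
fun View.snapshotForEffect(): Bitmap = drawToBitmap(Bitmap.Config.ARGB_8888)

// Upload the snapshot as a texture for the custom renderer (assumes a current GL context).
fun uploadAsTexture(bitmap: Bitmap): Int {
    val ids = IntArray(1)
    GLES20.glGenTextures(1, ids, 0)
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0])
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR)
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR)
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0)
    return ids[0]
}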

Bringing Friends to Life on the Map

In our apps, profile pictures are a bit different. Instead of static images, you record a live profile picture, a transparent, animated boomerang-like cutout. This approach feels more personal, as it literally brings your friends to life on-screen. You see them smiling or making faces, rather than just viewing curated or filtered photos from their camera roll.

From a product perspective, this feature involves two key phases: the recording and the rendering. Before diving into these specific areas, let’s discuss the format we use for data transport between mobile devices (Android & iOS) and the server. To optimize bandwidth and decoding time, we chose the H.265 HEVC format in an MP4 container and perform the face detection on device. Most modern devices have hardware decoders, making decoding incredibly fast. Since cross-platform videos with transparency aren’t widely supported or optimized, we developed a custom in-house solution. Our videos consist of two “planes”:

  • The original video on top
  • A mask video at the bottom

We haven’t yet optimized this process. Currently, we don’t pre-apply the mask to the top plane. Doing so could reduce the final encoded video size by replacing the original background with a plain color.

This format is fairly effective. For instance, the video above is only 64KB. Once we aligned all mobile platforms on the format for our animated profile pictures, we began implementing it.

Recording a Live Profile Picture

The first step is capturing the video, which is handled by Jetpack CameraX. To provide users with visual feedback, we also utilize ML Kit Face Detection. Initially, we attempted to map detected facial expressions (such as eyes closed or smiling) to a 3D model rendered with Filament. However, achieving real-time performance proved too challenging for the timeframe we had. We instead decided to detect the face contour and to move a default avatar image on the screen accordingly.
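
As a rough sketch (not the production pipeline), asking ML Kit for a face and moving an avatar to its center can look like this; the trackFace helper and its callback are hypothetical:

import android.graphics.Bitmap
import android.graphics.PointF
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions

// Configure a fast detector with face contours enabled.
private val faceDetector = FaceDetection.getClient(
    FaceDetectorOptions.Builder()
        .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
        .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
        .build()
)

fun trackFace(frame: Bitmap, onFaceCenter: (PointF) -> Unit) {
    faceDetector.process(InputImage.fromBitmap(frame, /* rotationDegrees = */ 0))
        .addOnSuccessListener { faces ->
            // Move the avatar overlay to the center of the first detected face, if any.
            faces.firstOrNull()?.boundingBox?.let { box ->
                onFaceCenter(PointF(box.exactCenterX(), box.exactCenterY()))
            }
        }
}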

Once the recording is complete, Jetpack CameraX provides a video file containing the recorded sequence. This marks the beginning of the second step. The video is decoded frame by frame, and each frame is processed using ML Kit Selfie Segmentation. This API segments the person from the background in the input image (our frames) and produces an output mask of the same size. Next, a composite image is generated, with the original video frame on top and the mask frame at the bottom. These composite frames are then fed into an H.265 video encoder. Once all frames are processed, the video meets the specifications described earlier and is ready to be sent to our servers.
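
The segmentation step, roughly (a simplified, single-image sketch with a hypothetical segmentFrame helper; the real pipeline works on decoded video frames):

import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.segmentation.Segmentation
import com.google.mlkit.vision.segmentation.selfie.SelfieSegmenterOptions
import java.nio.ByteBuffer

private val segmenter = Segmentation.getClient(
    SelfieSegmenterOptions.Builder()
        .setDetectorMode(SelfieSegmenterOptions.SINGLE_IMAGE_MODE)
        .build()
)

fun segmentFrame(frame: Bitmap, onMask: (mask: ByteBuffer, width: Int, height: Int) -> Unit) {
    segmenter.process(InputImage.fromBitmap(frame, 0))
        .addOnSuccessListener { result ->
            // The mask buffer holds one foreground-confidence value (0..1) per pixel,
            // at the same size as the input frame; it becomes the bottom "plane".
            onMask(result.buffer, result.width, result.height)
        }
}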

While the process could be improved with better interframe selfie segmentation, usage of depth sensors, or more advanced AI techniques, it performs well and has been successfully running in production for over a year.

Rendering Your Friends on the Map

Playing back animated profile pictures presented another challenge. The main difficulty arose from what seemed like a simple product requirement: displaying 10+ real-time moving profile pictures simultaneously on the screen, animating in a back-and-forth loop (similar to boomerang videos). Video decoders, especially hardware ones, excel at decoding videos forward. However, they struggle with reverse playback. Additionally, decoding is computationally intensive. While decoding a single video is manageable, decoding 10+ videos in parallel is not. Our requirement was akin to wanting to watch 10+ movies simultaneously on your favorite streaming app, all in reverse mode. This is an uncommon and unique use case.

We overcame this challenge by trading computational needs for increased memory consumption. Instead of continuously decoding video, we opted to store all frames of the animation in memory. The video is a 30fps, 2.5-second clip (75 frames) with a resolution of 256x320 pixels and transparency, which results in roughly 75 × 256 × 320 × 4 bytes ≈ 24MB of memory per video. A queue-based system handling decoding requests sequentially can manage this efficiently. For each request, we:

  • Decode the video frame by frame using Jetpack Media3 Transformer APIs
  • For each frame:
    • Apply the lower part of the video as a mask to the upper part (sketched after this list).
    • Append the generated Bitmap to the list of frames.
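
A minimal, CPU-only sketch of that mask-application step (the production path is more optimized, but the idea is the same; applyMask is a hypothetical helper):

import android.graphics.Bitmap
import androidx.core.graphics.get
import androidx.core.graphics.set

// Turn one composite frame (original video on top, grayscale mask at the bottom)
// into a transparent Bitmap, pixel by pixel. A real implementation would work on
// pixel buffers or on the GPU rather than per-pixel get/set.
fun applyMask(composite: Bitmap): Bitmap {
    val width = composite.width
    val height = composite.height / 2 // top half = image, bottom half = mask
    val result = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    for (y in 0 until height) {
        for (x in 0 until width) {
            val color = composite[x, y]
            val alpha = composite[x, y + height] and 0xFF // grayscale mask -> alpha
            result[x, y] = (alpha shl 24) or (color and 0x00FFFFFF)
        }
    }
    return result
}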

Upon completing this process, we obtain a List<Bitmap> containing all the ordered, transformed (mask-applied) frames of the video. To animate the profile picture in a boomerang manner, we simply run a linear, infinite transition. This transition starts from the first frame, proceeds to the last frame, and then returns to the first frame, repeating this cycle indefinitely.

@Immutable
class MovingCutout(
    val duration: Int,
    val bitmaps: List<ImageBitmap>,
) : Cutout

@Composable
fun rememberMovingCutoutPainter(cutout: MovingCutout): Painter {
    val state = rememberUpdatedState(newValue = cutout)
    val infiniteTransition = rememberInfiniteTransition(label = "MovingCutoutTransition")
    val currentBitmap by infiniteTransition.animateValue(
        initialValue = cutout.bitmaps.first(),
        targetValue = cutout.bitmaps.last(),
        typeConverter = state.VectorConverter,
        animationSpec = infiniteRepeatable(
            animation = tween(cutout.duration, easing = LinearEasing),
            repeatMode = RepeatMode.Reverse
        ),
        label = "MovingCutoutFrame"
    )
    return remember(cutout) {
        // A custom BitmapPainter implementation to allow delegation when getting
        //  1. Intrinsic size
        //  2. Current Bitmap
        CallbackBitmapPainter( 
            getIntrinsicSize = {
                with(cutout.bitmaps[0]) { Size(width.toFloat(), height.toFloat()) }
            },
            getImageBitmap = { currentBitmap }
        )
    }
}

private val State<MovingCutout>.VectorConverter: TwoWayConverter<ImageBitmap, AnimationVector1D>
    get() = TwoWayConverter(
        convertToVector = { AnimationVector1D(value.bitmaps.indexOf(it).toFloat()) },
        convertFromVector = { value.bitmaps[it.value.roundToInt()] }
    )
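
Using the painter is then as simple as passing it to a regular Image; a hypothetical usage snippet:

import androidx.compose.foundation.Image
import androidx.compose.foundation.layout.size
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun FriendCutout(cutout: MovingCutout) {
    // The boomerang animation is entirely driven by the painter; the Image just draws it.
    Image(
        painter = rememberMovingCutoutPainter(cutout),
        contentDescription = null,
        modifier = Modifier.size(64.dp),
    )
}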

Beyond Default: Crafting Powerful Map Interactions

As a map-based social app, Bump relies heavily on the Google Maps Android SDK. While the framework provides default interactions, we wanted to push the boundaries of what’s possible. Specifically, users want to zoom in and out quickly. Although Google Maps offers pinch-to-zoom and double-tap gestures, these have limitations. Pinch-to-zoom requires two fingers, and double-tap doesn’t cover the full zoom range.

For a better user experience, we’ve added our own gestures. One particularly useful feature is edge zoom, which allows rapid zooming in and out using a single finger. Simply swipe up or down from the left or right edge of the screen. Swiping down to the bottom zooms out completely, while swiping up to the top zooms in fully.

Like Google Maps gestures, there is no visual affordance hinting at this feature, which is acceptable for a power gesture. We do, however, provide visual and haptic feedback while it is performed to help users remember it. Currently, this is achieved with a glue-like effect that follows the finger, as shown below:

Implementing this feature involves two tasks: detecting edge zoom gestures and rendering the visual effect. Thanks to Jetpack Compose’s versatility, this can be done in just a few lines of code. We use the draggable2D Modifier to detect drags, which triggers an onDragUpdate callback to update the Google Maps camera and triggers a recomposition by updating a point variable.

@Composable
fun EdgeZoomGestureDetector(
    side: EdgeZoomSide,
    onDragStarted: () -> Unit,
    onDragUpdate: (Float) -> Unit,
    onDragStopped: () -> Unit,
    modifier: Modifier = Modifier,
    curveSize: Dp = 160.dp,
) {
    var heightPx by remember { mutableIntStateOf(Int.MAX_VALUE) }
    var point by remember { mutableStateOf(Offset.Zero) }
    val draggableState = rememberDraggable2DState { delta ->
        point = when (side) {
            EdgeZoomSide.Start -> point + delta
            EdgeZoomSide.End -> point + Offset(-delta.x, delta.y)
        }
        onDragUpdate(delta.y / heightPx)
    }
    val curveSizePx = with(LocalDensity.current) { curveSize.toPx() }
    
    Box(
        modifier = modifier
            .fillMaxHeight()
            .onPlaced {
                heightPx = it.size.height
            }
            .draggable2D(
                state = draggableState,
                onDragStarted = {
                    point = it
                    onDragStarted()
                },
                onDragStopped = {
                    point = point.copy(x = 0f)
                    onDragStopped()
                },
            )
            .drawWithCache {
                val path = Path()
                onDrawBehind {
                    path.apply {
                        reset()
                        // The tip of the bulge follows the finger, clamped to half the curve size.
                        val x = point.x.coerceAtMost(curveSizePx / 2f)
                        val y = point.y
                        val top = y - (curveSizePx - x)
                        val bottom = y + (curveSizePx - x)
                        // Two cubic Bézier curves form a Gaussian-like bump along the edge.
                        moveTo(0f, top)
                        cubicTo(
                            0f, top + (y - top) / 2f,
                            x, top + (y - top) / 2f,
                            x, y
                        )
                        cubicTo(
                            x, y + (bottom - y) / 2f,
                            0f, y + (bottom - y) / 2f,
                            0f, bottom,
                        )
                    }

                    // Mirror the path horizontally when the gesture starts from the right edge.
                    scale(side.toXScale(), 1f) {
                        drawPath(path, Palette.black)
                    }
                }
            }
    )
}

enum class EdgeZoomSide(val alignment: Alignment) {
    Start(Alignment.CenterStart),
    End(Alignment.CenterEnd),
}

private fun EdgeZoomSide.toXScale(): Float = when (this) {
    EdgeZoomSide.Start -> 1f
    EdgeZoomSide.End -> -1f
}

The drawing part is handled inside the drawWithCache Modifier via onDrawBehind, which creates a Path consisting of two simple cubic curves, emulating a Gaussian curve. Before rendering, the path is flipped on the X axis based on which side of the screen the gesture started from.

Schema of the rendered Bézier path

This effect looks nice, but it also feels static, instantly following the finger without any animation. To improve this, we added a spring-based animation. By extracting the computation of x (representing the tip of the Gaussian curve) out of the draw phase into an animatable state, we achieve a smoother visual effect:

val x by animateFloatAsState(
    targetValue = point.x.coerceAtMost(curveSizePx / 2f),
    label = "animated-curve-width",
)

This creates a visually appealing effect that feels natural. However, we wanted to engage other senses too, so we introduced haptic feedback to mimic the feel of a toothed wheel on an old safe. Using Kotlin Flow, LaunchedEffect, and snapshotFlow, this was implemented in just a few lines of code:

val haptic = LocalHapticFeedback.current
LaunchedEffect(heightPx, slotCount) {
    // Divide the draggable edge into slotCount virtual "teeth".
    val slotHeight = heightPx / slotCount
    snapshotFlow { (point.y / slotHeight).toInt() }
        .drop(1) // Drop the initial "tick"
        .collect {
            haptic.performHapticFeedback(HapticFeedbackType.SegmentTick)
        }
}

Conclusion

Bump is filled with many other innovative features. We invite you to explore the product further to discover more of these gems. Overall, the entire Android ecosystem — including the platform, developer tools, Jetpack Compose, and Google Play Services — provided most of the necessary building blocks. It offered the flexibility needed to design and implement these unique interactions. Thanks to Android, creating a standout product is just a matter of passion, time, and more than a few lines of code!