
CameraX: Getting Started

Learn how to implement camera features on Android using CameraX library



  • Kotlin 1.3, Android 4.1, Android Studio 3.5

The camera is central to the mobile device experience, with many popular apps using it in some way. As a mobile developer, you’ll need to know how to integrate the camera into your apps. However, you’ll find that coding for the camera is hard work. You need to account for variations due to different versions of Android running on different devices, and edge cases tied to those differences.

You’ll have hours of coding ahead… unless you use CameraX!

CameraX is a new Jetpack library that simplifies the process of integrating the device camera into your Android app. Just picture all the time you’ll save!

In this tutorial, you’ll make an app that takes and stores photos using CameraX. The app will show a camera preview in the bottom half of the screen and the most recent photo in the top half. You’ll also add a toolbar to save photos or add special effects.

By creating this app, you’ll learn how to:

  • Present a camera preview.
  • Access the camera to take photos.
  • Store the photos in memory or in a file.
  • Provide options for different special effects.
Note: This tutorial assumes you have previous experience developing for Android in Kotlin. If you’re unfamiliar with Kotlin, take a look at Kotlin for Android: An Introduction. If you’re also new to Android development, check out our Getting Started with Android tutorials.

You’ll start building your app in no time; first, you need to know more about CameraX.

What Is CameraX?

CameraX is a part of the growing Jetpack support library. It has backward compatibility down to Android API 21. It replaces Camera2 while supporting all the same devices — without the need for device-specific code!

Caution: CameraX is still in alpha, and is not considered ready for use in production apps.

The library provides abstractions for several helpful use cases, each designed to complete a specific task. The three primary ones are:

  • Preview
  • Image analysis
  • Image capture

Also of note, CameraX brings life to vendor extensions, which are features that apply special effects on some devices. CameraX lets you add extensions to your app with just a few lines of code. Currently, the supported effects are:

  • Portrait
  • Bokeh
  • Night Mode
  • Beauty
  • HDR

Enough talk, you’re ready to build!

Getting Started

Click on the Download Materials button at the top or bottom of the page to access the begin and end projects for this tutorial.

Next, open Android Studio. From the welcome screen, select Open an existing Android Studio project. If you already have Android Studio open, click File ▸ New ▸ Import Project. Either way, select the top-level project folder for the begin project.

Take a moment to familiarize yourself with the code in the begin project.

Pre-Built Kotlin Classes and Dependencies

Open the main package. You’ll see six Kotlin classes:

  • ExtensionFeature.kt: An enum class that you’ll use to apply vendor extensions.
  • FileUtils.kt and FileUtilsImpl.kt: The interface and implementation for creating files and folders. You’ll use these to store images to a file.
  • ImagePopupView.kt: A class for displaying images as a pop-up. You’ll use this to view an image in full-screen mode.
  • PhotoActivity.kt: The main class for this tutorial, where you’ll write all camera-related code.
  • SplashActivity.kt: The class for displaying the splash screen of the app.
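For reference, the file-handling interface might be as simple as the sketch below. This is only a hypothetical shape; the actual definitions live in FileUtils.kt and FileUtilsImpl.kt, and only createFile, which you’ll call later in this tutorial, is assumed here:

```kotlin
import java.io.File

// Hypothetical shape of the starter project's FileUtils interface;
// check FileUtils.kt and FileUtilsImpl.kt for the real definitions.
interface FileUtils {
    // Creates the destination folder if needed and returns a fresh image file in it.
    fun createFile(): File
}
```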

Also, notice that the app uses two CameraX dependencies, which you can see in the build.gradle file of the app module:
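The declarations look something like this; the exact alpha version numbers in your copy of the begin project may differ:

```groovy
dependencies {
  // CameraX core: preview, image capture and image analysis use cases
  implementation "androidx.camera:camera-core:1.0.0-alpha06"
  // CameraX vendor extensions: Bokeh, HDR, Night Mode and more
  implementation "androidx.camera:camera-extensions:1.0.0-alpha03"
}
```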


The core dependency consists of helper classes for:

  • Binding and unbinding the camera to the lifecycle.
  • Creating use cases like image preview or image capture.
  • Changing lens facing.

The core dependency gives you everything you need to support basic camera features.

The extensions dependency provides a way to apply vendor extensions like Night mode.

Note: Access to vendor extensions is currently limited to extensions dependency version 1.0.0-alpha03 or lower.

Now that you have a good overview of the key files for this project, it’s time to build and run the app. Your device will ask for permission to access the camera. After you accept the requested permissions, you’ll see a screen like this:

Screenshot of app's opening view

Familiarizing Yourself With the App Screen

The app’s main screen has a top and bottom half and a toolbar at the top. You’ll use the lower half to display the preview and the top half to show the most recent photo.

Notice that you already have a Save Image switch and a drop-down menu on the toolbar. Neither of these does anything now, but you’ll fix that soon!

The Save Image switch and drop-down menu

Creating a Camera Preview

To create a camera preview, you need to have a view that can display it. To do this, you’ll use TextureView.

Open activity_photo.xml and find a view at the bottom of the file with the id previewView. Change the view type to TextureView and delete the line that sets the background to black.

After those changes, your code should look like this:

  <TextureView
    android:id="@+id/previewView"
    android:layout_width="0dp"
    android:layout_height="0dp"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toBottomOf="@+id/takenImage" />

TextureView allows you to stream media content, like videos, which is exactly what you need to display a camera preview.

Next, you need a way to start the camera programmatically.

To implement this, open PhotoActivity.kt and add this new method to the class:

  private fun startCamera() {
    // 1
    CameraX.unbindAll()

    // 2
    val preview = createPreviewUseCase()

    preview.setOnPreviewOutputUpdateListener {
      // 3
      val parent = previewView.parent as ViewGroup
      parent.removeView(previewView)
      parent.addView(previewView, 0)

      previewView.surfaceTexture = it.surfaceTexture

      // 4
      updateTransform()
    }

    // 5
    imageCapture = createCaptureUseCase()

    // 6
    CameraX.bindToLifecycle(this, preview, imageCapture)
  }

Android Studio will show compile errors for the above code; you’ll fix them in a moment. First, take a look at what you’re doing in startCamera:

  1. Unbind all use cases from the app lifecycle. This closes all currently open cameras and allows you to add your own use cases when you start the camera.
  2. Create a preview use case object. You haven’t added this method yet, you’ll do so in the next section.
  3. Set a listener that receives the data from the camera. When new data arrives, you release the previous TextureView and add a new one.
  4. Call updateTransform which sets the latest transformation to the TextureView. You haven’t added this method yet, you’ll do so in the next section.
  5. Create an image capture use case and bind the camera to the lifecycle. This binding controls the opening, starting, stopping and closing of the camera. You haven’t added this method yet, you’ll do so in the next section.
  6. Bind both use cases to the camera lifecycle. This creates a camera session for them.

Next, you’ll add the missing implementations for createPreviewUseCase, updateTransform and createCaptureUseCase, starting with createPreviewUseCase.

Creating Use Cases: createPreviewUseCase

Add the following code below startCamera:

  private fun createPreviewUseCase(): Preview {
    // 1
    val previewConfig = PreviewConfig.Builder().apply {
      // 2
      setLensFacing(lensFacing)
      // 3
      setTargetRotation(previewView.display.rotation)
    }.build()

    return Preview(previewConfig)
  }

Here’s what you’re doing in the method above:

  1. Create a configuration for the preview using the PreviewConfig.Builder helper class provided by CameraX.
  2. Set the direction the camera faces using the lensFacing property, which defaults to the rear camera.
  3. Set the target rotation for the preview using the orientation from TextureView.

Creating Use Cases: updateTransform

Go back to createPreviewUseCase and add the following code below it:

  private fun updateTransform() {
    val matrix = Matrix()

    // 1
    val centerX = previewView.width / 2f
    val centerY = previewView.height / 2f

    // 2
    val rotationDegrees = when (previewView.display.rotation) {
      Surface.ROTATION_0 -> 0
      Surface.ROTATION_90 -> 90
      Surface.ROTATION_180 -> 180
      Surface.ROTATION_270 -> 270
      else -> return
    }
    matrix.postRotate(-rotationDegrees.toFloat(), centerX, centerY)

    // 3
    previewView.setTransform(matrix)
  }

In updateTransform, you’re compensating for changes in device orientation. You do this by:

  1. Calculating the center of TextureView.
  2. Correcting the preview output to account for the rotation of the device.
  3. Applying the transformations to TextureView.
Note: In portrait mode, the rotation values are 0 for the standard orientation and 180 when the device is upside-down. In landscape mode, the values are 90 when rotated to the left and 270 when rotated to the right.
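That mapping can be illustrated in isolation. The sketch below uses the raw integer values behind the Surface.ROTATION_* constants (0 through 3), so it runs without any Android dependencies:

```kotlin
// Maps a Surface.ROTATION_* value to degrees, as updateTransform does.
// The raw constant values are ROTATION_0 = 0, ROTATION_90 = 1,
// ROTATION_180 = 2, ROTATION_270 = 3.
fun rotationToDegrees(rotation: Int): Int? = when (rotation) {
  0 -> 0     // ROTATION_0: portrait, upright
  1 -> 90    // ROTATION_90: landscape, rotated left
  2 -> 180   // ROTATION_180: portrait, upside-down
  3 -> 270   // ROTATION_270: landscape, rotated right
  else -> null
}

fun main() {
  println(rotationToDegrees(1)) // prints 90
}
```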

Creating Use Cases: createCaptureUseCase

With these two methods in place, you’re ready to add the third missing method: createCaptureUseCase. Add this below updateTransform:

  private fun createCaptureUseCase(): ImageCapture {
    // 1
    val imageCaptureConfig = ImageCaptureConfig.Builder()
        .apply {
          setLensFacing(lensFacing)
          // 2
          setCaptureMode(ImageCapture.CaptureMode.MAX_QUALITY)
          setTargetRotation(previewView.display.rotation)
        }

    return ImageCapture(imageCaptureConfig.build())
  }

Notice how this method is almost identical to createPreviewUseCase. The only differences are:

  1. You use ImageCaptureConfig.Builder instead of PreviewConfig.Builder.
  2. You set the capture mode to have the max quality.

At this point, you’ve completed the code to start the camera. All you need now is permission to use the camera.

Requesting Permission to Use the Camera

Find requestPermissions in PhotoActivity.kt and replace it with the following code:

  private fun requestPermissions() {
    // 1
    if (allPermissionsGranted()) {
      // 2
      previewView.post { startCamera() }
    } else {
      // 3
      ActivityCompat.requestPermissions(this, REQUIRED_PERMISSIONS,
          REQUEST_CODE_PERMISSIONS)
    }
  }

Here’s what you’re doing in this method:

  1. Check whether the user has granted all the required permissions.
  2. Post the startCamera call to previewView to make sure TextureView is ready to use before the camera starts.
  3. Request the user’s permission to access the camera.

But what if this is the first time they’re opening your app? To cover that possibility, you’ll replace another method.

Find onRequestPermissionsResult in PhotoActivity.kt and replace it with this:

  override fun onRequestPermissionsResult(
    requestCode: Int, permissions: Array<String>, grantResults: IntArray
  ) {
    if (requestCode == REQUEST_CODE_PERMISSIONS) {
      if (allPermissionsGranted()) {
        previewView.post { startCamera() }
      } else {
        // Close the app when the user denies a required permission
        finish()
      }
    }
  }
The above code is similar to the code you added in requestPermissions, except in this case, it happens after the permission has been granted.

Your app is coming along! It’s time to build and run to see your progress.

You’ll see that the app requests two different kinds of permissions before continuing. The first request grants access to the camera:

app screenshot with permission request number one

The second request grants access to the device’s storage, which you’ll need when the user wants to save photos to an external folder:

app screenshot with permission request number two

If you deny either of these requests, your app will immediately close.
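Both permissions are already declared in the begin project’s AndroidManifest.xml; the entries look like this:

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```

Declaring them in the manifest makes the permissions available to request; on Android 6.0 and above, the user still has to grant them at runtime, which is what requestPermissions handles.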

After you grant these permissions, you’ll see a camera preview on the bottom half of the screen.

app screenshot with photo of painting in lower half of screen

The preview works fine, but at the moment, you can only use the back camera. There’s a button for toggling the camera in the toolbar, but right now, it doesn’t do anything. You’ll fix that next.

Toggling the Camera Lens

To toggle the camera lens, you’ll need to change the value of lensFacing. You saw this property in the preview and image capture configuration above.

Add this method below createCaptureUseCase:

  private fun toggleFrontBackCamera() {
    lensFacing = if (lensFacing == CameraX.LensFacing.BACK) {
      CameraX.LensFacing.FRONT
    } else {
      CameraX.LensFacing.BACK
    }
    previewView.post { startCamera() }
  }

In this method, you check the current value of lensFacing and swap it. Then you start the camera again with the new lensFacing value.

Now, you’re ready to connect this method with the camera toggle button in the toolbar.

Go back to onCreate and replace setClickListeners with this:

  private fun setClickListeners() {
    toggleCameraLens.setOnClickListener { toggleFrontBackCamera() }
  }

The toggle is all set. Build and run. When you tap on the icon this time, the preview will switch to the front camera. Cool, huh?

Capturing Images

When you added startCamera, you created a use case for image capturing, which provides a way to store the photos. You want to be able to store them in memory for short-term use or in a file for long-term access. You’ll implement that functionality next.

Storing Images to Memory

To take a photo, you’ll make a click listener and a method to trigger it.

Start by adding this below toggleFrontBackCamera in PhotoActivity.kt:

  private fun takePicture() {
    // Prevent taking multiple photos at the same time
    disableActions()
    if (saveImageSwitch.isChecked) {
      savePictureToFile()
    } else {
      savePictureToMemory()
    }
  }

In this method, you first disable all user actions to avoid taking multiple photos at the same time.

Next, you check the value of isChecked for saveImageSwitch. If true, you save the picture to file. If false, you save the picture to memory.

Your next step is to create the methods you referenced in takePicture: savePictureToFile and savePictureToMemory.

Copy this below takePicture:

  // 1
  private fun savePictureToFile() {}

  private fun savePictureToMemory() {
    // 2
    imageCapture?.takePicture(executor,
        object : ImageCapture.OnImageCapturedListener() {
          override fun onError(
              error: ImageCapture.ImageCaptureError,
              message: String, exc: Throwable?
          ) {
            // 3
            Toast.makeText(this@PhotoActivity, message, Toast.LENGTH_SHORT)
                .show()
          }

          override fun onCaptureSuccess(imageProxy: ImageProxy?,
                                        rotationDegrees: Int) {
            imageProxy?.image?.let {
              // 4: Decode the JPEG bytes from the first image plane,
              // then correct the rotation
              val buffer = it.planes[0].buffer
              val bytes = ByteArray(buffer.remaining())
              buffer.get(bytes)
              val bitmap = rotateImage(
                  BitmapFactory.decodeByteArray(bytes, 0, bytes.size),
                  rotationDegrees.toFloat())
              // 5
              runOnUiThread {
                takenImage.setImageBitmap(bitmap)
                enableActions()
              }
            }
            super.onCaptureSuccess(imageProxy, rotationDegrees)
          }
        })
  }

Here’s what the above code is doing:

  1. savePictureToFile is currently empty. You’ll leave it like that for now.
  2. When you created an image capture use case earlier, you stored it in the imageCapture property. You’ll use that property to call takePicture, which takes two parameters: an executor on which to run the callbacks and a listener object for the success and error actions.
  3. If there’s an error, you notify the user with a toast message.
  4. If you captured the image successfully, you convert it to a bitmap. Then, because some devices rotate the image on their own during capture, you call rotateImage to return it to its original orientation.
  5. Now that the bitmap is ready, you set the image to ImageView on the top half of the screen and re-enable user actions. Note that you switch to a UI thread before you change any UI components by wrapping the code inside the runOnUiThread block.
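If rotateImage isn’t already defined in your project, a minimal version might look like the sketch below, using the standard Bitmap and Matrix approach. This is an assumption about the helper’s shape, not the tutorial’s exact implementation:

```kotlin
import android.graphics.Bitmap
import android.graphics.Matrix

// Possible implementation of the rotateImage helper: rotates the bitmap
// by the given angle and returns the rotated copy.
private fun rotateImage(bitmap: Bitmap, degrees: Float): Bitmap {
  val matrix = Matrix().apply { postRotate(degrees) }
  return Bitmap.createBitmap(
      bitmap, 0, 0, bitmap.width, bitmap.height, matrix, true)
}
```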

So now that the user can take a picture, your next step is to let them do so from the preview screen.

Adding Click Listeners

Next, you’ll add a click listener so that you can click on the preview and take a picture. Go back to onCreate and replace setClickListeners with this:

  private fun setClickListeners() {
    toggleCameraLens.setOnClickListener { toggleFrontBackCamera() }
    previewView.setOnClickListener { takePicture() }
  }

Build and run. Click on the preview now to take a picture. Sweet!

app screenshot with same picture of a painting in both upper and lower halves

Once you take a picture, you can compare it with the live preview, like this picture of a picture. :]

Storing Images to File

Congratulations, your app can take pictures! But what if you want to save them?

You can do that with this updated version of savePictureToFile. Replace the currently empty method with this:

  private fun savePictureToFile() {
    // 1
    val file = fileUtils.createFile()

    // 2
    imageCapture?.takePicture(file, getMetadata(), executor,
        object : ImageCapture.OnImageSavedListener {
          override fun onImageSaved(file: File) {
            // 3
            runOnUiThread {
              takenImage.setImageURI(Uri.fromFile(file))
              enableActions()
            }
          }

          override fun onError(imageCaptureError: ImageCapture.ImageCaptureError,
                               message: String,
                               cause: Throwable?) {
            // 4
            Toast.makeText(this@PhotoActivity, message, Toast.LENGTH_SHORT)
                .show()
          }
        })
  }
Here’s what you are doing in savePictureToFile:

  1. Create a destination directory and file using the helper methods in the Pictures/Photo_session directory on your device.
  2. Call takePicture with four parameters: the newly created file to save the image in, the metadata for the image, an executor to define which thread the callbacks run on, and a listener for the saved and error results. The metadata stores the geographical location of the photo and indicates whether the system needs to render the image mirrored horizontally or vertically. For instance, isReversedHorizontal will be true for a selfie.
  3. After taking the photo, if there are no errors, you set the image to ImageView and enable user actions. Notice that you set the image using URI instead of the bitmap. You do this because you have a file object.
  4. If there are any errors, you show them to the user as a Toast.
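A getMetadata helper along these lines would cover the selfie case described above. This is a sketch of one plausible implementation, assuming the lensFacing property from earlier in the tutorial:

```kotlin
import androidx.camera.core.CameraX
import androidx.camera.core.ImageCapture

// Marks front-camera shots as horizontally mirrored so the saved file
// matches what the preview showed.
private fun getMetadata() = ImageCapture.Metadata().apply {
  isReversedHorizontal = lensFacing == CameraX.LensFacing.FRONT
}
```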

Build and run to test this new feature. Turn on the Save Image switch so your photos are saved to a file.

Picture of painting in full-screen

Take a photo. Now look in the Pictures/Photo_session directory on your device. Notice something? The image looks very different from the one in your app. That’s because you’re seeing it full-screen. You don’t want people leaving your app to do this, so you’ll add this feature now.

Replace setClickListeners with the following:

  private fun setClickListeners() {
    toggleCameraLens.setOnClickListener { toggleFrontBackCamera() }
    previewView.setOnClickListener { takePicture() }
    takenImage.setOnLongClickListener {
      showImagePopup()
      return@setOnLongClickListener true
    }
  }

In this new version of setClickListeners, you add another listener that runs showImagePopup when the user long-clicks on the most recent photo. showImagePopup is already in the starter project.

Build and run. Take a picture and then long-click on it. It’s full-screen now in its normal aspect ratio and size. Click anywhere again and you’ll return to split-view.

Take a moment to appreciate what you’ve created here!

Picture of painting in full-screen

Adding Vendor Extensions

There’s one more feature you want to add to your camera app: What if your users want to add special effects to their photos like Night Mode or Bokeh?

No problem. With CameraX, you can add these features with only a few lines of code.

Note: Although many devices have these effects available in the default camera app, they do not necessarily support vendor extensions. Currently, only the following models support this feature:
  • Huawei (HDR, Portrait): Mate 20 series, P30 series, Honor Magic 2, Honor View 20.
  • Samsung (HDR, Night, Beauty, Auto): Galaxy Note 10 series.

Add this code below savePictureToMemory:

  private fun enableExtensionFeature(
    imageCaptureExtender: ImageCaptureExtender
  ) {
    if (imageCaptureExtender.isExtensionAvailable) {
      imageCaptureExtender.enableExtension()
    } else {
      Toast.makeText(this, getString(R.string.extension_unavailable),
          Toast.LENGTH_SHORT).show()
      // Reset the drop-down menu to its default option
      extensionFeatures.setSelection(0)
    }
  }

In this method, you enable an extension if it’s available. Otherwise, you notify the user that it isn’t available and reset the drop-down menu to its default option.

enableExtensionFeature needs to know the image capture extender you want to use. You’ll provide that with the method below:

  private fun applyExtensions(
      builder: ImageCaptureConfig.Builder
  ) {
    when (ExtensionFeature.fromPosition(extensionFeatures.selectedItemPosition)) {
      ExtensionFeature.BOKEH ->
        enableExtensionFeature(BokehImageCaptureExtender.create(builder))
      ExtensionFeature.HDR ->
        enableExtensionFeature(HdrImageCaptureExtender.create(builder))
      ExtensionFeature.NIGHT_MODE ->
        enableExtensionFeature(NightImageCaptureExtender.create(builder))
      else -> {
      }
    }
  }

And add these to your import statements above:

  import androidx.camera.extensions.BokehImageCaptureExtender
  import androidx.camera.extensions.HdrImageCaptureExtender
  import androidx.camera.extensions.NightImageCaptureExtender
In the method above, you check the currently-selected item from the drop-down menu and then create an image capture extender based on that option.

Listening for the Image Extension

Now that you can create these image capture extenders, you’ll add a listener for the drop-down menu.

Go back to onCreate and replace setClickListeners with this:

  private fun setClickListeners() {
    toggleCameraLens.setOnClickListener { toggleFrontBackCamera() }
    previewView.setOnClickListener { takePicture() }
    takenImage.setOnLongClickListener {
      showImagePopup()
      return@setOnLongClickListener true
    }

    extensionFeatures.onItemSelectedListener =
        object : AdapterView.OnItemSelectedListener {
          override fun onItemSelected(
              parentView: AdapterView<*>,
              selectedItemView: View,
              position: Int,
              id: Long
          ) {
            if (ExtensionFeature.fromPosition(position) != ExtensionFeature.NONE) {
              previewView.post { startCamera() }
            }
          }

          override fun onNothingSelected(parentView: AdapterView<*>) {}
        }
  }

Now, when the user selects an available extension, you’ll start the camera — which will automatically apply the extensions.

The only thing left is to call applyExtensions from within createCaptureUseCase. Add the call right before the return:

  private fun createCaptureUseCase(): ImageCapture {
    val imageCaptureConfig = ImageCaptureConfig.Builder()
        .apply {
          setLensFacing(lensFacing)
          setCaptureMode(ImageCapture.CaptureMode.MAX_QUALITY)
          setTargetRotation(previewView.display.rotation)
        }

    // Apply the selected vendor extension, if any, before building
    applyExtensions(imageCaptureConfig)

    return ImageCapture(imageCaptureConfig.build())
  }

Build and run. The extensions are now available from the drop-down menu on any of the supported devices. Note that the app only applies the extensions to the most recent photo.

Where to Go From Here?

Download the final project using the Download Materials button at the top or the bottom of the tutorial.

In this tutorial, you learned how to use CameraX to preview, take and store photos. You also saw how this library simplifies the process of adding options for special effects.

If you want to learn more, check out the official CameraX documentation.

I hope you enjoyed this tutorial! If you have any questions or comments, please join the forum discussion below.
