How to Build an Augmented Images Application with ARCore

Augmented reality allows you to overlay digital content onto the real world, creating immersive and interactive experiences. One powerful AR capability is image tracking – the ability to recognize specific 2D images in the environment and use them as markers to position virtual content.

ARCore, Google's AR SDK for Android, provides an Augmented Images API that enables you to build apps that can detect and track 2D images in the real world. Once an image is detected, you can seamlessly place 3D models, animations, or any other digital assets on top of it.

In this guide, we'll walk through how to create an Android app that uses ARCore's Augmented Images API to display a 3D object on a recognized image. Whether you want to build AR product previews, interactive guides, or educational content, augmented image tracking opens up a range of engaging possibilities. Let's get started!

Understanding Augmented Images

Augmented Images in ARCore allow your app to detect and track 2D images, such as posters, product packaging, or magazines, in the user's physical environment. You provide a set of reference images, and ARCore will continuously analyze the camera feed to identify matches.

Once a reference image is detected, ARCore will provide detailed information about its physical location, size, and orientation within the AR scene. You can then use that information to anchor virtual content precisely on top of the image.

Here are a few key things to understand about Augmented Images in ARCore:

  • ARCore can track up to 20 unique images simultaneously in real time
  • Reference images must be flat, high contrast, and contain distinct visual features
  • The physical image in the environment should be at least 15cm x 15cm in size
  • Since ARCore 1.9, tracked images may be fixed in place or moving, though tracking is most reliable for stationary images
  • You can store up to 1,000 reference images in an AugmentedImageDatabase

Augmented image tracking is ideal for AR use cases where you want to trigger specific digital content based on designated markers, rather than generic surfaces. It provides pixel-precise placement of AR content and opens up creative possibilities for interactive print media, product visualization, and location-based experiences.

Project Setup

To build an ARCore app with Augmented Images, you'll need the following:

  • Android Studio 3.1 or higher
  • A device running Android 7.0 (API level 24) or later
  • ARCore SDK 1.9.0 or later

Make sure your Android device is compatible with ARCore and has the latest version of the ARCore app installed from the Google Play Store.
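The ARCore SDK also exposes a runtime check for this, so your app can verify support before starting an AR session. Here is a minimal sketch (assuming it runs inside an Activity; the class name is hypothetical):

```java
import android.app.Activity;
import com.google.ar.core.ArCoreApk;

public class ArAvailabilityChecker {
  // Returns true only if the device supports ARCore and the ARCore app
  // (Google Play Services for AR) is installed and up to date.
  public static boolean isArReady(Activity activity) {
    ArCoreApk.Availability availability =
        ArCoreApk.getInstance().checkAvailability(activity);
    if (availability.isTransient()) {
      // The check has not finished yet; callers may retry shortly.
      return false;
    }
    return availability == ArCoreApk.Availability.SUPPORTED_INSTALLED;
  }
}
```

If the device is supported but ARCore is missing or outdated, `ArCoreApk.getInstance().requestInstall(...)` can prompt the user to install it.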

First, create a new Android Studio project and add the following dependencies to your app-level build.gradle file:

dependencies {
  implementation 'com.google.ar:core:1.24.0'
  implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.17.1'
}

ARCore provides the augmented image detection and tracking capabilities, while Sceneform is a 3D rendering library that simplifies the process of working with 3D assets in AR.

Setting Up the Augmented Images Database

Next, we need to create an AugmentedImageDatabase and populate it with our reference images. The database lets ARCore quickly determine whether a given camera frame contains any recognized images.

Here's how to set up the database:

  1. Add your reference images to the "assets" folder in your project. Use JPEG or PNG files (any format that Android's BitmapFactory can decode).

  2. Create a new Java class named "AugmentedImageFragment" that extends com.google.ar.sceneform.ux.ArFragment:

public class AugmentedImageFragment extends ArFragment {
  @Override
  protected Config getSessionConfiguration(Session session) {
    Config config = super.getSessionConfiguration(session);
    config.setFocusMode(Config.FocusMode.AUTO);
    config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE);

    AugmentedImageDatabase database = new AugmentedImageDatabase(session);
    // Only add the image if it loaded successfully, to avoid a NullPointerException.
    Bitmap bitmap = loadAugmentedImage("tiger.jpg");
    if (bitmap != null) {
      database.addImage("tiger", bitmap);
    }
    config.setAugmentedImageDatabase(database);
    return config;
  }

  private Bitmap loadAugmentedImage(String filename) {
    try (InputStream is = getContext().getAssets().open(filename)) {
      return BitmapFactory.decodeStream(is);
    } catch (IOException e) {
      Log.e("AugmentedImageFragment", "IO exception loading augmented image bitmap.", e);
    }
    return null;
  }
}

In the getSessionConfiguration method, we configure ARCore to use the latest camera image for detection and enable auto-focus for optimal image quality.

We then create a new AugmentedImageDatabase instance and add our reference image(s) to it using the addImage method, providing a unique string name for each. The loadAugmentedImage helper method loads the image file from assets as a Bitmap.

Finally, we set the configured AugmentedImageDatabase on the session config using setAugmentedImageDatabase.
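Two optional refinements are worth knowing about. The addImage overload that takes a physical width in meters, e.g. `database.addImage("tiger", bitmap, 0.20f)`, speeds up initial detection because ARCore does not have to estimate the printed image's size. And instead of adding bitmaps one at a time, ARCore can load a database file pre-built with the arcoreimg tool, which is faster for large image sets. A sketch of the latter (the asset name "images.imgdb" and the helper name are hypothetical):

```java
// Loads a database pre-built with the arcoreimg tool from the app's assets.
private AugmentedImageDatabase loadPrebuiltDatabase(Session session) {
  try (InputStream is = getContext().getAssets().open("images.imgdb")) {
    return AugmentedImageDatabase.deserialize(session, is);
  } catch (IOException e) {
    Log.e("AugmentedImageFragment", "Could not load pre-built image database.", e);
    // Fall back to an empty database so the session can still be configured.
    return new AugmentedImageDatabase(session);
  }
}
```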

Detecting and Tracking Augmented Images

With our database set up, we can now use ARCore to detect and track the reference images in the camera view. We'll update the AugmentedImageFragment to listen for onUpdate events and take action when an image is found.

Add the following code to the fragment class:

// Ensures the model is only placed once per session.
private boolean shouldAddModel = true;

@Override
public void onUpdate(FrameTime frameTime) {
  Frame frame = getArSceneView().getArFrame();

  // If there is no frame or ARCore is not tracking yet, just return.
  if (frame == null || frame.getCamera().getTrackingState() != TrackingState.TRACKING) {
    return;
  }

  Collection<AugmentedImage> updatedAugmentedImages = frame.getUpdatedTrackables(AugmentedImage.class);
  for (AugmentedImage img : updatedAugmentedImages) {
    if (img.getTrackingState() == TrackingState.TRACKING) {
      if (img.getName().equals("tiger") && shouldAddModel) {
        placeObject(img.createAnchor(img.getCenterPose()), "models/tiger.glb");
        shouldAddModel = false;
      }
    }
  }
}

private void placeObject(Anchor anchor, String modelUri) {
  ModelRenderable.builder()
      .setSource(getContext(), Uri.parse(modelUri))
      .setIsFilamentGltf(true)
      .build()
      .thenAccept(renderable -> addNodeToScene(anchor, renderable))
      .exceptionally(throwable -> {
        Log.e("AugmentedImageFragment", "Unable to load renderable.", throwable);
        return null;
      });
}

private void addNodeToScene(Anchor anchor, Renderable renderable) {
  AnchorNode anchorNode = new AnchorNode(anchor);
  anchorNode.setParent(getArSceneView().getScene());

  TransformableNode node = new TransformableNode(getTransformationSystem());
  node.setParent(anchorNode);
  node.setRenderable(renderable);
  node.select();
}

In the onUpdate method, we first get the current Frame from ARCore and check if tracking is active. We then retrieve a collection of AugmentedImage objects that were updated in this frame using getUpdatedTrackables.

We iterate through the updated images and check if any of them match our reference image name and are currently being tracked. If so, we call the placeObject method to anchor a 3D model at the center of the detected image. The shouldAddModel flag ensures we only place the model once per image detection.

The placeObject method uses Sceneform's ModelRenderable to asynchronously load a glTF 3D model file. Once the model is loaded, it's passed to addNodeToScene which creates an AnchorNode attached to the image's position and a TransformableNode to render the model.
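Because a tracked image can leave the camera's view, it can also be worth checking how ARCore is currently tracking it. Since ARCore 1.9, AugmentedImage reports a tracking method for exactly this purpose. A sketch of logic that could live inside onUpdate (here `node` is a hypothetical reference to the node rendering the model):

```java
for (AugmentedImage img : frame.getUpdatedTrackables(AugmentedImage.class)) {
  // FULL_TRACKING means the image is visible and tracked directly;
  // LAST_KNOWN_POSE means ARCore is extrapolating from where it last saw it.
  boolean fullyTracked =
      img.getTrackingMethod() == AugmentedImage.TrackingMethod.FULL_TRACKING;
  node.setEnabled(fullyTracked); // show the model only while the image is visible
}
```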

Choosing Effective Reference Images

The quality and characteristics of your reference images play a key role in the robustness and reliability of image detection. Here are some tips for selecting good reference images:

  • Use high-resolution images with a minimum size of 300 x 300 pixels
  • Choose images with high contrast and distinct visual features
  • Avoid images with repetitive patterns, such as grids or text
  • Steer clear of images with sparse features or large uniform regions
  • Consider the expected viewing angle and lighting conditions of your target environment

You can use the arcoreimg tool provided in the ARCore SDK to analyze the suitability of your reference images. It generates a quality score between 0 and 100 for each image, with a minimum score of 75 recommended.

To use the arcoreimg tool:

  1. Download and extract the ARCore SDK for Android
  2. Navigate to the "tools" directory in the extracted SDK
  3. Run the following command, replacing <image_file> with the path to your image:
./arcoreimg eval-img --input_image_path=<image_file>

The tool will output an image quality score along with any specific issues found. Use this feedback to optimize your images for the best tracking performance. The same tool can also pre-build an image database file with its build-db command, which you can bundle in your app and load at runtime instead of adding bitmaps individually.

Conclusion and Next Steps

Congratulations! You now have a working Android app that can detect and track 2D images using ARCore and display 3D content on top of them. The Augmented Images API opens up a world of possibilities for creative and engaging AR experiences.

Some potential next steps and enhancements:

  • Expand your reference image database to track multiple images
  • Experiment with different types of virtual content, such as animations, particles, or video
  • Implement interactive behavior, such as gestures or proximity triggers
  • Combine augmented image tracking with other ARCore features like plane detection or cloud anchors

Remember, image tracking is just one tool in the AR toolbox. Think creatively about how you can use it in combination with other techniques to build compelling and immersive experiences.

ARCore and Augmented Images are constantly evolving, so be sure to stay up-to-date with the latest features and best practices. Google's Augmented Images developer guide is a great resource to bookmark.

The complete source code for this project is available on GitHub. Feel free to use it as a starting point for your own augmented image experiments!

If you have any questions or run into issues, the ARCore SDK issue tracker and AR developer community are helpful resources.

Happy AR building! The world is your augmented oyster.
