Building a Fashion Police App with React Native and AI

Introduction

Artificial Intelligence is rapidly transforming how we build software applications. With the power of AI, our apps can now understand complex information like images, speech, text, and video to derive actionable insights.

One application of AI that's both fun and practical is in the realm of fashion. Imagine being able to get an instant, data-driven evaluation of your outfit before leaving the house in the morning. Or, when shopping online or in-store, seeing automatically generated recommendations for clothing items that match your personal style.

In this post, we'll walk through how to build a mobile "fashion police" app to do just that. We'll use React Native to build the cross-platform mobile app and integrate an off-the-shelf computer vision AI model to analyze fashion choices in real time.

Whether you're an experienced full-stack developer looking to enhance your apps with AI or just getting started with mobile development, this guide provides a comprehensive tutorial of the end-to-end process. Let's dive in!

Why React Native?

For mobile app development, React Native has emerged as a popular choice for several reasons:

  1. Cross-Platform – React Native allows you to build apps for both iOS and Android from the same codebase. This can significantly reduce development time and cost compared to building separate native apps.

  2. Native Performance – Unlike hybrid app frameworks like Cordova or Ionic that render web views, React Native renders native platform UI elements. This leads to better performance that is nearly indistinguishable from fully native apps.

  3. Extensive Ecosystem – React Native has a large and active open-source community. There are many third-party libraries, UI kits, and tools to aid development. Finding support and inspiration is easy.

  4. Simplified Styling – React Native uses a subset of CSS for styling that makes it easy to customize your app's look and feel. Flexbox support also simplifies building responsive layouts.

  5. Fast Refresh – React Native supports instant reloading during development. As soon as you save changes to the code, the app UI refreshes, making for a fast development cycle.
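Points 4 and 5 are easiest to appreciate in code. Here is a minimal sketch of flexbox styling; the style names are hypothetical, and the objects are shown plain (in an app they would be passed to StyleSheet.create from react-native) so the example stands alone:

```javascript
// Minimal flexbox styling sketch (style names are illustrative).
// In a real app these objects are passed to StyleSheet.create from
// 'react-native'; plain objects are used here so the example stands alone.
const styles = {
  container: {
    flex: 1,                         // fill all available space
    flexDirection: 'column',         // stack children vertically (the default)
  },
  row: {
    flexDirection: 'row',            // lay children out horizontally
    justifyContent: 'space-between', // spread children along the main axis
    alignItems: 'center',            // center children on the cross axis
  },
  title: {
    fontSize: 20,
    fontWeight: 'bold',
  },
};
```

With Fast Refresh, tweaking any of these values and saving the file updates the running app immediately.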

According to a 2020 Stack Overflow survey, React Native ranked as the 7th most loved software development framework, with 57.9% of developers expressing interest in continuing to develop with it [1]. Companies like Instagram, Facebook, Uber, Wix, and Shopify have used React Native in production.

The Power of Off-the-Shelf AI

To add intelligence to our fashion app, we'll use pre-built AI models for computer vision tasks like detecting clothing items in an image and classifying their style attributes.

The advantages of using off-the-shelf AI include:

  1. Reduced Development Time – Training computer vision AI models from scratch requires significant time, technical expertise, and compute resources. With off-the-shelf models, we can simply send data to a cloud API and receive predictions in real-time.

  2. High Accuracy – Pre-trained models are often built by large tech companies or research institutions using massive, carefully curated datasets. This allows them to achieve high accuracy on standard computer vision benchmarks.

  3. Scalability – Cloud AI platforms handle the infrastructure and scaling necessary to deploy ML models to production. Our app can rely on their scalable and highly available APIs.

  4. Customizability – Many pre-trained AI models allow for custom training to further optimize performance on your specific dataset. We can improve fashion recommendations by tuning the model on user-supplied data.

Popular computer vision AI platforms include Google Cloud Vision, Amazon Rekognition, Clarifai, and Azure Cognitive Services.

For this app, we'll be using Microsoft's Custom Vision Service, which is part of Azure Cognitive Services. Custom Vision provides a user-friendly interface for training custom computer vision models without needing to be an ML expert.

Code Walkthrough

Now that we've discussed the motivation and high-level architecture for the app, let's walk through the key code snippets.

Camera Access

To access the device camera in a React Native app, we can use the expo-camera library. First, install the library:

expo install expo-camera

Then, request camera permissions and render the camera view:

import React, { useState, useEffect, useRef } from 'react';
import { Text, View, StyleSheet, TouchableOpacity } from 'react-native';
import { Camera } from 'expo-camera';

export default function CameraScreen() {
  const [hasPermission, setHasPermission] = useState(null);
  // Function components have no `this`, so hold the camera instance in a ref
  const cameraRef = useRef(null);

  useEffect(() => {
    (async () => {
      const { status } = await Camera.requestPermissionsAsync();
      setHasPermission(status === 'granted');
    })();
  }, []);

  if (hasPermission === null) {
    return <View />;
  }

  if (hasPermission === false) {
    return <Text>No access to camera</Text>;
  }

  return (
    <View style={styles.container}>
      <Camera ref={cameraRef} style={styles.camera} type={Camera.Constants.Type.back}>
        <View style={styles.buttonContainer}>
          <TouchableOpacity
            style={styles.button}
            onPress={async () => {
              const picture = await cameraRef.current.takePictureAsync({
                quality: 0.5,
                base64: true
              });

              // Send picture.base64 data to Custom Vision API

            }}
          >
            <Text style={styles.text}>Snap</Text>
          </TouchableOpacity>
        </View>
      </Camera>
    </View>
  );
}

const styles = StyleSheet.create({
  // Styles omitted for brevity
});

When the user presses the "Snap" button, we capture the current camera frame with takePictureAsync on the camera ref. We request the image data in base64 format so we can easily send it to the Custom Vision API in the next step.

API Integration

To get fashion recommendations, we'll send the captured image to the Custom Vision API. First, provision a Custom Vision resource in your Azure account and create a new project. Then, train the model on labeled data of your choice – Custom Vision doesn't ship with a ready-made fashion dataset, so you'll need to upload and tag your own clothing images (public datasets such as DeepFashion can be a useful starting point).

After training the model, publish it to get the API endpoint URL and access keys. We'll use these to make the API call in our React Native code.

First, install the axios library for making HTTP requests:

npm install axios

Then, add the following code to call the API with the image data:

import axios from 'axios';
import { Buffer } from 'buffer'; // React Native has no built-in Buffer; install the 'buffer' package

const CUSTOM_VISION_ENDPOINT = "YOUR_ENDPOINT_URL";
const CUSTOM_VISION_KEY = "YOUR_PREDICTION_KEY";
const CUSTOM_VISION_PROJECT_ID = "YOUR_PROJECT_ID";
const CUSTOM_VISION_PUBLISH_ITERATION_NAME = "YOUR_PUBLISHED_ITERATION_NAME";

// In snap button onPress handler:
const picture = await cameraRef.current.takePictureAsync({
  quality: 0.5,
  base64: true
});

// The /image endpoint expects raw binary, so decode the base64 string first
const imageBytes = Buffer.from(picture.base64, 'base64');

const response = await axios.post(
  `${CUSTOM_VISION_ENDPOINT}/customvision/v3.0/Prediction/${CUSTOM_VISION_PROJECT_ID}/classify/iterations/${CUSTOM_VISION_PUBLISH_ITERATION_NAME}/image`,
  imageBytes,
  {
    headers: {
      'Content-Type': 'application/octet-stream',
      'Prediction-Key': CUSTOM_VISION_KEY
    }
  }
);

console.log(response.data.predictions);

The /classify API returns an array of predictions, each containing a tagName (the name of the class, e.g. "stylish" vs "not stylish") and probability (confidence score between 0 and 1).
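For reference, the response body looks roughly like this (the tag names and probabilities are illustrative example values, not real API output):

```javascript
// Illustrative shape of a /classify response (example values only)
const response = {
  data: {
    predictions: [
      { tagName: 'not stylish', probability: 0.12 },
      { tagName: 'stylish', probability: 0.88 }
    ]
  }
};
```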

We can grab the prediction with the highest probability to determine the outfit recommendation:

// Copy the array before sorting so we don't mutate the response data
const predictionScores = [...response.data.predictions].sort(
  (a, b) => b.probability - a.probability
);

const highestConfidencePrediction = predictionScores[0];

const outfit = {
  isStylish: highestConfidencePrediction.tagName.includes("stylish"),
  confidenceScore: highestConfidencePrediction.probability  
};

Displaying Results

Finally, we'll display the AI-powered fashion recommendation to the user. Add state to track the recommendation and a conditional view to render it:

const [recommendation, setRecommendation] = useState(null);

// In snap button onPress handler:
setRecommendation(outfit);

// Add to returned JSX:
{recommendation && (
  <View style={styles.recommendationContainer}>
    <Text>
      The AI rates this outfit as {recommendation.isStylish ? "👍 Trendy" : "👎 Not Trendy"}
    </Text>
    <Text>Confidence: {(recommendation.confidenceScore * 100).toFixed(2)}%</Text>
  </View>
)}

So in just a few steps, we have a functional prototype of a fashion recommendation app! The user flow is simple: snap a photo, let the AI analyze it, and see the resulting recommendation appear.

Training the Model

Let's talk more about the data that powers the AI model. The pre-trained Custom Vision models provide a great starting point, but to build a truly personalized fashion app, we would want to optimize the model on user-specific data.

To enable this, we can prompt users to rate the AI's recommendations (thumbs up or down) as they use the app. We'd send this feedback data, along with the original image, to our backend API for storage.

const sendRating = async (isCorrect) => {
  // Pass a plain object – axios serializes it to JSON and sets the
  // Content-Type header automatically
  await axios.post("https://my-backend-api.com/model-feedback", {
    imageData: recommendation.imageData, // assumes the photo is stored alongside the recommendation
    isCorrect
  });
};

// In displayed recommendation:
<TouchableOpacity onPress={() => sendRating(true)}>
  <Text>👍</Text>   
</TouchableOpacity>
<TouchableOpacity onPress={() => sendRating(false)}>
  <Text>👎</Text>
</TouchableOpacity>
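On the server side, the /model-feedback route could be as simple as the following sketch. The validation rules and in-memory store are assumptions for illustration; a real backend would persist to a database and authenticate requests:

```javascript
// Minimal sketch of the backend feedback handler. The in-memory store is
// an assumption for illustration – a real backend would write to a database.
const feedbackStore = [];

function handleModelFeedback(body) {
  const { imageData, isCorrect } = body || {};
  // Reject malformed feedback so bad data never reaches retraining
  if (typeof isCorrect !== 'boolean' || !imageData) {
    return { status: 400, error: 'imageData and a boolean isCorrect are required' };
  }
  feedbackStore.push({ imageData, isCorrect, receivedAt: Date.now() });
  return { status: 201, stored: feedbackStore.length };
}
```

An Express route handler, for example, would call this with req.body and map the returned status onto the HTTP response.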

Then, we can periodically retrain the Custom Vision model using the accumulated feedback data. The key steps would be:

  1. Download all stored feedback images and labels
  2. Split into training and validation datasets
  3. Upload the new images and labels to Custom Vision
  4. Retrain the model using the updated dataset
  5. Publish the new model iteration
  6. Update the app to point to the new model endpoint
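Step 2 above can be sketched with a simple shuffle-and-slice helper. The 80/20 ratio is a common convention, not something Custom Vision requires:

```javascript
// Split labeled feedback items into training and validation sets.
// The 80/20 default ratio is a common convention, not a Custom Vision requirement.
function splitDataset(items, trainFraction = 0.8) {
  const shuffled = [...items];
  // Fisher–Yates shuffle so the split is random
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const cut = Math.floor(shuffled.length * trainFraction);
  return {
    train: shuffled.slice(0, cut),
    validation: shuffled.slice(cut)
  };
}
```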

Microsoft provides a training API that we can use to automate this process in code (e.g. a Node.js script):

const axios = require('axios');

function trainModel() {
  // The train endpoint takes no request body – images and tags must
  // already have been uploaded to the project
  return axios.post(trainingEndpoint + "projects/" + projectId + "/train", null, {
    headers: {
      'Training-Key': trainingKey
    }
  });
}

// Upload images
// Tag images
// Train model
trainModel().then(response => {
  const iteration = response.data;
  // Publish the new model iteration
  // Update the iteration name in the app
});

By retraining the model on user-supplied data, the model's predictions will align more closely with each individual user's fashion preferences over time. We could even create separate models for different use cases, such as casual vs. formal outfits, to further personalize and improve the recommendations.

Of course, there are some challenges to consider:

  • Acquiring enough high-quality, diverse training data to avoid overfitting the model
  • Filtering out mislabeled or low-quality images from user feedback
  • Handling the "cold start" problem for new users without much feedback data

Techniques like transfer learning, data augmentation, and gathering initial training data from generic fashion datasets can help mitigate these issues to an extent.

Conclusion

In this post, we walked through the process of building a fashion police app powered by computer vision AI. We leveraged React Native to build a cross-platform mobile app, and integrated the camera and a cloud-based AI model to provide real-time outfit recommendations.

While this app serves as an easy starting point, there are so many additional features we could build to improve it:

  • Add social sharing so users can post their outfits and get feedback from others
  • Implement a recommendation engine to suggest clothing items that pair well with the user's existing wardrobe
  • Gamify the app by awarding points for stylish outfits and encouraging friendly competition among friends
  • Integrate with e-commerce APIs to let users shop directly for recommended items

The machine learning possibilities are also endless – we could expand the model to detect more granular fashion attributes like color, pattern, brand, occasion, etc. We could even experiment with Generative Adversarial Networks (GANs) to generate realistic fashion images from scratch!

According to a 2018 study, the global AI fashion market size is expected to reach $4.4 billion by 2027, growing at a rate of 40.8% per year [2]. Major fashion companies like Stitch Fix, Levi's, and Nike are increasingly leveraging AI to personalize offerings, streamline logistics, and gain customer insights.

AI is transforming businesses across every industry, and the most successful apps of the future will be those that can intelligently adapt to each user's unique needs and preferences. As mobile developers, we have an exciting opportunity to use tools like React Native and cognitive services to build smarter, more engaging app experiences. The fashion police app is just one example of how AI can be used to provide utility and delight for users.

I hope this tutorial has inspired you to think about new ways to add AI capabilities into your own apps! Feel free to use this code as a starting point and experiment with your own ideas. What other AI-powered features would take this fashion app to the next level? Let me know what you come up with.

Until next time, happy coding!

References

[1] Stack Overflow Developer Survey 2020: https://insights.stackoverflow.com/survey/2020#development-environments-and-tools

[2] Artificial Intelligence (AI) in Fashion Market Forecast, 2018-2027: https://www.polarismarketresearch.com/industry-analysis/artificial-intelligence-in-fashion-market

All code and images by the author unless otherwise noted
