TensorFlow – How to build an image classifier with the MobileNet model?

Prepare classified training images

curl http://download.tensorflow.org/example_images/flower_photos.tgz | tar xz -C tf_files

The classified training images are saved in a folder structure like this:

daisy
dandelion
roses
sunflowers
tulips

The folder names are important: each folder name is used as the class label for the images inside it.

Retrain the network with the MobileNet model

What is MobileNet?

  • MobileNets are a family of mobile-first computer vision models for TensorFlow. They are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases.
  • MobileNet models perform image classification – they take images as input and classify the major object in the image into a set of pre-defined classes.
  • They are trained on the ImageNet dataset, which contains images from 1,000 classes. MobileNet models are also very efficient in terms of speed and size, and hence are ideal for embedded and mobile applications.

Run training

  • ImageNet models are networks with millions of parameters that can differentiate a large number of classes. We’re only training the final layer of that network, so training will finish in a reasonable amount of time. Note that the command below assumes the shell variable ARCHITECTURE has already been set; the codelab uses, for example, ARCHITECTURE="mobilenet_0.50_224".
python -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --architecture="${ARCHITECTURE}" \
  --image_dir=tf_files/flower_photos
  • This script downloads the pre-trained model, adds a new final layer, and trains that layer on the flower photos you’ve downloaded. 
  • ImageNet does not include any of these flower species we’re training on here. However, the kinds of information that make it possible for ImageNet to differentiate among 1,000 classes are also useful for distinguishing other objects. By using this pre-trained network, we are using that information as input to the final classification layer that distinguishes our flower classes.

Classify a flower image with our retrained model

python -m scripts.label_image \
    --graph=tf_files/retrained_graph.pb  \
    --image=our_image_file_path

Reference: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0

How to embed our retrained model into an Android app?

Convert the “.pb” model file into the “.lite” format

IMAGE_SIZE=224
tflite_convert \
  --graph_def_file=tf_files/retrained_graph.pb \
  --output_file=tf_files/optimized_graph.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,${IMAGE_SIZE},${IMAGE_SIZE},3 \
  --input_array=input \
  --output_array=final_result \
  --inference_type=FLOAT \
  --input_data_type=FLOAT

Add the “.lite” and “labels.txt” files to the app’s “assets” folder

Install the pre-compiled TFLite Android Archive (AAR) by adding these lines to “build.gradle”

repositories {
    maven {
        url 'https://google.bintray.com/tensorflow'
    }
}

dependencies {
    // ...
    compile 'org.tensorflow:tensorflow-lite:+'
}
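
With the dependency in place, the model can be loaded from the “assets” folder and run through the TensorFlow Lite Interpreter. Below is a minimal sketch, not the codelab’s full ImageClassifier: the class name FlowerClassifier and the asset names “graph.lite” and “labels.txt” are assumptions for illustration, the image size must match the 224 used during conversion, and the 128/128 input normalization follows the MobileNet float preprocessing used in the codelab. The model asset also has to be stored uncompressed (the codelab’s build.gradle adds aaptOptions noCompress for this) so it can be memory-mapped.

import android.app.Activity;
import android.content.res.AssetFileDescriptor;
import android.graphics.Bitmap;

import org.tensorflow.lite.Interpreter;

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;

/** Minimal sketch of running the retrained .lite model on a Bitmap. */
public class FlowerClassifier {

    // Assumed asset names; use whatever names you copied into the "assets" folder.
    private static final String MODEL_PATH = "graph.lite";
    private static final String LABEL_PATH = "labels.txt";
    // Must match the IMAGE_SIZE used when converting the graph.
    private static final int IMAGE_SIZE = 224;

    private final Interpreter tflite;
    private final List<String> labels;

    public FlowerClassifier(Activity activity) throws IOException {
        tflite = new Interpreter(loadModelFile(activity));
        labels = loadLabels(activity);
    }

    /** Memory-maps the model from assets; requires the asset to be stored uncompressed. */
    private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
        AssetFileDescriptor fd = activity.getAssets().openFd(MODEL_PATH);
        FileInputStream stream = new FileInputStream(fd.getFileDescriptor());
        FileChannel channel = stream.getChannel();
        return channel.map(FileChannel.MapMode.READ_ONLY, fd.getStartOffset(), fd.getDeclaredLength());
    }

    /** Reads one label per line, in the same order as the model's output. */
    private List<String> loadLabels(Activity activity) throws IOException {
        List<String> result = new ArrayList<>();
        BufferedReader reader =
                new BufferedReader(new InputStreamReader(activity.getAssets().open(LABEL_PATH)));
        String line;
        while ((line = reader.readLine()) != null) {
            result.add(line);
        }
        reader.close();
        return result;
    }

    /** Classifies one bitmap and returns the most likely label. */
    public String classify(Bitmap bitmap) {
        // 1 x 224 x 224 x 3 float input; MobileNet expects values normalized to roughly [-1, 1].
        ByteBuffer input = ByteBuffer.allocateDirect(4 * IMAGE_SIZE * IMAGE_SIZE * 3)
                .order(ByteOrder.nativeOrder());
        Bitmap scaled = Bitmap.createScaledBitmap(bitmap, IMAGE_SIZE, IMAGE_SIZE, true);
        int[] pixels = new int[IMAGE_SIZE * IMAGE_SIZE];
        scaled.getPixels(pixels, 0, IMAGE_SIZE, 0, 0, IMAGE_SIZE, IMAGE_SIZE);
        for (int pixel : pixels) {
            input.putFloat((((pixel >> 16) & 0xFF) - 128) / 128.0f);  // R
            input.putFloat((((pixel >> 8) & 0xFF) - 128) / 128.0f);   // G
            input.putFloat(((pixel & 0xFF) - 128) / 128.0f);          // B
        }

        // One probability per label, produced by the "final_result" tensor.
        float[][] probabilities = new float[1][labels.size()];
        tflite.run(input, probabilities);

        int best = 0;
        for (int i = 1; i < labels.size(); i++) {
            if (probabilities[0][i] > probabilities[0][best]) {
                best = i;
            }
        }
        return labels.get(best);
    }

    /** Releases the interpreter when the classifier is no longer needed. */
    public void close() {
        tflite.close();
    }
}

Construct the classifier once (for example in onCreate) and call classify() with a Bitmap taken from the camera or gallery; it returns the label with the highest probability from the “final_result” output.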
