TensorFlow Object Detection Model Training

This is a summary of this nice tutorial.


  1. Install TensorFlow.
  2. Download the TensorFlow models repository.
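Assuming a pip-based setup, these two steps look roughly like this (the protoc compilation and PYTHONPATH lines come from the models repo's own installation instructions; adjust paths to your environment):

```shell
pip install tensorflow        # or tensorflow-gpu

git clone https://github.com/tensorflow/models.git
cd models/research
# compile the protobuf definitions used by the object_detection package
protoc object_detection/protos/*.proto --python_out=.
# make object_detection and slim importable
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
```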

Annotating images and serializing the dataset

All the scripts mentioned in this section take arguments from the command line and print help messages through the -h/--help flags. If needed, also check the README of the repo they come from for more details.

  1. Install labelImg. It is a Python package, so you can install it via pip, but the version on GitHub is more up to date. It saves annotations in the PASCAL VOC format.
  2. Annotate your dataset using labelImg.
  3. Use this script to convert the XML files generated by labelImg into a single CSV file.
  4. Use this script to split the CSV file in two: one with training examples and one with evaluation examples. Let's call them train.csv and eval.csv. Images are selected randomly, and there are options to stratify examples by class, making sure that objects from all classes are present in both datasets. The usual proportions are 75% to 80% of the annotated objects for training and the rest for evaluation.
  5. Create a “label map” for your classes. You can check some examples to understand what they look like. You can also generate one from your original CSV file with this script.
  6. Use this script to convert the two CSV files (train.csv and eval.csv) into two TFRecord files (e.g. train.record and eval.record), the serialized data format TensorFlow works with best. You'll need the label map from the previous step for this.
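Steps 4 and 5 above rely on scripts from the tutorial's repo, but the core ideas can be sketched in plain Python. A minimal sketch, assuming the CSV rows use labelImg-style columns (at least "filename" and "class") and that the split is done per image so all boxes of one image land in the same set; the function names here are mine, not the tutorial's:

```python
import random
from collections import defaultdict

def split_annotations(rows, train_frac=0.8, seed=42):
    """Split annotation rows into train/eval sets, keeping all boxes
    of a given image in the same set (rows are dicts with at least a
    'filename' key)."""
    by_image = defaultdict(list)
    for row in rows:
        by_image[row["filename"]].append(row)
    filenames = sorted(by_image)
    random.Random(seed).shuffle(filenames)
    n_train = int(len(filenames) * train_frac)
    train = [r for f in filenames[:n_train] for r in by_image[f]]
    evaluation = [r for f in filenames[n_train:] for r in by_image[f]]
    return train, evaluation

def make_label_map(rows):
    """Build label map text from the classes found in the rows;
    ids start at 1, since 0 is reserved for the background class."""
    classes = sorted({row["class"] for row in rows})
    entries = [
        "item {{\n  id: {}\n  name: '{}'\n}}".format(i, name)
        for i, name in enumerate(classes, start=1)
    ]
    return "\n".join(entries)

# Synthetic example: 10 images, each with one 'cat' and one 'dog' box.
rows = [
    {"filename": "img{}.jpg".format(i), "class": c}
    for i in range(10) for c in ("cat", "dog")
]
train, evaluation = split_annotations(rows)
print(len(train), len(evaluation))  # → 16 4
print(make_label_map(rows).splitlines()[0])  # → item {
```

Note this simple version does not stratify by class; the real script has options for that, which matter when some classes are rare.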

Choosing a neural network and preparing the training pipeline

  1. Download one of the neural network models provided on this page. The ones trained on the COCO dataset make good starting points, since they were already trained to detect a variety of everyday objects.
  2. Provide a training pipeline, which is a config file that usually comes in the tar.gz file downloaded in the previous step. If it doesn't come in the tar.gz, it can be found here. You can also find a tutorial on how to create your own here.
    • The pipeline config file has some fields that must be adjusted before training starts. Its header describes which ones. Usually, they are the fields that point to the label map, the training and evaluation directories, and the neural network checkpoint. If you downloaded one of the models provided on this page, you should untar the tar.gz file and point the checkpoint path inside the pipeline config file to the "untarred" directory of the model (see this answer for help).
    • You should also check the number of classes. COCO has 90 classes, but your dataset may have more or fewer.
    • There are additional parameters that affect how much RAM the training process consumes, as well as the quality of the training. Things like the batch size, or how many batches TensorFlow can prefetch and keep in memory, may considerably increase the amount of RAM needed. I won't go over those here, since adjusting them involves a lot of trial and error.
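For reference, the fields mentioned above look roughly like this inside a pipeline config file (the paths are placeholders, and the exact structure depends on the model you downloaded — for instance, `model { ssd { ... } }` becomes `model { faster_rcnn { ... } }` for Faster R-CNN models):

```
model {
  ssd {
    num_classes: 2  # classes in YOUR label map, not COCO's 90
    ...
  }
}
train_config {
  batch_size: 24  # one of the knobs that affects RAM usage
  fine_tune_checkpoint: "PATH_TO_UNTARRED_MODEL/model.ckpt"
  ...
}
train_input_reader {
  label_map_path: "PATH_TO/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "PATH_TO/train.record"
  }
}
eval_input_reader {
  label_map_path: "PATH_TO/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "PATH_TO/eval.record"
  }
}
```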

Training the network

  1. Train the model. This is how you do it locally. Optional: to check training progress, start TensorBoard with its --logdir pointing to the --model_dir of object_detection/model_main.py.

  2. Export the network, like this.

  3. Use the exported .pb file in your object detector.
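The steps above boil down to command-line invocations roughly like the following (paths and the checkpoint number are placeholders; the flag names come from the object_detection scripts and may change between releases):

```shell
# 1. Train locally, and optionally watch progress in TensorBoard
python object_detection/model_main.py \
    --pipeline_config_path=PATH_TO/pipeline.config \
    --model_dir=PATH_TO/train_dir
tensorboard --logdir=PATH_TO/train_dir

# 2. Export the trained network as a frozen graph (.pb)
python object_detection/export_inference_graph.py \
    --input_type=image_tensor \
    --pipeline_config_path=PATH_TO/pipeline.config \
    --trained_checkpoint_prefix=PATH_TO/train_dir/model.ckpt-NNNN \
    --output_directory=PATH_TO/exported_model
```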


In the data augmentation section of the training pipeline, options can be added or removed to try to improve training results. Some of the available options are listed here.
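As an illustration, augmentation options go inside the train_config block of the pipeline file as repeated data_augmentation_options entries. The option names below are real ones from the API's preprocessor, but which combination helps is dataset-dependent:

```
train_config {
  ...
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    random_adjust_brightness {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
}
```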

Written on September 22, 2017