Tensorflow 1.9 / Object Detection: model_main.py only evaluates one image
I've updated to Tensorflow 1.9 & the latest master of the Object Detection API. When running a training/evaluation session that worked fine previously (I think version 1.6), the training appears to proceed as expected, but I only get evaluation & metrics for one image (the first).

In Tensorboard the image is labeled 'Detections_Left_Groundtruth_Right'. The evaluation step itself also happens extremely quickly, which leads me to believe this isn't just a Tensorboard issue.

Looking in model_lib.py, I see some suspicious code (near line 349):

  eval_images = (
      features[fields.InputDataFields.original_image] if use_original_images
      else features[fields.InputDataFields.image])
  eval_dict = eval_util.result_dict_for_single_example(
      eval_images[0:1],              # only the first image of the batch
      features[inputs.HASH_KEY][0],  # only the first image's hash key
      detections,
      groundtruth,
      class_agnostic=class_agnostic,
      scale_to_absolute=True)

This reads to me like the evaluator is always running a single evaluation on the first image. Has anyone seen and/or fixed this? I will update if changing the above works.
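
For anyone unsure about the slicing, here is a minimal NumPy sketch of what I mean (illustrative only, not the actual Object Detection code): taking [0:1] on a batch keeps only its first element, so everything after the first image would be dropped.

  import numpy as np

  # Stand-in for a batch of 8 RGB eval images, shape (batch, height, width, channels).
  eval_images = np.zeros((8, 64, 64, 3), dtype=np.float32)

  # Slicing with [0:1] keeps only the first image along the batch dimension.
  first_only = eval_images[0:1]
  print(first_only.shape)  # (1, 64, 64, 3) -- the other 7 images are gone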

Omnivore asked 1/8/2018 at 15:01
Comment: I'm seeing the same issue, I'd be keen to find out why. – Spooner
You are right: the Object Detection API supports only a batch size of 1 for evaluation. The number of evaluations is equal to the number of eval steps, and the eval metrics are accumulated across those batches.
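
To make "accumulated across batches" concrete, here is a generic TF 1.x sketch using tf.metrics (not the Object Detection API's own metric code): each eval step runs the metric's update op on one batch, and the final value covers everything seen so far.

  import numpy as np
  import tensorflow as tf  # TF 1.x

  # A streaming metric keeps its running totals in local variables.
  values = tf.placeholder(tf.float32, shape=[None])
  mean_value, update_op = tf.metrics.mean(values)

  with tf.Session() as sess:
      sess.run(tf.local_variables_initializer())
      # One update per eval step; each "batch" here plays the role of one
      # small evaluation batch.
      for batch in [np.array([0.5, 0.7]), np.array([0.9])]:
          sess.run(update_op, feed_dict={values: batch})
      # Final value is the mean over all values seen: (0.5 + 0.7 + 0.9) / 3 = 0.7
      print(sess.run(mean_value))

So even though only one image shows up in TensorBoard, the metric values themselves should still reflect all eval steps.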

Btw, a change to view more eval images in Tensorboard was just submitted to master.

Tumbleweed answered 14/8/2018 at 3:15
I have the same issue when using model_main.py. When I use the train.py and eval.py scripts in the object_detection/legacy/ directory, however, I can see more than one image in TensorBoard.

I haven't had time yet to go through the code and fully understand what is going on. I think the legacy eval path does not call the code you are quoting, because the images in TensorBoard look different: rather than left/right image pairs showing prediction and ground truth, only the predicted bounding boxes are shown.

Eastward answered 29/8/2018 at 2:50
For TensorFlow 1.14 with the tensorflow/models repository, I added this to my config:

num_visualizations: <number of eval images to visualize>

example:

eval_config: {
  num_visualizations: 288
  num_examples: 288
  max_evals: 288
}

I'm not certain about num_examples or max_evals, but you definitely need num_visualizations in your config to see more images.

https://github.com/tensorflow/models/issues/5067

This is also referenced here: Show more images in Tensorboard - Tensorflow object detection

Gastelum answered 10/5/2020 at 23:17
