How to add custom evaluation metrics in Tensorflow Object Detection API?
I would like to have my custom list of metrics when evaluating an instance segmentation model in TensorFlow's Object Detection API, which can be summarized as follows:

  • Precision values for IOUs of 0.5-0.95 with increments of 0.05
  • Recall values for IOUs of 0.5-0.95 with increments of 0.05
  • AUC values for precision and recall between 0 and 1 with increments of 0.05 (see the sketch after this list)
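
For concreteness, this is roughly how I picture the target set being enumerated (a minimal sketch; the names and the AUC entry are my own placeholders, not metrics the API currently reports):

import numpy as np

# IoU thresholds 0.50, 0.55, ..., 0.95
iou_thresholds = np.linspace(0.5, 0.95, 10)

desired_metrics = []
for thr in iou_thresholds:
    desired_metrics.append('Precision/mAP@.{:.0f}IOU'.format(thr * 100))
    desired_metrics.append('Recall/AR@.{:.0f}IOU'.format(thr * 100))
desired_metrics.append('AUC/precision_recall')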

What I have tried so far is modifying the existing COCO evaluation metrics by tweaking some code in the PythonAPI of pycocotools and in the metrics files within TensorFlow's research models repository. Currently, the default output values for COCO evaluation are the following:

Precision/mAP
Precision/mAP@.50IOU
Precision/mAP@.75IOU
Precision/mAP (small)
Precision/mAP (medium)
Precision/mAP (large)
Recall/AR@1
Recall/AR@10
Recall/AR@100
Recall/AR@100 (small)
Recall/AR@100 (medium)
Recall/AR@100 (large)

So I first decided to use coco_detection_metrics in the eval_config field of the .config file used for training:

eval_config: {
  metrics_set: "coco_detection_metrics"
}
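
As a side note, since per-category values are used further below, I believe the eval_config can also request a per-category breakdown, though the exact field name should be double-checked against eval.proto for the version in use:

eval_config: {
  metrics_set: "coco_detection_metrics"
  include_metrics_per_category: true
}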

I then edited cocoeval.py and coco_tools.py multiple times (once per added value), adding more items to the stats list and to the stats summary dictionary in order to get the desired result. For demonstration purposes, I am only going to show one example, adding precision at IOU=0.55 on top of precision at IOU=0.5.

So, this is the modified method of the COCOeval class inside cocoeval.py

def _summarizeDets():
    # the stats array also has to be enlarged beyond its default 12 entries
    stats[1] = _summarize(1, iouThr=.5, maxDets=self.params.maxDets[2])
    stats[12] = _summarize(1, iouThr=.55, maxDets=self.params.maxDets[2])
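
What I would prefer over one hand-edited line per value is something like a loop over the threshold list inside the same function (only a sketch; the array size and the index offset are my own choices, and recall could be handled the same way):

def _summarizeDets():
    iou_thrs = np.linspace(0.5, 0.95, 10)  # 0.50, 0.55, ..., 0.95; np is already imported in cocoeval.py
    stats = np.zeros((12 + len(iou_thrs),))
    # ... the 12 default summaries fill stats[0] to stats[11] as before ...
    for i, thr in enumerate(iou_thrs):
        # one precision summary per IoU threshold, appended after the defaults
        stats[12 + i] = _summarize(1, iouThr=float(thr), maxDets=self.params.maxDets[2])
    return stats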

and these are the edited snippets under the COCOEvalWrapper class inside coco_tools.py:

summary_metrics = OrderedDict([
    ('Precision/mAP@.50IOU', self.stats[1]),
    ('Precision/mAP@.55IOU', self.stats[12])])

for category_index, category_id in enumerate(self.GetCategoryIdList()):
    category = self.GetCategory(category_id)['name']
    per_category_ap['Precision mAP@.50IOU ByCategory/{}'.format(category)] = self.category_stats[1][category_index]
    per_category_ap['Precision mAP@.55IOU ByCategory/{}'.format(category)] = self.category_stats[12][category_index]
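
Again, what I have in mind instead of repeating these lines for each threshold is roughly the following (only a sketch; the index offset assumes the extra values were appended to stats after index 11):

iou_thrs = [0.5 + 0.05 * i for i in range(10)]
extra_metrics = [
    ('Precision/mAP@.{:.0f}IOU'.format(thr * 100), self.stats[12 + i])
    for i, thr in enumerate(iou_thrs)]
summary_metrics = OrderedDict(list(summary_metrics.items()) + extra_metrics)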

It would be useful to know a more efficient way to deal with this and to easily request a list of custom evaluation metrics without having to tweak the existing COCO files. Ideally, my primary goal is to

  • Be able to create a custom console output based on the metrics provided at the beginning of the question

and my secondary goals would be to

  • Export the metrics with their respective values in JSON format
  • Visualize the three graphs in TensorBoard (a rough sketch of both goals follows)
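
Something along these lines is what I picture for those two goals (a minimal sketch, assuming TF 1.x and that evaluation produces a flat dict mapping metric names to floats; the function name and paths are placeholders):

import json
import tensorflow as tf

def export_metrics(metrics, json_path, summary_dir, step=0):
    # dump the metric-name -> value dict to a JSON file
    with open(json_path, 'w') as f:
        json.dump({name: float(value) for name, value in metrics.items()}, f, indent=2)
    # write each value as a scalar summary so TensorBoard can plot it
    writer = tf.summary.FileWriter(summary_dir)
    for name, value in metrics.items():
        summary = tf.Summary(value=[tf.Summary.Value(tag=name, simple_value=float(value))])
        writer.add_summary(summary, step)
    writer.close()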
Heel answered 21/6, 2019 at 3:42
