TensorFlow high false-positive rate and non-max-suppression issue

I am training the TensorFlow Object Detection API on Windows 10, using faster_rcnn_inception_v2_coco as the pretrained model, with tensorflow-gpu 1.6 on an NVIDIA GeForce GTX 1080, CUDA 9.0, and cuDNN 7.0.

My dataset contains only one object, "Pistol", and 3000 images (2700 in the train set, 300 in the test set). The image sizes range from ~100x200 to ~800x600.

I trained this model for 55k iterations, where the mAP was ~0.8 and the TotalLoss seems to have converged to 0.001. However, looking at the evaluation, there are a lot of multiple bounding boxes on the same detected object (e.g. this and this) and a lot of false positives (a house detected as a pistol). For example, in this photo taken by me (the blur filter was applied afterwards), the model detects a person and a car as pistols, alongside the correct detection.

The dataset is uploaded here, together with the tfrecords and the label map. I used this config file, where the only things I changed are: num_classes to 1, the fine_tune_checkpoint, the input_path and label_map_path for train and eval, and num_examples. Since I thought the multiple boxes were a non-max-suppression problem, I changed the score_threshold (line 73) from 0 to 0.01 and the iou_threshold (line 74) from 1 to 0.6. With the standard values the outcome was much worse than this.
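
For reference, here is roughly what my modified post-processing block looks like (the field names come from the Object Detection API's post_processing proto; the max_detections values are the defaults from the sample config, and the comments are mine):

second_stage_post_processing {
  batch_non_max_suppression {
    score_threshold: 0.01    # was 0: discard near-zero-confidence boxes before NMS
    iou_threshold: 0.6       # boxes overlapping a higher-scoring box more than this are suppressed
    max_detections_per_class: 100
    max_total_detections: 300
  }
  score_converter: SOFTMAX
}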

What can I do to get good detections? What should I change? Maybe I'm missing something about parameter tuning...

Thanks

Flatter asked 6/4, 2018 at 7:41

I think that before diving into parameter tuning (i.e. the mentioned score_threshold) you will have to review your dataset.

I didn't check the entire dataset you shared, but from a high-level view the main problem I found is that most of the images are really small and have highly variable aspect ratios.

In my opinion this conflicts with this part of your configuration file:

image_resizer {
  keep_aspect_ratio_resizer {
    min_dimension: 600
    max_dimension: 1024
  }
}

If you take one of the images of your dataset and manually apply that transformation, you will see that the result is very noisy for small images and very deformed for the many images that have a different aspect ratio.
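
You can approximate what the resizer does with a few lines of Python and eyeball the output yourself (a rough sketch with Pillow; pistol_0001.jpg is a placeholder filename, and the scaling rule mirrors keep_aspect_ratio_resizer: grow the short side to min_dimension unless the long side would pass max_dimension):

from PIL import Image

def keep_aspect_ratio_resize(path, min_dim=600, max_dim=1024):
    # Approximate the Object Detection API's keep_aspect_ratio_resizer.
    img = Image.open(path)
    w, h = img.size
    # Scale the short side up to min_dim, capped so the long side stays <= max_dim.
    scale = min(min_dim / min(w, h), max_dim / max(w, h))
    return img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)

# A ~100x200 image gets upsampled ~5x here, which is where the noise comes from.
keep_aspect_ratio_resize("pistol_0001.jpg").show()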

I would highly recommend rebuilding your dataset with higher-definition images, and maybe trying to preprocess the images with unusual aspect ratios with padding, cropping, or other strategies.
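
If you go the padding route, something like this keeps the object undistorted while normalizing the aspect ratio (a sketch; the 4:3 target and gray fill are arbitrary choices):

from PIL import Image

def pad_to_aspect(img, target_ratio=4 / 3, fill=(128, 128, 128)):
    # Pad with a solid border until width / height == target_ratio.
    w, h = img.size
    if w / h < target_ratio:   # too tall: widen the canvas
        new_w, new_h = round(h * target_ratio), h
    else:                      # too wide: make the canvas taller
        new_w, new_h = w, round(w / target_ratio)
    canvas = Image.new("RGB", (new_w, new_h), fill)
    canvas.paste(img, ((new_w - w) // 2, (new_h - h) // 2))  # center the original
    return canvas  # remember to shift box annotations by the same offsets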

If you want to stick with the small images you'd have to at least change the min and max dimensions of the image_resizer but, from my experience, the biggest problem here is the dataset and I would invest the time in trying to fix that.

P.S.

I don't see the house false positive as a big problem if we consider that it comes from a totally different domain than your dataset.

You could probably adjust the minimum confidence required to consider a detection a true positive and remove it.
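
With the arrays the exported detection graph returns, that's just a score cut (a sketch; 0.5 is an arbitrary cutoff to tune on your own validation data):

import numpy as np

def filter_detections(boxes, scores, classes, min_score=0.5):
    # boxes: (N, 4), scores: (N,), classes: (N,) numpy arrays from the detection graph.
    keep = scores >= min_score
    return boxes[keep], scores[keep], classes[keep]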

If you take the current winner of COCO and feed it strange images, like frames from a cartoon, you will see that it generates a lot of false positives.

So it's more a problem with current object detection approaches, which are not robust to domain changes.

Nunuance answered 7/4, 2018 at 11:1
Thanks for your suggestion. My test set will be images of 800x600, so I edited keep_aspect_ratio_resizer with min dimension 600 and max dimension 800, and I deleted from my train set all the images with one side <= 400 or >= 1000 and those with aspect ratio <= 1.5 or >= 2.0. Moreover, as you implicitly suggested, all the images from different domains were deleted as well. However, the problem is still there: after 80k+ iterations, I still have false positives like this one imgur.com/qb1gOb8 , with cars detected as pistols with high confidence. Do you have any other advice?Flatter
I think that in that particular photo the problem is the blur. If you want your model to be robust against that kind of image you should add images with similar blur to your training set.Nunuance
The blur was added by me after the detection: there is no blur in the processed images.Flatter
Another example of a false positive is this one: in the same video (700x448), the pistol is correctly detected imgur.com/R8CLNEY , but, when the pistol disappears, there is a "stable" false positive that covers 50% of the image imgur.com/RHJigqhFlatter
@Flatter have you overcome the issue of false positives? If so, can you please guide me on how to overcome it, as the same issue occurs for me too?Ring

A lot of people I see online have been running into the same issue using the TensorFlow API. I think there are some inherent problems with the idea/process of using the pretrained models with custom classifier(s) at home. For example, people want to use SSD Mobile or Faster RCNN Inception to detect objects like "person with helmet", "pistol", or "tool box", etc. The general process is to feed in images of that object, but most of the time, no matter how many images (200 to 2000), you still end up with false positives when you actually run it at your desk.

The object classifier works great when you show it the object in its own context, but you end up getting 99% matches on everyday items like your bedroom window, your desk, your computer monitor, keyboard, etc. People have mentioned the strategy of introducing negative images or soft images. I think the problem has to do with the limited context in the images that most people use.

The pretrained models were trained with over a dozen classifiers in a wide variety of environments. One example could be a car on the street: the CNN sees the car, and then everything in that image that is not a car is a negative image, which includes the street, buildings, sky, etc. In another image, it can see a bottle, and everything else in that image includes desks, tables, windows, etc. I think the problem with training custom classifiers is that it is a negative-image problem. Even if you have enough images of the object itself, there isn't enough data of that same object in different contexts and backgrounds. So in a sense there are not enough negative images, even if conceptually you shouldn't need negative images. When you run the algorithm at home you get false positives all over the place, identifying objects around your own room.

I think the idea of transfer learning in this way is flawed. We just end up seeing a lot of great tutorials online of people identifying playing cards, Millennium Falcons, etc., but none of those models are deployable in the real world, as they all generate a bunch of false positives when they see anything outside of their image pool. The best strategy would be to retrain the CNN from scratch with multiple classifiers and add the desired ones in there as well. I suggest re-introducing a previous dataset from ImageNet or Pascal with 10-20 pre-existing classifiers, adding your own, and retraining it.
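
If you try that route, the label map would simply list the pre-existing classes alongside your own (a sketch; the ids and class picks are only illustrative):

# 10-20 classes carried over from Pascal VOC / ImageNet
item { id: 1  name: 'person' }
item { id: 2  name: 'car' }
item { id: 3  name: 'bottle' }
# plus the custom class appended at the end
item { id: 4  name: 'pistol' }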

Noshow answered 29/8, 2019 at 19:42
