If I understand correctly, you're trying to use the object_detection model with one of the pre-trained networks offered by TensorFlow, right?
Then, if you're comfortable reading a bit of code, you can take a look at models/research/object_detection/builders/optimizer_builder.py and see which optimizers can be used and with which parameters.
If you just want an out-of-the-box solution instead, this is what I did:
optimizer {
  # momentum_optimizer {
  adam_optimizer: {
    learning_rate: {
      manual_step_learning_rate {
        initial_learning_rate: .0002
        schedule {
          step: 4500
          learning_rate: .0001
        }
        schedule {
          step: 7000
          learning_rate: .00008
        }
        schedule {
          step: 10000
          learning_rate: .00004
        }
      }
    }
    # momentum_optimizer_value: 0.9
  }
  use_moving_average: false
}
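To make the schedule concrete, here is a minimal Python sketch (my own illustration, not the actual TensorFlow implementation) of how manual_step_learning_rate behaves: training starts at initial_learning_rate, and the rate drops to the given value once the global step passes each schedule boundary.

```python
# Illustrative sketch of a manual-step learning-rate schedule
# (an assumption of how the config is interpreted, not TF's code).
def manual_step_lr(step, initial_lr, schedule):
    """schedule: list of (boundary_step, learning_rate), sorted by step."""
    lr = initial_lr
    for boundary, rate in schedule:
        if step >= boundary:
            lr = rate  # past this boundary, use its rate
    return lr

# Boundaries taken from the config above:
schedule = [(4500, 0.0001), (7000, 0.00008), (10000, 0.00004)]
print(manual_step_lr(0, 0.0002, schedule))      # 0.0002
print(manual_step_lr(5000, 0.0002, schedule))   # 0.0001
print(manual_step_lr(12000, 0.0002, schedule))  # 4e-05
```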
In my (limited) experience I've noticed that using the same learning_rate as the momentum_optimizer makes the learning too fast and/or leads to NaN losses, so I usually decrease it by a factor of 10 or more. I'm trying it just now. :)