How can I quantize facenet's Inception-ResNet-v1 model in Tensorflow?
What I want to do

I'm trying to create a quantized version of the Inception-ResNet-v1 model used in facenet, with not only quantized weights but quantized nodes as well, following TensorFlow's graph_transforms guide.

What I have tried

Starting from a model pretrained on the CASIA-WebFace dataset, I tried to fine-tune the model with fake quantization nodes by adding the following line

tf.contrib.quantize.create_training_graph(quant_delay=0)

to the facenet training script train_softmax.py, right after the total loss is computed (line 178), and the line below just before the checkpoints are saved (line 462):

tf.contrib.quantize.create_eval_graph()
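
In context, the two calls sit roughly like this (a condensed sketch of my changes; the surrounding train_softmax.py code is abbreviated from memory, not verbatim):

import tensorflow as tf

# --- graph construction in train_softmax.py ---
total_loss = tf.add_n([cross_entropy_mean] + regularization_losses,
                      name='total_loss')

# NEW (line 178): rewrite the training graph with fake quantization ops
tf.contrib.quantize.create_training_graph(quant_delay=0)

train_op = facenet.train(total_loss, global_step, args.optimizer,
                         learning_rate, args.moving_average_decay,
                         tf.global_variables())

# --- training loop, then checkpoint saving ---

# NEW (line 462): rewrite the graph for inference before saving
tf.contrib.quantize.create_eval_graph()
saver.save(sess, checkpoint_path, global_step=step, write_meta_graph=False)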

I then fine-tuned the pretrained model for 1000 iterations, using a learning rate of 0.0005:

python3 src/train_softmax.py \
--logs_base_dir ~/logs/facenet/ \
--models_base_dir ${model_path} \
--data_dir ${casia_path} \
--image_size 160 \
--model_def models.inception_resnet_v1 \
--lfw_dir ${lfw_path} \
--optimizer ADAM \
--learning_rate -1 \
--max_nrof_epochs 150 \
--keep_probability 0.8 \
--random_crop \
--random_flip \
--use_fixed_image_standardization \
--learning_rate_schedule_file data/learning_rate_schedule_classifier_casia.txt \
--weight_decay 5e-4 \
--embedding_size 128 \
--lfw_distance_metric 1 \
--lfw_use_flipped_images \
--lfw_subtract_mean \
--validation_set_split_ratio 0.05 \
--validate_every_n_epochs 5 \
--prelogits_norm_loss_factor 5e-4 \
--center_loss_factor 2e-4 \
--gpu_memory_fraction 0.7 \
--pretrained_model ${model_path}/20180614-060325/model-20180614-060325.ckpt-90
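
Note that --learning_rate -1 tells train_softmax.py to take the rate from the schedule file instead; to pin it at 0.0005 for this run, my learning_rate_schedule_classifier_casia.txt contains an entry along these lines (the epoch: rate format is facenet's, the value is mine):

# epoch: learning rate
0: 0.0005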

So far, so good. Next, I froze the resulting graph using facenet's freeze_graph.py:

python3 src/freeze_graph.py ${model_path}/20180709-100209/ ${model_path}/20180709-100209/model-20180709-100209-frozen.pb
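
For reference, freeze_graph.py essentially restores the checkpoint and bakes all variables reachable from the output nodes into constants; the core step boils down to something like this (a simplified sketch of freeze_graph.freeze_graph_def, not the verbatim source):

import tensorflow as tf
from tensorflow.python.framework import graph_util

# After restoring the checkpoint into sess, every variable reachable
# from the output nodes is converted into a constant.
output_graph_def = graph_util.convert_variables_to_constants(
    sess, input_graph_def, output_node_names.split(","))

with tf.gfile.GFile(output_file, 'wb') as f:
    f.write(output_graph_def.SerializeToString())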

Finally, I tried to use transform_graph to create a fully quantized model:

bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=${model_path}/20180709-100209/model-20180709-100209-frozen.pb \
--out_graph=${model_path}/20180709-100209/model-20180709-100209-quantized.pb \
--inputs='input,phase_train' \
--outputs='embeddings' \
--transforms='
  add_default_attributes
  strip_unused_nodes
  remove_nodes(op=Identity, op=CheckNumerics)
  fold_constants(ignore_errors=true)
  fold_batch_norms
  fold_old_batch_norms
  quantize_weights
  quantize_nodes
  strip_unused_nodes
  sort_by_execution_order'
INFO: Analysed target //tensorflow/tools/graph_transforms:transform_graph (0 packages loaded).
INFO: Found 1 target...
Target //tensorflow/tools/graph_transforms:transform_graph up-to-date:
  bazel-bin/tensorflow/tools/graph_transforms/transform_graph
INFO: Elapsed time: 0.369s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
2018-07-09 10:28:57.970978: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying add_default_attributes
2018-07-09 10:28:58.068712: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying strip_unused_nodes
2018-07-09 10:28:58.204184: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying remove_nodes
2018-07-09 10:29:22.175832: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying fold_constants
2018-07-09 10:29:22.241960: E tensorflow/tools/graph_transforms/transform_graph.cc:333] fold_constants: Ignoring error Input 0 of node InceptionResnetV1/Repeat/block35_1/Conv2d_1x1/weights_quant/AssignMinLast was passed float from InceptionResnetV1/Repeat/block35_1/Conv2d_1x1/weights_quant/min:0 incompatible with expected float_ref.
2018-07-09 10:29:22.294469: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying fold_batch_norms
2018-07-09 10:29:22.421606: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying fold_old_batch_norms
2018-07-09 10:29:22.772485: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying quantize_weights
2018-07-09 10:29:23.224347: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying quantize_nodes
2018-07-09 10:29:25.297763: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying strip_unused_nodes
2018-07-09 10:29:25.403029: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying sort_by_execution_order

Note the error in the fold_constants transform. Trying to run the resulting model then fails with the following:

Traceback (most recent call last):
  File "benchmark_gpu.py", line 116, in <module>
    recognizer = FaceRecognizer(config)
  File "../facerecognizer.py", line 51, in __init__
    self.load_model()
  File "../facerecognizer.py", line 55, in load_model
    facenet.load_model(self.model)
  File "../3rd-party/facenet/src/facenet.py", line 373, in load_model
    tf.import_graph_def(graph_def, input_map=input_map, name='')
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 602, in import_graph_def
    op_to_bind_to, node.name))
ValueError: Specified colocation to an op that does not exist during import: InceptionResnetV1/Repeat_1/block17_6/Conv2d_1x1/act_quant/min in InceptionResnetV1/Repeat_1/block17_6/Conv2d_1x1/act_quant/AssignMinEma/InceptionResnetV1/Repeat_1/block17_6/Conv2d_1x1/act_quant/min/AssignAdd/value
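
Since the traceback ends inside tf.import_graph_def, the failure should be reproducible with a bare graph import, independent of the rest of my pipeline (minimal sketch; the path is mine):

import tensorflow as tf

# Importing the transformed graph alone already triggers the
# colocation ValueError at import time.
with tf.gfile.GFile('model-20180709-100209-quantized.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

tf.import_graph_def(graph_def, name='')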

Attempts at solving the problem

The error produced by the fold_constants transform suggests that an op expecting a variable reference (float_ref) is being fed a plain constant tensor (float). Since variable_names_blacklist in graph_util.convert_variables_to_constants omits the listed variables from conversion, I tried blacklisting all quantization nodes when freezing the graph in freeze_graph.freeze_graph_def:

# Get the list of important nodes
whitelist_names = []
blacklist_names = [] # <-- NEW
for node in input_graph_def.node:
    if (node.name.startswith('InceptionResnet') or node.name.startswith('embeddings') or 
        node.name.startswith('image_batch') or node.name.startswith('label_batch') or
        node.name.startswith('phase_train') or node.name.startswith('Logits')):
        whitelist_names.append(node.name)
    elif "quant" in node.name: # <-- NEW
        blacklist_names.append(node.name) # <-- NEW

# Replace all the variables in the graph with constants of the same values
output_graph_def = graph_util.convert_variables_to_constants(
    sess, input_graph_def, output_node_names.split(","),
    variable_names_whitelist=whitelist_names,
    variable_names_blacklist=blacklist_names) # <-- NEW

However, running transform_graph on the newly frozen model produces the same fold_constants error as before, and removing the fold_constants transform from the command still results in the same ValueError when trying to run the model.


Did I misplace the create_*_graph() calls? Or have I misunderstood something else?

