When you run the command below, you start a TensorFlow Model Server process that serves the model on a given port (9009 here).
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name=ETA_DNN_Regressor --model_base_path=//apps/node-apps/tensorflow-models-repository/ETA
The logs are not being displayed here, but the model server is running; that is why the screen appears stagnant. To display the logs on your console, add the flag -v=1 when you run the command:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server -v=1 --port=9009 --model_name='model_name' --model_base_path=model_path
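As a quick sanity check that the server process actually started, you can test its PID from the same shell. This is a minimal sketch: `sleep` stands in here for the `tensorflow_model_server` binary, since the check works the same way for any backgrounded process.

```shell
# Hypothetical stand-in for the model server binary: any long-running job
sleep 30 &
server_pid=$!

# kill -0 sends no signal; it only tests whether the process still exists
if kill -0 "$server_pid" 2>/dev/null; then
  echo "server process is running (pid $server_pid)"
fi

# Clean up the stand-in process
kill "$server_pid"
```

With the real server you would background the bazel-bin command the same way and keep `$!` around for later checks.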
Now, moving on to logging/monitoring of incoming requests and responses: you cannot monitor incoming requests/responses when VLOG is set to 1. VLOGs are verbose logs. You need log level 3 to display all errors, warnings, and some informational messages related to processing times (INFO1 and STAT1). Please see the following link for further details on VLOGs: http://webhelp.esri.com/arcims/9.2/general/topics/log_verbose.htm
Now, moving on to your second problem: instead of setting flags, I would suggest using the environment variable provided by TensorFlow Serving:

export TF_CPP_MIN_VLOG_LEVEL=3

Set the environment variable before you start the server. After that, enter the command below to start your server and store the logs in a logfile named my_log:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name='model_name' --model_base_path=model_path &> my_log &
Even though you close your console, the logs keep getting stored as long as the model server runs. Hope this helps.
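To see these pieces working together, here is a minimal sketch of the environment variable plus the `&>` redirection, with `echo` standing in for the `tensorflow_model_server` command (the redirection and log-file behavior are the same for the real binary):

```shell
# Set the verbosity level before the server starts;
# child processes inherit the exported variable
export TF_CPP_MIN_VLOG_LEVEL=3

# echo stands in for the server command here;
# &> redirects both stdout and stderr into my_log
echo "VLOG level: $TF_CPP_MIN_VLOG_LEVEL" &> my_log

# Inspect the log file at any time, even from a new console session
tail -n 1 my_log
```

Running this prints `VLOG level: 3`, confirming the variable was visible to the child process and that both output streams landed in my_log. For the real server you can `tail -f my_log` to follow the logs live.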