Serving multiple TensorFlow models using Docker
Having seen this GitHub issue and this Stack Overflow post, I had hoped this would simply work.

It seems as though passing in the environment variable MODEL_CONFIG_FILE has no effect. I am running this through docker-compose, but I get the same issue using docker run.


The error:

I tensorflow_serving/model_servers/server.cc:82] Building single TensorFlow model file config:  model_name: model model_base_path: /models/model
I tensorflow_serving/model_servers/server_core.cc:461] Adding/updating models.
I tensorflow_serving/model_servers/server_core.cc:558]  (Re-)adding model: model
E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:369] FileSystemStoragePathSource encountered a file-system access error: Could not find base path /models/model for servable model

The Dockerfile

FROM tensorflow/serving:nightly

COPY ./models/first/ /models/first
COPY ./models/second/ /models/second

COPY ./config.conf /config/config.conf

ENV MODEL_CONFIG_FILE=/config/config.conf

The compose file

version: '3'

services:
  serving:
    build: .
    image: testing-models
    container_name: tf

The config file

model_config_list: {
  config: {
    name:  "first",
    base_path:  "/models/first",
    model_platform: "tensorflow",
    model_version_policy: {
        all: {}
    }
  },
  config: {
    name:  "second",
    base_path:  "/models/second",
    model_platform: "tensorflow",
    model_version_policy: {
        all: {}
    }
  }
}
Pursuance answered 28/10, 2018 at 20:43 Comment(4)
After the 5th of October you don't have to use nightly anymore, since the latest tensorflow/serving now supports config files, as stated hereManwell
@krisR89 that's all well and good but it doesn't work either way.Pursuance
In TensorFlow Serving, the default MODEL_NAME is "model" while MODEL_BASE_PATH is "/models". For some reason your container does not use the config file and only runs the default "/models/model". I guess you have one of two problems: an old serving image, or the image is started incorrectly. I would try "docker pull tensorflow/serving" to get the latest version. And can you add how you run the docker image (the command), in case that is the problem?Manwell
@Manwell during testing I was explicitly using :1.11.0-rc0 as the image, as stated in the question you linked that is the revision with these changes. I ran a docker image prune -a to ensure I always had the latest image. The command I used was docker-compose build followed by docker-compose up. The problem is stated above. The only thing I can think is that the environment variable is somehow not being honoured by the image.Pursuance
M
6

There is no docker environment variable named “MODEL_CONFIG_FILE” (that’s a tensorflow/serving variable, see docker image link), so the docker image will only use the default docker environment variables ("MODEL_NAME=model" and "MODEL_BASE_PATH=/models"), and run the model “/models/model” at startup of the docker image. "config.conf" should be used as input at "tensorflow/serving" startup. Try to run something like this instead:

docker run -p 8500:8500 -p 8501:8501 \
  --mount type=bind,source=/path/to/models/first/,target=/models/first \
  --mount type=bind,source=/path/to/models/second/,target=/models/second \
  --mount type=bind,source=/path/to/config/config.conf,target=/config/config.conf \
  -t tensorflow/serving --model_config_file=/config/config.conf
Manwell answered 29/10, 2018 at 13:11 Comment(4)
While I appreciate the answer I specifically need this to run through docker-compose with the dockerfile which packages all of the information into a single image. I'll make a feature request on github issues to see if this can be implemented. Thanks.Pursuance
I'm sorry, but I don't know any workaround for that! An environment variable called MODEL_CONFIG_FILE would probably be a good idea for the docker image, as I, and probably others, have been confused by this. Luckily for me, a docker-compose solution was not required in my case. Good luck!Manwell
I've got a workaround in place; it just isn't as nice and manageable as the single image I had originally hoped for. I've added a feature request for the environment variable, so hopefully that happens at some point.Pursuance
Looks like the reason my earlier attempts to use docker run failed was due to this. Having resolved that I'm just adding a command: property to my docker-compose. Going to give you the green tick as you got me there. Thanks!Pursuance
P
7

I ran into this double-slash issue with Git Bash on Windows.

As such, I am passing the argument mentioned by @KrisR89 in via command: in the docker-compose file.

The new docker-compose file looks like this and works with the supplied Dockerfile:

version: '3'

services:
  serving:
    build: .
    image: testing-models
    container_name: tf
    command: --model_config_file=/config/config.conf
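Once the container is up, the multi-model setup can be verified over the REST API: TF Serving reports model status at GET /v1/models/{name}, or /v1/models/{name}/versions/{v} for one version. A small helper to build those URLs (the function name and defaults are my own):

```python
def model_status_url(model_name, version=None, host="localhost", port=8501):
    """Build the TF Serving REST endpoint that reports a model's load status."""
    url = f"http://{host}:{port}/v1/models/{model_name}"
    if version is not None:
        url += f"/versions/{version}"
    return url
```

Curling model_status_url("first") and model_status_url("second") should show both models as AVAILABLE if the config file was picked up.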
Pursuance answered 29/10, 2018 at 23:52 Comment(0)
D
2

The error is caused by Serving not being able to find your model.

E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:369] FileSystemStoragePathSource encountered a file-system access error: Could not find base path /models/model for servable model

Your docker-compose file didn't mount your model files into the container, so Serving couldn't find your models. I suggest setting up three configuration files:

1. docker-compose.yml
2. .env
3. models.config

docker-compose.yml:

Mount your model files from the host into the container. I think you could do this:

version: "3"
services:
  sv:
    image: tensorflow/serving:latest
    restart: unless-stopped
    ports:
      - 8500:8500
      - 8501:8501
    volumes:
      - ${MODEL1_PATH}:/models/${MODEL1_NAME}
      - ${MODEL2_PATH}:/models/${MODEL2_NAME}
      - /home/deploy/dcp-file/tf_serving/models.config:/models/models.config
    command: --model_config_file=/models/models.config

.env: docker-compose.yml loads info from this file.

MODEL1_PATH=/home/notebooks/water_model
MODEL1_NAME=water_model
MODEL2_PATH=/home/notebooks/ice_model
MODEL2_NAME=ice_model
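docker-compose resolves the ${VAR} references in the volumes: lines against this .env file before starting the container. A simplified sketch of that substitution (real compose also supports $VAR without braces and default values, which this ignores):

```python
import re

def substitute(line, env):
    """Resolve ${VAR} references in a compose line using .env values."""
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), line)
```

So the first volume line above becomes /home/notebooks/water_model:/models/water_model at startup.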

models.config:

model_config_list: {
  config {
    name:  "water_model",
    base_path:  "/models/water_model",
    model_platform: "tensorflow",
    model_version_policy: {
        versions: 1588723537
        versions: 1588734567
    }
  },
  config {
    name:  "ice_model",
    base_path:  "/models/ice_model",
    model_platform: "tensorflow",
    model_version_policy: {
        versions: 1588799999
        versions: 1588788888
    }
  }
}
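Unlike the all: {} policy in the question, this config pins specific versions, so Serving loads only the listed version numbers out of whatever numeric directories exist under each base_path. A sketch of that selection logic (the function name is mine):

```python
def select_versions(available_dirs, policy_versions=None):
    """Pick versions to load: every numeric dir when no policy, else only the listed ones."""
    available = {int(d) for d in available_dirs if d.isdigit()}
    if policy_versions is None:  # corresponds to the 'all: {}' policy
        return sorted(available)
    return sorted(available & set(policy_versions))
```

A version listed in the config but missing on disk is simply not loaded, so keep the directories and the versions: entries in sync.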

See also the official TensorFlow Serving documentation.

Delete answered 14/6, 2019 at 2:56 Comment(0)
B
1

I have encountered the same issue/error.

The problem comes from the fact that when you run TensorFlow Serving using a Dockerfile only, with a command like this:

Dockerfile
CMD ["bash", "-c", "docker run -it -p 8500:8500 -p 8501:8501 --entrypoint /bin/bash tensorflow/serving \
&& tensorflow_model_server --rest_api_port=8501 --model_config_file=${MODEL_CONFIG_FILE}"]

your config file won't be applied; only the first part of the command takes effect, since the second part runs in a separate bash invocation.

So what I did was take the second half of the command and execute it in the docker-compose file.

Dockerfile:
CMD ["bash", "-c", "docker run -it -p 8500:8500 -p 8501:8501 --entrypoint /bin/bash tensorflow/serving"]
Docker-compose:
version: '3'
services:
  tensorflow-serving:
    build:
      context: .
      dockerfile: Dockerfile_
    container_name: TF_serving_container
    ports:
      - "8601:8501"
    command: tensorflow_model_server --rest_api_port=8501 --model_config_file="/models/model/models.config.b"
    networks:
      - ml_network

Replace /models/model/models.config.b with the path to your config file. Do not forget to network your containers too.

Beaker answered 19/1 at 12:47 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.