tensorflow-serving Questions

4

I would like to pass my Google Cloud Platform's service account JSON credentials file to a docker container so that the container can access a cloud storage bucket. So far I tried to pass the file ...
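
A common pattern for this (a sketch; the host path, key filename, model name, and bucket are all assumptions) is to mount the key file into the container and point GOOGLE_APPLICATION_CREDENTIALS at it, which the Google client libraries and TF Serving's GCS filesystem both honor:

    # Mount the service account key read-only and expose it via the
    # standard env var that Google Cloud client code looks for.
    docker run -p 8501:8501 \
      -v /path/to/key.json:/secrets/key.json:ro \
      -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/key.json \
      -e MODEL_NAME=my_model \
      -e MODEL_BASE_PATH=gs://my-bucket \
      tensorflow/serving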

2

Context: I have a simple classifier based on tf.estimator.DNNClassifier that takes text and outputs probabilities over intent tags. I am able to train and export the model to a servable as well ...
Cottonseed asked 22/8, 2018 at 16:13
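
For an estimator-based classifier like this, exporting a servable typically goes through a serving input receiver; a minimal sketch (the feature name "text" and export directory are assumptions):

    import tensorflow as tf

    # Parse tf.Example protos carrying a single string feature at serving time.
    feature_spec = {"text": tf.io.FixedLenFeature([], tf.string)}
    serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
        feature_spec)

    # `classifier` is the trained tf.estimator.DNNClassifier.
    classifier.export_saved_model("export/", serving_input_fn)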

1

I have a SavedModel in a folder (generator_model_final) with the following content:
- saved_model.pb
- variables
  |- variables.data-00000-of-00002
  |- variables.data-00001-of-00002
  |- variables.in...
Gallonage asked 20/6, 2020 at 9:17
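
To check what a folder like that actually exposes before trying to serve or reload it, the saved_model_cli tool that ships with TensorFlow can dump its tags and signatures:

    # Show all MetaGraphs, signatures, and input/output tensors in the SavedModel
    saved_model_cli show --dir generator_model_final --all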

6

Solved

I trained a simple MNIST model with TensorFlow 2.0 on Google Colab and saved it in the .json format. Click here to check out the Colab Notebook where I've written the code. Then on running the comm...
Mopboard asked 9/4, 2019 at 11:8
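
If the .json export came from the TensorFlow.js converter (a guess; the truncated excerpt doesn't confirm which command was run), the usual conversion step looks like this:

    # Convert a Keras HDF5 model into the TF.js layers format
    # (a model.json plus binary weight shards)
    pip install tensorflowjs
    tensorflowjs_converter --input_format keras model.h5 web_model/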

4

Solved

Having seen this GitHub issue and this Stack Overflow post I had hoped this would simply work. It seems as though passing in the environment variable MODEL_CONFIG_FILE has no effect. I am running t...
Pursuance asked 28/10, 2018 at 20:43
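
One thing worth knowing here: the stock tensorflow/serving image's entrypoint only reads MODEL_NAME and MODEL_BASE_PATH, so a config file has to be passed as a command-line flag appended after the image name instead (the paths below are assumptions):

    docker run -p 8500:8500 -p 8501:8501 \
      -v /path/to/models:/models \
      tensorflow/serving \
      --model_config_file=/models/models.config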

4

Once I have a TF server serving multiple models, is there a way to query such a server to know which models are served? Would it be possible then to have information about each of those models, like ...
Bilabial asked 5/1, 2018 at 12:14
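
The REST API does expose per-model status and metadata, though as far as I know there is no single list-all endpoint; a sketch assuming the default REST port and a model named my_model:

    # Version/availability status of one model
    curl http://localhost:8501/v1/models/my_model

    # Signature definitions and input/output metadata for that model
    curl http://localhost:8501/v1/models/my_model/metadata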

3

I have found tutorials and posts which only say to serve TensorFlow models using TensorFlow Serving. In the model.conf file, there is a parameter model_platform in which tensorflow or any other platform ...
Conversationalist asked 3/4, 2018 at 7:43
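
For reference, the stock configuration only ever sets model_platform to "tensorflow"; a minimal model.conf sketch (name and path are placeholders):

    model_config_list: {
      config: {
        name: "my_model",
        base_path: "/models/my_model",
        model_platform: "tensorflow"
      }
    }

Serving a non-TensorFlow platform generally means implementing a custom servable and source adapter in C++ and registering it under a new platform name; the config key alone does not enable other frameworks.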

3

Solved

I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model. If the model being requested has not yet be...
Zahn asked 30/1, 2019 at 12:33
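
A minimal gRPC client sketch for this setup (model name, input key, and the placeholder image array are assumptions); a model that is not loaded yet surfaces as a gRPC error, typically NOT_FOUND, which the client can catch:

    import grpc
    import numpy as np
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    channel = grpc.insecure_channel("localhost:8500")
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = "my_model"                  # assumption
    request.model_spec.signature_name = "serving_default"
    image = np.zeros((1, 224, 224, 3), dtype=np.float32)  # placeholder input
    request.inputs["image"].CopyFrom(tf.make_tensor_proto(image))

    try:
        result = stub.Predict(request, 10.0)  # 10-second deadline
    except grpc.RpcError as err:
        print(err.code(), err.details())      # e.g. NOT_FOUND if not loaded yet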

3

I have installed TensorFlow using the following command: docker run -it b.gcr.io/tensorflow/tensorflow:latest-devel, and I need to set up TensorFlow Serving on a Windows machine. I followed the i...
Jeggar asked 11/10, 2016 at 19:45
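
On current setups, the path of least resistance on Windows is the prebuilt serving image rather than building inside the devel container; a sketch (model name and host path are placeholders):

    docker pull tensorflow/serving
    docker run -p 8501:8501 \
      -v "C:\models\my_model:/models/my_model" \
      -e MODEL_NAME=my_model \
      tensorflow/serving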

2

Solved

I've created a TensorFlow model that uses RaggedTensors. The model works fine, and when calling model.predict I get the expected results. input = tf.ragged.constant([[[-0.9984272718429565, -0.942232...
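
One workaround sketch for serving such a model: expose a signature that takes the flat values and row splits separately and rebuilds the RaggedTensor inside the graph, since passing a ragged structure directly over the Serving API is a common sticking point. The innermost feature width of 3 is a guess, as the constant above is truncated:

    import tensorflow as tf

    # `model` is the trained RaggedTensor-consuming Keras model.
    @tf.function(input_signature=[
        tf.TensorSpec([None, 3], tf.float32, name="flat_values"),
        tf.TensorSpec([None], tf.int64, name="row_splits"),
    ])
    def serve_fn(flat_values, row_splits):
        ragged = tf.RaggedTensor.from_row_splits(flat_values, row_splits)
        return {"outputs": model(ragged)}

    tf.saved_model.save(model, "export/1",
                        signatures={"serving_default": serve_fn})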

2

I am running in the following scenario: Single Node Kubernetes Cluster (1x i7-8700K, 1x RTX 2070, 32GB RAM) 1 Tensorflow Serving Pod 4 Inference Client Pods What the inference clients do is the...
Saxophone asked 2/4, 2020 at 17:17

2

I am unable to run 2 or more models via TensorFlow Serving in Docker on a Windows 10 machine. I have made a models.config file:
model_config_list: {
  config: {
    name: "ukpred2",
    base_path: "/mod...
Rosenwald asked 13/9, 2018 at 15:29
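
For comparison, a complete two-model config plus the flag that makes the container read it; everything past the truncation above is an assumption, and on Windows it also matters that base_path refers to the path inside the container, not the host path:

    # models.config (text-format protobuf)
    model_config_list: {
      config: {
        name: "ukpred2",
        base_path: "/models/ukpred2",
        model_platform: "tensorflow"
      },
      config: {
        name: "model2",
        base_path: "/models/model2",
        model_platform: "tensorflow"
      }
    }

    # run with:
    docker run -p 8501:8501 -v "C:\serving\models:/models" tensorflow/serving \
      --model_config_file=/models/models.config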

1

Solved

Currently, I am building the async frontend to my TF2 model. It now runs as two services: the first is a Twisted service, and the second is TensorFlow Serving. The async web client is being u...
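
A sketch of the non-blocking call from the Twisted side, using the treq HTTP client against Serving's REST port (the URL, model name, and payload shape are assumptions):

    import treq
    from twisted.internet import defer

    @defer.inlineCallbacks
    def predict(instances):
        # POST to TensorFlow Serving's REST predict endpoint without
        # blocking the reactor.
        response = yield treq.post(
            "http://localhost:8501/v1/models/my_model:predict",  # assumption
            json={"instances": instances})
        body = yield response.json()
        defer.returnValue(body["predictions"])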

3

My understanding is that I should be able to grab a TensorFlow model from Google's AI Hub, deploy it to TensorFlow Serving and use it to make predictions by POSTing images via REST requests using c...
Eras asked 16/9, 2019 at 21:4
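
The usual REST call for image models looks like the sketch below; the model name and the bare {"b64": ...} wrapper assume a single string-typed input, so both should be verified against the model's /metadata endpoint:

    # Base64-encode the image and POST it to the predict endpoint
    B64=$(base64 -w0 image.jpg)
    curl -X POST http://localhost:8501/v1/models/detector:predict \
      -d "{\"instances\": [{\"b64\": \"${B64}\"}]}"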

3

I followed the steps in one of the TF beginner tutorials to create a simple classification model. They are the following: from __future__ import absolute_import, division, print_function, unicode_li...
Scotsman asked 23/9, 2019 at 7:40
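
To hand that tutorial model to TensorFlow Serving, the missing step is usually exporting it as a versioned SavedModel; a sketch assuming `model` is the trained Keras classifier from the tutorial:

    import tensorflow as tf

    # Serving expects a numeric version directory under the model root,
    # so the "1" at the end is significant.
    tf.keras.models.save_model(model, "saved_models/my_classifier/1")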

4

Solved

I am trying to create a Docker image for TensorFlow Serving like here. When I try to pull the Docker image with all the required dependencies (pip dependencies, bazel, grpc)... What am I doing wrong h...
Cyd asked 19/3, 2018 at 7:13
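
For what it's worth, the prebuilt images usually make building from scratch unnecessary; a sketch of the two official variants:

    # Serving binary only (CPU)
    docker pull tensorflow/serving
    # Development image with the build toolchain (bazel etc.) included
    docker pull tensorflow/serving:latest-devel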

3

Solved

In Python you can simply pass a numpy array to predict() to get predictions from your model. What is the equivalent using Java with a SavedModelBundle? Python model = tf.keras.models.Sequential([...
Arianearianie asked 7/6, 2020 at 6:8
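
A Java sketch of the equivalent call using the libtensorflow 1.x API; the feed/fetch tensor names below are assumptions that should be read off the exported signature (e.g. with saved_model_cli):

    import org.tensorflow.SavedModelBundle;
    import org.tensorflow.Tensor;

    public class Predictor {
        public static void main(String[] args) {
            float[][] batch = new float[1][784];  // placeholder input
            try (SavedModelBundle model = SavedModelBundle.load("export/1", "serve");
                 Tensor<?> input = Tensor.create(batch)) {
                Tensor<?> output = model.session().runner()
                        .feed("serving_default_flatten_input:0", input)  // name: assumption
                        .fetch("StatefulPartitionedCall:0")              // name: assumption
                        .run().get(0);
                float[][] probs = new float[1][10];
                output.copyTo(probs);  // copy the result back into a Java array
                output.close();
            }
        }
    }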

2

Solved

I've been following the TensorFlow for Poets 2 codelab on a model I've trained, and have created a frozen, quantized graph with embedded weights. It's captured in a single file - say my_quant_graph...
Flynt asked 2/6, 2017 at 12:38
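
A sketch of wrapping such a single-file frozen graph into a SavedModel that Serving can load; the tensor names input:0 and final_result:0 are assumptions to verify against the actual graph:

    import tensorflow as tf

    graph_def = tf.compat.v1.GraphDef()
    with open("my_quant_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    builder = tf.compat.v1.saved_model.Builder("export/1")
    with tf.compat.v1.Session(graph=tf.Graph()) as sess:
        tf.import_graph_def(graph_def, name="")
        g = sess.graph
        sig = tf.compat.v1.saved_model.signature_def_utils.predict_signature_def(
            inputs={"image": g.get_tensor_by_name("input:0")},           # assumption
            outputs={"scores": g.get_tensor_by_name("final_result:0")})  # assumption
        builder.add_meta_graph_and_variables(
            sess, [tf.compat.v1.saved_model.tag_constants.SERVING],
            signature_def_map={"serving_default": sig})
        builder.save()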

3

I deployed a model that uses a tfhub module to tensorflow-serving using Docker. Here is the tfhub model contained in my model: https://tfhub.dev/google/universal-sentence-encoder-multilingu...
Spiritless asked 3/7, 2019 at 2:35
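
One way to avoid the serving container needing network access to tfhub.dev at load time is to bake the module's weights into the exported model; a TF2 sketch, where the handle placeholder stands in for the truncated URL above:

    import tensorflow as tf
    import tensorflow_hub as hub

    handle = "<the tfhub.dev handle from the question>"
    model = tf.keras.Sequential([
        # The layer's weights are embedded into the SavedModel on save.
        hub.KerasLayer(handle, input_shape=[], dtype=tf.string)
    ])
    model.save("export/use_encoder/1")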

5

Solved

I was following this tutorial to serve my object detection model with TensorFlow Serving. I am using the TensorFlow Object Detection API for generating the model. I have created a frozen model using this ex...
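
Once the export script has produced a saved_model directory, pointing the model server at the versioned parent directory is all Serving needs; a sketch with placeholder names and paths:

    tensorflow_model_server \
      --port=8500 \
      --rest_api_port=8501 \
      --model_name=detector \
      --model_base_path=/models/detector   # contains 1/saved_model.pb, 1/variables/

Note that the Object Detection exporter typically writes saved_model/ without a numeric version folder, so the directory usually has to be renamed (or nested) to something like detector/1/ by hand.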

2

I've read the basic and advanced tensorflow-serving tutorials but I am still unclear on how to build support in tensorflow-serving for the following: models built in Python (like xgboost or scik...
Hourly asked 30/3, 2018 at 9:15

3

I am trying to use an embeddings module from tensorflow hub as a servable. I am new to tensorflow. Currently, I am using Universal Sentence Encoder embeddings as a lookup to convert sentences to embeddi...
Underglaze asked 10/6, 2018 at 21:10
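
A TF1-style sketch of exporting such a hub module as its own servable (the module URL is an assumption):

    import tensorflow as tf
    import tensorflow_hub as hub

    with tf.Graph().as_default():
        sentences = tf.placeholder(tf.string, shape=[None], name="sentences")
        embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")  # assumption
        embeddings = embed(sentences)
        with tf.Session() as sess:
            # Hub modules use lookup tables, so both initializers are required.
            sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
            tf.saved_model.simple_save(
                sess, "export/use/1",
                inputs={"sentences": sentences},
                outputs={"embeddings": embeddings})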

0

For some reason the time used to extract results using .float_val is extremely high. Scenario example along with its output:
t2 = time.time()
options = [('grpc.max_receive_message_length', 100 * 40...
Sparid asked 20/10, 2020 at 1:21
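
A known cause: element-by-element access to a protobuf repeated field like .float_val is slow, while converting the whole TensorProto in one shot is much faster. A sketch (the output key "output" is an assumption):

    import tensorflow as tf

    # Instead of: values = list(result.outputs["output"].float_val)
    # do one bulk conversion of the TensorProto to a numpy array:
    arr = tf.make_ndarray(result.outputs["output"])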

3

Is there a way to set the log level in TF Serving via Docker? I see these params but do not see anything about logging there:
--port=8500               int32   Port to listen on for gRPC API
--grpc_socket_path=""     s...
Jonathanjonathon asked 13/11, 2019 at 17:35
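
The knobs that usually apply here come from the TensorFlow C++ runtime rather than the model server's own flag list; passing them as environment variables to the container is a sketch worth trying:

    # TF_CPP_MIN_LOG_LEVEL: 0 = all logs, 1 = hide INFO,
    # 2 = hide INFO+WARNING, 3 = hide INFO+WARNING+ERROR
    docker run -e TF_CPP_MIN_LOG_LEVEL=2 \
               -p 8501:8501 tensorflow/serving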

1

Problem description: The problem we encounter is the following. Serving is configured to load and serve 7 models, and with an increase in the number of models, Serving requests time out more frequent...
Saucepan asked 3/9, 2018 at 13:22
