TensorFlow Serving: REST API returns "Malformed request" error

Tensorflow Serving server (run with docker) responds to my GET (and POST) requests with this:

{ "error": "Malformed request: POST /v1/models/saved_model/" }

Exactly the same problem was reported before but never solved (supposedly it is a Stack Overflow kind of question rather than a GitHub issue):

https://github.com/tensorflow/serving/issues/1085

https://github.com/tensorflow/serving/issues/1095

Any ideas? Thank you very much.

Jacksonjacksonville asked 5/10, 2018 at 18:8

There were two issues with my approach:

1) The model status request wasn't supported in my version of tensorflow_model_server (see https://github.com/tensorflow/serving/issues/1085 for details)

2) More importantly, on Windows (cmd.exe) you must escape the quotation marks inside the JSON body. So instead of:

curl -XPOST http://localhost:8501/v1/models/saved_model:predict -d "{"instances":[{"features":[1,1,1,1,1,1,1,1,1,1]}]}"

I should have used this:

curl -XPOST http://localhost:8501/v1/models/saved_model:predict -d "{\"instances\":[{\"features\":[1,1,1,1,1,1,1,1,1,1]}]}"
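
If the escaping gets unwieldy, a variant that sidesteps shell quoting altogether (a minimal sketch; request.json is just an illustrative filename) is to put the body in a file and let curl read it with the @ syntax, which behaves the same in cmd.exe, PowerShell, and bash:

curl -XPOST http://localhost:8501/v1/models/saved_model:predict -d @request.json

where request.json contains:

{"instances": [{"features": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}]}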
Jacksonjacksonville answered 10/10, 2018 at 20:12

I verified that this does not work before TensorFlow Serving 1.12 and does indeed work with 1.12 and later.

> docker run -it -p 127.0.0.1:9000:8500 -p 127.0.0.1:9009:8501 -v /models/55:/models/55 -e MODEL_NAME=55 --rm tensorflow/serving
> curl http://localhost:9009/v1/models/55
   { "error": "Malformed request: GET /v1/models/55" }

Now try with v12:

> docker run -it -p 127.0.0.1:9000:8500 -p 127.0.0.1:9009:8501 -v /models/55:/models/55 -e MODEL_NAME=55 --rm tensorflow/serving:1.12.0
> curl http://localhost:9009/v1/models/55
{
 "model_version_status": [
  {
   "version": "1541703514",
   "state": "AVAILABLE",
   "status": {
    "error_code": "OK",
    "error_message": ""
   }
  }
 ]
}
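
If you want the status of one particular version rather than all of them, the REST status endpoint should also accept a versions path segment (a sketch, reusing the version number from the response above):

> curl http://localhost:9009/v1/models/55/versions/1541703514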
Climatology answered 9/11, 2018 at 16:54

It depends on your model, but this is what my request body looks like:

{"inputs": {"text": ["Hello"]}}

I used Postman to build the request, so that it was sent as proper JSON.

This is for the predict API, so the URL ends in ":predict". Again, the exact suffix depends on which API you're trying to use.
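
For comparison, the same request as a curl call (a sketch; my_model is a placeholder model name, and the single quotes work on Linux/macOS; on Windows cmd.exe you would escape the inner quotes as in the accepted answer):

curl -XPOST http://localhost:8501/v1/models/my_model:predict -d '{"inputs": {"text": ["Hello"]}}'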

Lesbian answered 9/10, 2018 at 10:22
Comment: You are right, but my problem was something else (see above); thanks nevertheless. – Jacksonjacksonville

The model status API is only supported on the master branch; there is no TF Serving release that supports it yet (the API is slated for the upcoming 1.12 release). You can use the nightly docker image (tensorflow/serving:nightly) to test master branch builds.
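
A sketch of testing against the nightly image, reusing the volume mount and model name from the docker example in the earlier answer:

docker pull tensorflow/serving:nightly
docker run -it -p 127.0.0.1:9009:8501 -v /models/55:/models/55 -e MODEL_NAME=55 --rm tensorflow/serving:nightly
curl http://localhost:9009/v1/models/55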

This solution was given by netf in tensorflow/serving issue #1128. I have already tried it and I can now get the model status. [Screenshot of the model status response omitted.]

Hope this helps.

If you are not clear on the master branch builds, you can contact me and I can give you instructions.

Email: [email protected]

Crawl answered 11/10, 2018 at 1:17
