Vertex AI custom container batch prediction
I have created a custom container for prediction and successfully uploaded the model to Vertex AI. I was also able to deploy the model to an endpoint and request predictions from it. Within the custom container code, I use the parameters field as described here, which I then supply when making an online prediction request. My questions are about requesting batch predictions from a custom container for prediction.
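
For context, the online request I make today looks roughly like the sketch below; the project, endpoint ID, feature names, and the "temperature" parameter are placeholders rather than my real values.

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    # Hypothetical endpoint ID; the deployed model serves the custom container.
    endpoint = aiplatform.Endpoint("1234567890")

    # The SDK posts a body of the form {"instances": [...], "parameters": {...}}
    # to the container's /predict route.
    response = endpoint.predict(
        instances=[{"feature_a": 1.0, "feature_b": 2.0}],
        parameters={"temperature": 0.5},
    )
    print(response.predictions)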

  1. I cannot find any documentation that describes what happens when I request a batch prediction. Say, for example, I use the my_model.batch_predict function from the Python SDK (see the sketch after this list), set the instances_format to "csv", and provide the gcs_source. Now, I have set up my custom container to expect prediction requests at /predict as described in this documentation. Does Vertex AI make a POST request to this path, converting the csv data into the appropriate POST body?

  2. How do I specify the parameters field for batch prediction as I did for online prediction?
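
For reference, the call I have in mind is sketched below; the model ID, bucket paths, and machine type are placeholders, not values from my project.

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    # Hypothetical model ID for the model backed by the custom container.
    my_model = aiplatform.Model("9876543210")

    batch_job = my_model.batch_predict(
        job_display_name="my-batch-job",
        instances_format="csv",
        gcs_source="gs://my-bucket/input/data.csv",
        gcs_destination_prefix="gs://my-bucket/output",
        machine_type="n1-standard-4",
    )
    batch_job.wait()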

Obolus answered 11/11, 2021 at 23:44 Comment(2)
If my answer addressed your question, please consider upvoting/accepting it. If not, let me know so that I can improve the answer. – Tog
Your answer does not address anything I have asked; you simply described how batch prediction works. – Obolus
  1. Yes, Vertex AI makes a POST request to your custom container's /predict route during batch prediction (a sketch of the request body the container receives follows this list).

  2. No, there is no way for batch prediction to pass a parameters field, since there is no way to tell which column is the "parameter". Everything is put into "instances".
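
Below is a minimal sketch of the handler side, assuming the service converts each CSV row into one entry of "instances" (worth confirming by logging the raw request body inside the container); the prediction logic is a placeholder.

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/predict", methods=["POST"])
    def predict():
        body = request.get_json()
        # Batch requests carry only "instances"; no "parameters" key is sent.
        instances = body["instances"]
        # Placeholder prediction logic: echo the field count per instance.
        predictions = [len(instance) for instance in instances]
        return jsonify({"predictions": predictions})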

Britisher answered 17/11, 2021 at 4:15 Comment(2)
So, does that mean that batch prediction is not possible with custom prediction containers, or is there an alternative? – Obolus
If you are not following the same protocol as the stock container, you will have to use something like JSONL as input, where you can specify whatever input format you wish (see the sketch below). – Britisher
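
A minimal sketch of that JSONL workaround, assuming you fold the would-be parameters into each instance; the field names here are hypothetical.

    import json

    # Hypothetical rows plus the per-request settings that would have gone
    # into "parameters" for an online prediction request.
    rows = [
        {"feature_a": 1.0, "feature_b": 2.0},
        {"feature_a": 3.0, "feature_b": 4.0},
    ]
    params = {"temperature": 0.5}

    # Each JSONL line becomes one element of "instances" in the POST body,
    # so the container can read the embedded parameters back out per row.
    with open("batch_input.jsonl", "w") as f:
        for row in rows:
            f.write(json.dumps({**row, "parameters": params}) + "\n")

Upload the resulting file to GCS and pass instances_format="jsonl" to batch_predict instead of "csv".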
