Is there a canonical way to maintain state for a stateful LSTM (or similar recurrent model) with TensorFlow Serving?
Using the TensorFlow API directly this is straightforward, but I'm not certain how best to persist LSTM state between calls after exporting the model to Serving.
Are there any examples out there that accomplish this? The samples in the Serving repo are very basic.