Given X with dimensions (m samples, n sequences, k features), and y labels with shape (m samples,) containing 0/1 values:
Suppose I want to train a stateful LSTM (going by the Keras definition, where "stateful = True" means that cell states are not reset between batches, so state carries over between the sequences of a sample -- please correct me if I'm wrong!). Are states supposed to be reset on a per-epoch basis or a per-sample basis?
Example:
for e in range(epochs):
    for i in range(X.shape[0]):      # for each sample
        for j in range(X.shape[1]):  # for each sequence
            # train_on_batch for model...
            # model.reset_states()   (1) I believe this is 'stateful = False'?
        # model.reset_states()       (2) wouldn't this make more sense?
    # model.reset_states()           (3) This is what I usually see...
In summary, I am not sure whether to reset states after each sequence, after each sample, or after each epoch (i.e. after all m samples in X have been trained).
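To make the three reset placements concrete, here is a minimal, runnable sketch of option (2), resetting after each sample. Note that `FakeStatefulModel` is a hypothetical stand-in I wrote for this illustration (it only counts resets), not the real Keras API; the loop structure is what matters.

```python
import numpy as np

class FakeStatefulModel:
    """Hypothetical stand-in for a stateful Keras model.

    It only records how many times reset_states() is called,
    so we can see what each reset placement implies.
    """
    def __init__(self):
        self.resets = 0

    def train_on_batch(self, x, y):
        pass  # training stub; a real model would update weights here

    def reset_states(self):
        self.resets += 1  # a real model would zero the LSTM cell states

m, n, k = 4, 3, 2               # m samples, n sequences per sample, k features
X = np.zeros((m, n, k))
y = np.zeros(m)
epochs = 2

model = FakeStatefulModel()
for e in range(epochs):
    for i in range(m):          # for each sample
        for j in range(n):      # for each sequence: state carries over
            model.train_on_batch(X[i:i+1, j], y[i:i+1])
        model.reset_states()    # option (2): reset between samples

print(model.resets)             # one reset per sample per epoch
```

With option (1) the `reset_states()` call would move inside the innermost loop (one reset per sequence, which defeats statefulness), and with option (3) it would move out to the epoch loop (state carries across different samples, which only makes sense if consecutive samples are continuations of each other).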
Advice is much appreciated.