I am currently implementing a CNN in plain NumPy and have a brief question regarding a special case of backpropagation through a max-pool layer:
While it is clear that the gradient with respect to non-maximum values vanishes, I am not sure about the case where several entries of a slice are equal to the maximum value. Strictly speaking, the function is not differentiable at this "point". However, I would assume that one can pick a subgradient from the corresponding subdifferential (similar to choosing the subgradient 0 for the ReLU function at x = 0).
Hence, I am wondering whether it would be sufficient to simply form the gradient with respect to one of the maximum values and treat the remaining maximum values as non-maximum values, as in the sketch below.
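For concreteness, here is a minimal sketch of what I mean for a single pooling window (the function name and signature are just my own illustration, not from any library):

```python
import numpy as np

def maxpool_slice_backward(slice_, dout):
    """Route the upstream gradient dout (a scalar) to exactly one
    maximum entry of the 2D pooling window; all others get zero."""
    grad = np.zeros_like(slice_)
    # np.argmax returns the index of the *first* maximum in
    # row-major order, so ties are broken deterministically.
    idx = np.unravel_index(np.argmax(slice_), slice_.shape)
    grad[idx] = dout
    return grad
```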
If that is the case, would it be advisable to randomize the selection of the maximum value to avoid bias, or is it okay to always pick the first maximum value?
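The randomized variant I have in mind would look something like this (again only a sketch with hypothetical names):

```python
def maxpool_slice_backward_random(slice_, dout, rng=None):
    """Route the upstream gradient dout to one of the tied maximum
    entries of the 2D pooling window, chosen uniformly at random."""
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(slice_)
    # Collect the positions of all entries that attain the maximum ...
    max_positions = np.argwhere(slice_ == slice_.max())
    # ... and send the gradient to one of them picked at random.
    i, j = max_positions[rng.integers(len(max_positions))]
    grad[i, j] = dout
    return grad
```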