I have a layer output that I want to multiply by a scalar. I can do this with a Lambda layer, i.e.
sc_mult = Lambda(lambda x: x * 2)(layer)
which works fine. But I want to use a different scalar for each example, so I try to supply the scalars as a second input with shape (examples, 1):
input_scalar = Input(shape = (1L,))
so my Lambda layer becomes
sc_mult = Lambda(lambda x: x * input_scalar)(layer)
But this now throws an error at train time. (For context: 32 is the batch size, and 128 is a spatial dimension of the layer's input and output; the tensor being multiplied by the scalar has shape (batch_size, 32 filters from the previous layer, 128, 128).) The error is:
GpuElemwise. Input dimension mis-match. Input 5 (indices start at 0) has shape[2] == 32, but the output's size on that axis is 128.
I assume I am not feeding the right shape in via the input layer, but I can't work out why.
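
In case it helps, here is a stripped-down sketch of how I am wiring this up (Keras 1.x with the Theano backend, channels-first ordering; the optimizer, loss and random dummy data are just placeholders matching the shapes described above, my real model is larger):

from keras.layers import Input, Lambda
from keras.models import Model
import numpy as np

# main input: each example is (32 filters, 128, 128), channels-first
layer_input = Input(shape=(32, 128, 128))
# per-example scalar
input_scalar = Input(shape=(1L,))

# multiply every element of the feature map by that example's scalar
sc_mult = Lambda(lambda x: x * input_scalar)(layer_input)

model = Model(input=[layer_input, input_scalar], output=sc_mult)
model.compile(optimizer='sgd', loss='mse')

# dummy data with batch size 32 -- the dimension mis-match is raised during fit
feats = np.random.rand(32, 32, 128, 128).astype('float32')
scalars = np.random.rand(32, 1).astype('float32')
targets = np.random.rand(32, 32, 128, 128).astype('float32')
model.fit([feats, scalars], targets, batch_size=32, nb_epoch=1)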