How to perform deconvolution in Keras/Theano?

I am trying to implement deconvolution in Keras. My model definition is as follows:

from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense

model = Sequential()

model.add(Convolution2D(32, 3, 3, border_mode='same',
                        input_shape=X_train.shape[1:]))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3,border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))

I want to perform deconvolution (transposed convolution) on the output of the first convolution layer, i.e. convolution2d_1.

Let's say the feature map after the first convolution layer is X, of shape (9, 32, 32, 32), where 9 is the number of 32x32 images I have passed through the layer. The weight matrix of the first layer is obtained with Keras's get_weights() function; its shape is (32, 3, 3, 3).

The code I am using for performing transposed convolution is

 conv_out = K.deconv2d(self.x, W, (9, 3, 32, 32), dim_ordering="th")
 deconv_func = K.function([self.x, K.learning_phase()], conv_out)
 X_deconv = deconv_func([X, 0])

But I am getting this error:

 CorrMM shape inconsistency:
  bottom shape: 9 32 34 34
  weight shape: 3 32 3 3
  top shape: 9 32 32 32 (expected 9 3 32 32)

Can anyone please tell me where I am going wrong?
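For the shape bookkeeping, note that in Theano ordering a transposed-convolution kernel of shape (32, 3, 3, 3) maps 32 feature maps back to 3 channels, which is why the expected top shape has 3 channels. A plain NumPy sketch of stride-1 'valid' transposed convolution (purely illustrative, not the Theano kernel, and without the 'same'-style padding seen in the error) shows the shape relationship:

```python
import numpy as np

def conv2d_transpose_nchw(y, w):
    """Stride-1 'valid' transposed convolution in NCHW layout.

    y: (N, F, H, W) feature maps; w: (F, C, kh, kw) kernel.
    Returns (N, C, H + kh - 1, W + kw - 1).
    """
    N, F, H, W = y.shape
    F2, C, kh, kw = w.shape
    assert F == F2, "kernel's filter axis must match the feature maps"
    out = np.zeros((N, C, H + kh - 1, W + kw - 1))
    for i in range(H):
        for j in range(W):
            # scatter each spatial position back through the kernel
            out[:, :, i:i + kh, j:j + kw] += np.einsum(
                'nf,fckl->nckl', y[:, :, i, j], w)
    return out

# 32 feature maps go back to 3 channels, matching the expected top shape
feats = np.random.randn(9, 32, 30, 30)
w = np.random.randn(32, 3, 3, 3)
print(conv2d_transpose_nchw(feats, w).shape)  # (9, 3, 32, 32)
```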

Everything answered 23/11, 2016 at 16:16

You can use the Deconvolution2D layer.

Here is what you are trying to achieve:

import numpy as np
from keras import backend as K
from keras.layers import Deconvolution2D

batch_sz = 1
output_shape = (batch_sz, ) + X_train.shape[1:]
conv_out = Deconvolution2D(3, 3, 3, output_shape, border_mode='same')(model.layers[0].output)

deconv_func = K.function([model.input, K.learning_phase()], [conv_out])

test_x = np.random.random(output_shape)
X_deconv = deconv_func([test_x, 0])

But it's better to create a functional model, which helps with both training and prediction:

from keras.models import Model

batch_sz = 10
output_shape = (batch_sz, ) + X_train.shape[1:]
conv_out = Deconvolution2D(3, 3, 3, output_shape, border_mode='same')(model.layers[0].output)

model2 = Model(model.input, [model.output, conv_out])
model2.summary()
model2.compile(loss=['categorical_crossentropy', 'mse'], optimizer='adam')
model2.fit(X_train, [Y_train, X_train], batch_size=batch_sz)
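Since Deconvolution2D requires the output shape to be spelled out in advance, it can help to compute the spatial dimensions up front. The helper below mirrors the rule I believe Keras applies for 'valid' and 'same' border modes without output padding (treat the exact formula as an assumption and check it against your version):

```python
def deconv_output_length(input_length, kernel_size, stride, border_mode):
    """Spatial size produced by a transposed convolution (assumed Keras rule)."""
    if border_mode == 'same':
        return input_length * stride
    if border_mode == 'valid':
        return input_length * stride + max(kernel_size - stride, 0)
    raise ValueError(border_mode)

print(deconv_output_length(32, 3, 1, 'same'))   # 32, matching the example above
print(deconv_output_length(16, 3, 2, 'valid'))  # 33
```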
Odontoid answered 28/1, 2017 at 10:10

In Keras, the Conv2DTranspose layer performs transposed convolution, in other terms deconvolution. It supports both backends, i.e. Theano and TensorFlow.

Keras Documentation says:

Conv2DTranspose

Transposed convolution layer (sometimes called Deconvolution).

The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
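The "compatible connectivity pattern" the docs mention can be made concrete: transposed convolution is the adjoint of convolution viewed as a linear map. A minimal single-channel NumPy sketch (illustrative only; stride 1, 'valid' borders) verifies the defining identity <conv(x), y> = <x, conv_T(y)>:

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 2-D 'valid' cross-correlation of x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv2d_transpose(y, k):
    """Adjoint of conv2d_valid: scatter each value back through the kernel."""
    kh, kw = k.shape
    out = np.zeros((y.shape[0] + kh - 1, y.shape[1] + kw - 1))
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            out[i:i + kh, j:j + kw] += y[i, j] * k
    return out

rng = np.random.RandomState(0)
x = rng.randn(8, 8)   # "input of some convolution"
k = rng.randn(3, 3)
y = rng.randn(6, 6)   # has the shape of the convolution's output
lhs = np.sum(conv2d_valid(x, k) * y)
rhs = np.sum(x * conv2d_transpose(y, k))
print(np.allclose(lhs, rhs))  # True
```

This is exactly the "opposite direction" transform: conv2d_transpose maps something shaped like the convolution's output (6x6) back to something shaped like its input (8x8).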

Sap answered 1/8, 2017 at 13:43
