TensorFlow: Unpooling
Is there a native TensorFlow function that does unpooling for deconvolutional networks?

I have written this in plain Python, but it is getting complicated when I want to translate it to TensorFlow, as its tensors do not even support item assignment at the moment, and I think this is a great inconvenience with TF.
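For reference, the operation I mean is roughly this in NumPy (a simplified sketch, not my actual code):

import numpy as np

def max_unpool(pooled, indices, out_shape):
    """Scatter each pooled max back to the position it came from.

    pooled:    array of max values from the pooling step
    indices:   flat indices (into the unpooled array) of those maxima
    out_shape: shape of the unpooled output
    """
    out = np.zeros(out_shape, dtype=pooled.dtype)
    # This item assignment is the step that TF tensors do not support.
    out.flat[indices.ravel()] = pooled.ravel()
    return out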

Ysabel answered 11/4, 2016 at 12:29 Comment(3)
Curious, can you post what your normal Python looks like for deconv? (Maybe I could see a better TF way.)Pfeiffer
This might be of help, github.com/tensorflow/tensorflow/issues/…Gurtner
PyTorch has support out of the box, pytorch.org/docs/stable/nn.html?highlight=unpooling#maxunpool2dGurtner
R
16

I don't think there is an official unpooling layer yet, which is frustrating because you have to use image resize (bilinear interpolation or nearest neighbor), which acts like an average unpooling operation, and it's really slow. Look at the TF API in the 'image' section and you will find it.
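For illustration, that workaround looks roughly like this (a minimal sketch using the current tf.image.resize; in older TF versions the call was tf.image.resize_images):

import tensorflow as tf

pooled = tf.random.normal([1, 8, 8, 16])   # some pooled feature map, NHWC
# Nearest-neighbor resize doubles height and width, roughly undoing a
# 2x2 pool, but it copies values rather than restoring max positions.
upsampled = tf.image.resize(pooled, size=[16, 16], method='nearest')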

TensorFlow has a max_pool_with_argmax operation where you get your max-pooled output as well as the activation map, which is nice because you could use it in an unpooling layer to preserve the 'lost' spatial information, but it seems there isn't an unpooling operation that does it yet. I guess they are planning to add it ... soon.

Edit: I found someone on a Google discussion a week ago who seems to have implemented something like this, but I personally haven't tried it yet. https://github.com/ppwwyyxx/tensorpack/blob/master/tensorpack/models/pool.py#L66

Resa answered 11/4, 2016 at 19:52 Comment(0)
O
11

There are a couple of TensorFlow implementations here: pooling.py

Namely:

1) an unpool operation (source) that utilizes the output of tf.nn.max_pool_with_argmax. Please note, though, that as of TensorFlow 1.0, tf.nn.max_pool_with_argmax is GPU-only.

2) an upsample operation that mimics the inverse of max-pooling by filling the positions of the unpooled region with either zeros or copies of the max element. Compared to tensorpack, it allows copies of elements instead of zeros and supports strides other than [2, 2]; a sketch of the zero-filling variant follows the illustration below.

No recompilation needed, and it is back-prop friendly.

[Illustration: upsampling vs. unpooling (images in the original answer)]
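As a rough reconstruction of the zero-filling variant (my own sketch, not the linked code; assumes static NHWC shapes — the 'copies' variant would use tf.tile in place of tf.pad):

import tensorflow as tf

def unpool_zero_filled(x, stride=2):
    """Place each value in the top-left corner of a stride x stride
    block and fill the rest of the block with zeros."""
    _, h, w, c = x.shape
    out = tf.reshape(x, [-1, h, w, 1, 1, c])
    pad = stride - 1
    out = tf.pad(out, [[0, 0], [0, 0], [0, 0], [0, pad], [0, pad], [0, 0]])
    out = tf.transpose(out, [0, 1, 3, 2, 4, 5])   # N, H, stride, W, stride, C
    return tf.reshape(out, [-1, h * stride, w * stride, c])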

Osculation answered 2/3, 2017 at 7:31 Comment(0)
W
5

I was searching for a max-unpooling operation and tried implementing it. I came up with a somewhat hacky implementation for the gradient, as I was struggling with CUDA.

The code is here; you will need to build TensorFlow from source with GPU support. Below is a demo application. No warranties, though!

There also exists an open issue for this operation.

import tensorflow as tf
import numpy as np

def max_pool(inp, k=2):
    return tf.nn.max_pool_with_argmax_and_mask(inp, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding="SAME")

def max_unpool(inp, argmax, argmax_mask, k=2):
    return tf.nn.max_unpool(inp, argmax, argmax_mask, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding="SAME")

def conv2d(inp, name):
    w = weights[name]
    b = biases[name]
    var = tf.nn.conv2d(inp, w, [1, 1, 1, 1], padding='SAME')
    var = tf.nn.bias_add(var, b)
    var = tf.nn.relu(var)
    return var

def conv2d_transpose(inp, name, dropout_prob):
    w = weights[name]
    b = biases[name]

    dims = inp.get_shape().dims[:3]
    dims.append(w.get_shape()[-2]) # adopt channels from weights (the weight definition for deconv has input and output channels switched!)
    out_shape = tf.TensorShape(dims)

    var = tf.nn.conv2d_transpose(inp, w, out_shape, strides=[1, 1, 1, 1], padding="SAME")
    var = tf.nn.bias_add(var, b)
    if dropout_prob is not None:
        var = tf.nn.relu(var)
        var = tf.nn.dropout(var, dropout_prob)
    return var


weights = {
    "conv1":    tf.Variable(tf.random_normal([3, 3,  3, 16])),
    "conv2":    tf.Variable(tf.random_normal([3, 3, 16, 32])),
    "conv3":    tf.Variable(tf.random_normal([3, 3, 32, 32])),
    "deconv2":  tf.Variable(tf.random_normal([3, 3, 16, 32])),
    "deconv1":  tf.Variable(tf.random_normal([3, 3,  1, 16])) }

biases = {
    "conv1":    tf.Variable(tf.random_normal([16])),
    "conv2":    tf.Variable(tf.random_normal([32])),
    "conv3":    tf.Variable(tf.random_normal([32])),
    "deconv2":  tf.Variable(tf.random_normal([16])),
    "deconv1":  tf.Variable(tf.random_normal([ 1])) }


## Build Miniature CEDN
x = tf.placeholder(tf.float32, [12, 20, 20, 3])
y = tf.placeholder(tf.float32, [12, 20, 20, 1])
p = tf.placeholder(tf.float32)

conv1                                   = conv2d(x, "conv1")
maxp1, maxp1_argmax, maxp1_argmax_mask  = max_pool(conv1)

conv2                                   = conv2d(maxp1, "conv2")
maxp2, maxp2_argmax, maxp2_argmax_mask  = max_pool(conv2)

conv3                                   = conv2d(maxp2, "conv3")

maxup2                                  = max_unpool(conv3, maxp2_argmax, maxp2_argmax_mask)
deconv2                                 = conv2d_transpose(maxup2, "deconv2", p)

maxup1                                  = max_unpool(deconv2, maxp1_argmax, maxp1_argmax_mask)
deconv1                                 = conv2d_transpose(maxup1, "deconv1", None)


## Optimizing Stuff
loss        = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(deconv1, y))
optimizer   = tf.train.AdamOptimizer(learning_rate=1).minimize(loss)


## Test Data
np.random.seed(123)
batch_x = np.where(np.random.rand(12, 20, 20, 3) > 0.5, 1.0, -1.0)
batch_y = np.where(np.random.rand(12, 20, 20, 1) > 0.5, 1.0,  0.0)
prob    = 0.5


with tf.Session() as session:
    tf.set_random_seed(123)
    session.run(tf.initialize_all_variables())

    print "\n\n"
    for i in range(10):
        session.run(optimizer, feed_dict={x: batch_x, y: batch_y, p: prob})
        print "step", i + 1
        print "loss",  session.run(loss, feed_dict={x: batch_x, y: batch_y, p: 1.0}), "\n\n"

Edit 29.11.17

Some time back, I reimplemented it in a clean fashion against TensorFlow 1.0; the forward operations are also available as a CPU version. You can find it in this branch; I recommend looking at the last few commits if you want to use it.

Wesson answered 26/8, 2016 at 13:6 Comment(2)
Don't you need a conv2d_transpose(conv3, "deconv3") first, before maxup2 = max_unpool(conv3, maxp2_argmax, maxp2_argmax_mask)?Chao
@RoxanaIstrate I guess you would do that if it were a real CEDN model. In principle you can plug in anything compliant with the layer dimensions of the unpooling part. The example was more to demonstrate the coupling of pooling and unpooling.Wesson
S
1

Nowadays there is a TensorFlow Addons layer, MaxUnpooling2D:

Unpool the outputs of a maximum pooling operation.

tfa.layers.MaxUnpooling2D(
    pool_size: Union[int, Iterable[int]] = (2, 2),
    strides: Union[int, Iterable[int]] = (2, 2),
    padding: str = 'SAME',
    **kwargs
)

This class can, for example, be used as follows:

import tensorflow as tf
import tensorflow_addons as tfa

inputs = tf.random.normal([1, 8, 8, 3])  # any 4-D NHWC input tensor

pooling, max_index = tf.nn.max_pool_with_argmax(inputs, 2, 2, padding='SAME')
unpooling = tfa.layers.MaxUnpooling2D()(pooling, max_index)
Shortlived answered 17/2, 2022 at 19:31 Comment(0)
A
0

I checked the implementation that shagas mentioned here and it is working:

import numpy as np
import tensorflow as tf

# Three identical 6x6 maps, each row [1, 1, 2, 2, 3, 3]; shape (1, 3, 6, 6).
x = np.tile(np.array([1, 1, 2, 2, 3, 3]), (1, 3, 6, 1))

inp = tf.convert_to_tensor(x)

# UnPooling2x2ZeroFilled is the function from the tensorpack pool.py linked above.
out = UnPooling2x2ZeroFilled(inp)

out
Out[19]: 
<tf.Tensor: id=36, shape=(1, 6, 12, 6), dtype=int64, numpy=
array([[[[1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0]],

        [[0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0]],

        [[1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0]],

        [[0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0]],

        [[1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0],
         [1, 1, 2, 2, 3, 3],
         [0, 0, 0, 0, 0, 0]],

        [[0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0]]]])>


out1 = tf.keras.layers.MaxPool2D()(out)

out1
Out[37]: 
<tf.Tensor: id=118, shape=(1, 3, 6, 6), dtype=int64, numpy=
array([[[[1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3]],

        [[1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3]],

        [[1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3],
         [1, 1, 2, 2, 3, 3]]]])>

If you need max unpooling, you can use (though I haven't checked it) this one.

Apportionment answered 4/11, 2019 at 5:30 Comment(0)
B
0

Here is my implementation. You should apply the max pooling using tf.nn.max_pool_with_argmax and then pass its argmax result to this function:

def unpooling(inputs, output_shape, argmax):
    """
    Performs unpooling, as explained in:
    https://www.oreilly.com/library/view/hands-on-convolutional-neural/9781789130331/6476c4d5-19f2-455f-8590-c6f99504b7a5.xhtml
    :param inputs: Input Tensor.
    :param output_shape: Desired output shape. For example, on 2D unpooling, this should be 4D (because of number of samples and channels).
    :param argmax: Result argmax from tf.nn.max_pool_with_argmax
        https://www.tensorflow.org/api_docs/python/tf/nn/max_pool_with_argmax
    """
    flat_output_shape = tf.cast(tf.reduce_prod(output_shape), tf.int64)

    updates = tf.reshape(inputs, [-1])
    indices = tf.expand_dims(tf.reshape(argmax, [-1]), axis=-1)

    ret = tf.scatter_nd(indices, updates, shape=[flat_output_shape])
    ret = tf.reshape(ret, output_shape)
    return ret

This has a small bug/feature: if argmax contains a repeated value, it performs an addition instead of writing the value just once. Beware of this if the stride is 1. I don't know, however, whether this is desired or not.
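A usage sketch under eager TF 2.x (shapes are just examples; include_batch_in_index=True makes the flattened argmax index into the whole batch, which is what the flat scatter above expects):

import tensorflow as tf

x = tf.random.normal([2, 4, 4, 3])
pooled, argmax = tf.nn.max_pool_with_argmax(
    x, ksize=2, strides=2, padding='SAME', include_batch_in_index=True)

# Scatter the pooled maxima back to their original positions.
restored = unpooling(pooled, tf.shape(x, out_type=tf.int64), argmax)
print(restored.shape)  # (2, 4, 4, 3)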

Buonarroti answered 18/3, 2021 at 16:1 Comment(0)