Why do Keras Conv1D layers' output tensors not have the input dimension?
According to the Keras documentation (https://keras.io/layers/convolutional/), the shape of a Conv1D output tensor is (batch_size, new_steps, filters), while the input tensor shape is (batch_size, steps, input_dim). I don't understand how this can be, since it implies that if you pass a 1D input of length 8000 where batch_size = 1 and steps = 1 (I've heard steps means the # of channels in your input), then this layer would have an output of shape (1, 1, X), where X is the number of filters in the Conv layer.

But what happens to the input dimension? Since the X filters in the layer are applied to the entire input dimension, shouldn't one of the output dimensions be 8000 (or less, depending on padding), i.e. something like (1, 1, 8000, X)?

I checked, and Conv2D layers behave in a way that makes more sense: their output shape is (samples, filters, new_rows, new_cols), where new_rows and new_cols are the dimensions of the input image, again adjusted based on padding. If Conv2D layers preserve their input dimensions, why don't Conv1D layers? Is there something I'm missing here?

Background Info:

I'm trying to visualize the 1D convolutional layer activations of my CNN, but most tools I've found online only seem to work for 2D convolutional layers, so I've decided to write my own code for it. I have a pretty good understanding of how it works; here is the code I've got so far:

import keras
import keras.backend as K

# all the model's activation layer output tensors
activation_output_tensors = [layer.output for layer in model.layers if type(layer) is keras.layers.Activation]

# make a function that computes activation layer outputs
activation_comp_function = K.function([model.input, K.learning_phase()], activation_output_tensors)

# 0 means learning phase = False (i.e. the model isn't learning right now)
activation_arrays = activation_comp_function([training_data[0,:-1], 0])

This code is based on julienr's first comment in this thread, with some modifications for the current version of Keras. Sure enough, when I use it, all the activation arrays have shape (1, 1, X). I spent all day yesterday trying to figure out why, but no luck; any help is greatly appreciated.

UPDATE: It turns out I mistook the meaning of the input dimension for the steps dimension. This is mostly because the architecture I used came from another group that built their model in Mathematica, and in Mathematica an input shape of (X, Y) to a Conv1D layer means X "channels" (i.e. an input_dim of X) and Y steps. Thank you to gionni for helping me realize this and for explaining so well how the "input_dim" becomes the "filter" dimension.
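For anyone with the same mixup, here is a minimal sketch (not from the original post; the filter count and kernel size are arbitrary) showing how the orientation of the input changes the output shape:

from keras.layers import Input, Conv1D
from keras.models import Model

# 8000 samples on the steps axis, input_dim = 1: the length is preserved
inp = Input(shape=(8000, 1))
out = Conv1D(filters=16, kernel_size=3, padding='same')(inp)
print(Model(inp, out).output_shape)  # (None, 8000, 16)

# 1 step, 8000 "channels": the whole signal collapses into the filter axis
inp = Input(shape=(1, 8000))
out = Conv1D(filters=16, kernel_size=1)(inp)
print(Model(inp, out).output_shape)  # (None, 1, 16)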

Flannel answered 13/7, 2017 at 16:56
I used to have the same problem with 2D convolutions. The thing is that when you apply a convolutional layer, the kernel you are applying is not of size (kernel_size, 1) but actually (kernel_size, input_dim).

If you think about it, if it weren't this way, a 1D convolutional layer with kernel_size = 1 would be doing nothing to the inputs it received.

Instead, it computes a weighted average of the input features at each time step, using the same weights for every time step (although each filter uses a different set of weights). I think it helps to visualize input_dim as the number of channels in a 2D convolution of an image, where the same reasoning applies (in that case it is the channels that "get lost" and are transformed into the number of filters).
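You can check this directly on a standalone layer (a quick sketch, not from the original answer; the shapes here are arbitrary):

from keras.layers import Input, Conv1D

inp = Input(shape=(4, 5))            # 4 steps, input_dim = 5
layer = Conv1D(filters=3, kernel_size=1)
out = layer(inp)                     # calling the layer builds its weights
print(layer.get_weights()[0].shape)  # (1, 5, 3) = (kernel_size, input_dim, filters)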

To convince yourself of this, you can reproduce the 1D convolution with a 2D convolution layer, using kernel_size=(1D_kernel_size, input_dim) and the same number of filters. Here is an example:

from keras.layers import Conv1D, Conv2D
import keras.backend as K
import numpy as np

# create an input with 4 steps and 5 channels/input_dim
channels = 5
steps = 4
filters = 3
val = np.array([list(range(i * channels, (i + 1) * channels)) for i in range(1, steps + 1)])
val = np.expand_dims(val, axis=0)
x = K.variable(value=val)

# 1D convolution. Initialize the kernels to ones so that it's easier to compute the result by hand

conv1d = Conv1D(filters=filters, kernel_size=1, kernel_initializer='ones')(x)

# 2D convolution that replicates the 1D one

# conv2d expects 4D inputs, so add a trailing dimension to the input. I add it
# as the last axis (axis 3) since my Keras is set up with `channels_last`
val1 = np.expand_dims(val, axis=3)
x1 = K.variable(value=val1)

conv2d = Conv2D(filters=filters, kernel_size=(1, channels), kernel_initializer='ones')(x1)

# evaluate and print the outputs

print(K.eval(conv1d))
print(K.eval(conv2d))
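If you run this, both prints show the same values. With all kernel weights equal to one (and the default zero bias), each output entry is just the sum of the 5 channel values at that step: 35, 60, 85 and 110, repeated across the 3 filters. conv1d has shape (1, 4, 3) and conv2d has shape (1, 4, 1, 3); only the extra singleton axis differs.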

As I said, it took me a while too to understand this, I think mostly because no tutorial explains it clearly.

Jacklighter answered 13/7, 2017 at 18:25
Thanks, it's very useful.

Here is the same code, adapted to a recent version of TensorFlow + Keras, stacking on axis 0 to build the 4D input:

# %%
from tensorflow.keras.layers import Conv1D, Conv2D
from tensorflow.keras.backend import eval
import tensorflow as tf
import numpy as np

# %%
# create a 3D input with format BLC (Batch, Length, Channels)
batch = 10
layers = 3
channels = 5
kernel = 2

val3D = np.random.randint(0, 100, size=(batch, layers, channels))
x = tf.Variable(val3D.astype('float32'))

# %%
# 1D convolution. Initialize the kernels to ones so that it's easier to compute the result by hand / compare
conv1d = Conv1D(filters=layers, kernel_size=kernel, kernel_initializer='ones')(x)

# %%
# 2D convolution that replicates the 1D one

# conv2d expects 4D inputs, so stack three copies of the same 3D array along
# a new axis 0; with `channels_last`, each copy is treated as one "image"
# whose rows are the original batch, whose columns are the steps, and whose
# channels are the 5 input channels
val4D = np.stack([val3D, val3D, val3D], axis=0)
x1 = tf.Variable(val4D.astype('float32'))

# %%
# 2D convolution. Set the first kernel dimension to 1 so that it replicates the Conv1D
conv2d = Conv2D(filters=layers, kernel_size=(1, kernel), kernel_initializer='ones')(x1)

# %%
# evaluate and print the outputs

print(eval(conv1d))
print('---------------------------------------------')
# display only one of the stacked
print(eval(conv2d)[0])
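Since the three stacked copies are identical, eval(conv2d)[0] matches the Conv1D output exactly: the (1, kernel) 2D kernel slides along the steps axis while summing over the 5 channels, just as the size-kernel 1D kernel does.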
Twandatwang answered 22/12, 2022 at 2:27
