Tensorflow : ValueError: Shape must be rank 2 but is rank 3

I'm new to TensorFlow, and I'm trying to update some code for a bidirectional LSTM from an old version of TensorFlow to the newest (1.0), but I get this error:

Shape must be rank 2 but is rank 3 for 'MatMul_3' (op: 'MatMul') with input shapes: [100,?,400], [400,2].

The error occurs on the pred_mod line.

    _weights = {
    # Hidden layer weights => 2*n_hidden because of forward + backward cells
        'w_emb' : tf.Variable(0.2 * tf.random_uniform([max_features,FLAGS.embedding_dim], minval=-1.0, maxval=1.0, dtype=tf.float32),name='w_emb',trainable=False),
        'c_emb' : tf.Variable(0.2 * tf.random_uniform([3,FLAGS.embedding_dim],minval=-1.0, maxval=1.0, dtype=tf.float32),name='c_emb',trainable=True),
        't_emb' : tf.Variable(0.2 * tf.random_uniform([tag_voc_size,FLAGS.embedding_dim], minval=-1.0, maxval=1.0, dtype=tf.float32),name='t_emb',trainable=False),
        'hidden_w': tf.Variable(tf.random_normal([FLAGS.embedding_dim, 2*FLAGS.num_hidden])),
        'hidden_c': tf.Variable(tf.random_normal([FLAGS.embedding_dim, 2*FLAGS.num_hidden])),
        'hidden_t': tf.Variable(tf.random_normal([FLAGS.embedding_dim, 2*FLAGS.num_hidden])),
        'out_w': tf.Variable(tf.random_normal([2*FLAGS.num_hidden, FLAGS.num_classes]))}

    _biases = {
         'hidden_b': tf.Variable(tf.random_normal([2*FLAGS.num_hidden])),
         'out_b': tf.Variable(tf.random_normal([FLAGS.num_classes]))}


    #~ input PlaceHolders
    seq_len = tf.placeholder(tf.int64,name="input_lr")
    _W = tf.placeholder(tf.int32,name="input_w")
    _C = tf.placeholder(tf.int32,name="input_c")
    _T = tf.placeholder(tf.int32,name="input_t")
    mask = tf.placeholder("float",name="input_mask")

    # Tensorflow LSTM cell requires 2x n_hidden length (state & cell)
    istate_fw = tf.placeholder("float", shape=[None, 2*FLAGS.num_hidden])
    istate_bw = tf.placeholder("float", shape=[None, 2*FLAGS.num_hidden])
    _Y = tf.placeholder("float", [None, FLAGS.num_classes])

    #~ transform into embeddings
    emb_x = tf.nn.embedding_lookup(_weights['w_emb'],_W)
    emb_c = tf.nn.embedding_lookup(_weights['c_emb'],_C)
    emb_t = tf.nn.embedding_lookup(_weights['t_emb'],_T)

    _X = tf.matmul(emb_x, _weights['hidden_w']) + tf.matmul(emb_c, _weights['hidden_c']) + tf.matmul(emb_t, _weights['hidden_t']) + _biases['hidden_b']

    inputs = tf.split(_X, FLAGS.max_sent_length, axis=0, num=None, name='split')

    lstmcell = tf.contrib.rnn.BasicLSTMCell(FLAGS.num_hidden, forget_bias=1.0,
                                            state_is_tuple=False)

    bilstm = tf.contrib.rnn.static_bidirectional_rnn(lstmcell, lstmcell, inputs,
                                                     sequence_length=seq_len,
                                                     initial_state_fw=istate_fw,
                                                     initial_state_bw=istate_bw)


    pred_mod = [tf.matmul(item, _weights['out_w']) + _biases['out_b'] for item in bilstm]

Any help appreciated.

Hardunn answered 6/3, 2017 at 9:15
What computation are you trying to perform? TensorFlow's tf.matmul() can perform individual matrix multiplications or batch matrix multiplications, but it needs information about the shapes to know which one it should do. – Tall
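
As an illustration of that comment (not part of the original post), here is a minimal TF 1.x sketch of the two cases; the shapes [100, ?, 400] and [400, 2] are the ones from the error message:

    import tensorflow as tf

    # Rank-2 x rank-2: a plain matrix multiplication -- this is what
    # tf.matmul(item, _weights['out_w']) expects.
    a = tf.placeholder(tf.float32, shape=[100, 400])
    w = tf.placeholder(tf.float32, shape=[400, 2])
    ok = tf.matmul(a, w)          # static shape: [100, 2]

    # Rank-3 x rank-2: mixing ranks is what triggers the error in the
    # question; it fails at graph-construction time.
    b = tf.placeholder(tf.float32, shape=[100, None, 400])
    # bad = tf.matmul(b, w)       # ValueError: Shape must be rank 2 but is rank 3

    # Batch matrix multiplication only happens when both operands are rank 3
    # with matching leading dimensions, e.g. [100, ?, 400] x [100, 400, 2].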

For anyone encountering this issue in the future, the snippet above should not be used.

From the tf.contrib.rnn.static_bidirectional_rnn v1.1 documentation:

Returns:

A tuple (outputs, output_state_fw, output_state_bw) where:
- outputs is a length T list of outputs (one for each input), which are depth-concatenated forward and backward outputs.
- output_state_fw is the final state of the forward rnn.
- output_state_bw is the final state of the backward rnn.

The list comprehension above expects the LSTM outputs, and the correct way to get those is to unpack the returned tuple:

outputs, _, _ = tf.contrib.rnn.static_bidirectional_rnn(lstmcell, lstmcell, ...)
pred_mod = [tf.matmul(item, _weights['out_w']) + _biases['out_b'] 
            for item in outputs]

This works because each item in outputs has the shape [batch_size, 2 * num_hidden] and can be multiplied with the weights by tf.matmul().
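
To make the shapes concrete, here is a small self-contained sketch (the sizes are hypothetical, not from the original post) that builds the same kind of graph and prints the static shapes:

    import tensorflow as tf

    num_hidden, num_classes, emb_dim, max_len = 200, 2, 50, 100   # hypothetical sizes

    # One rank-2 tensor per time step, as static_bidirectional_rnn expects.
    inputs = [tf.placeholder(tf.float32, [None, emb_dim]) for _ in range(max_len)]

    # Two separate cells for the forward and backward directions.
    cell_fw = tf.contrib.rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)
    cell_bw = tf.contrib.rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)

    outputs, _, _ = tf.contrib.rnn.static_bidirectional_rnn(
        cell_fw, cell_bw, inputs, dtype=tf.float32)

    out_w = tf.Variable(tf.random_normal([2 * num_hidden, num_classes]))
    out_b = tf.Variable(tf.random_normal([num_classes]))
    pred_mod = [tf.matmul(item, out_w) + out_b for item in outputs]

    print(outputs[0].get_shape())    # (?, 400)  -> [batch_size, 2 * num_hidden]
    print(pred_mod[0].get_shape())   # (?, 2)    -> [batch_size, num_classes]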


Add-on: from TensorFlow v1.2+, the recommended function to use is in another module: tf.nn.static_bidirectional_rnn. The return values are the same, so the code doesn't change much:

outputs, _, _ = tf.nn.static_bidirectional_rnn(lstmcell, lstmcell, ...)
pred_mod = [tf.matmul(item, _weights['out_w']) + _biases['out_b'] 
            for item in outputs]
Gainor answered 2/1, 2018 at 17:8
