For applications such as pair text similarity, the input data consists of two parts: pair_1 and pair_2. In these problems we usually have multiple inputs. Previously, I trained my model successfully with:
model.fit([pair_1, pair_2], labels, epochs=50)
I decided to replace my input pipeline with the tf.data API. To this end, I created a Dataset along these lines:
dataset = tf.data.Dataset.from_tensor_slices((pair_1, pair_2, labels))
It compiles successfully, but when training starts it throws the following exception:
AttributeError: 'tuple' object has no attribute 'ndim'
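For what it's worth, I wondered whether the element structure of the Dataset is the culprit: a flat 3-tuple does not match Keras's (inputs, targets) convention for a two-input model. A minimal sketch with toy data (the names and shapes here are my own, and it is written against a newer tf.data API where `element_spec` exists, not my installed version):

```python
import numpy as np
import tensorflow as tf

# Toy stand-ins for the real tensors: 4 examples, length-50 sequences
pair_1 = np.zeros((4, 50), dtype=np.int32)
pair_2 = np.zeros((4, 50), dtype=np.int32)
labels = np.zeros((4,), dtype=np.int32)

# What I build now: every element is a flat (sent1, sent2, label) 3-tuple
flat = tf.data.Dataset.from_tensor_slices((pair_1, pair_2, labels))

# Alternative nesting that mirrors Keras's (inputs, targets) convention:
# every element is ((sent1, sent2), label)
nested = tf.data.Dataset.from_tensor_slices(((pair_1, pair_2), labels))

print(flat.element_spec)    # three TensorSpecs
print(nested.element_spec)  # ((TensorSpec, TensorSpec), TensorSpec)
```

I don't know whether the nested form is what Keras expects here; I'm only noting that the two structures differ.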
My Keras and TensorFlow versions are 2.1.6 and 1.11.0, respectively. I found a similar issue in the TensorFlow repository: tf.keras multi-input models don't work when using tf.data.Dataset.
Does anyone know how to fix the issue?
Here is the main part of the code:
(q1_test, q2_test, label_test) = test
(q1_train, q2_train, label_train) = train
def tfdata_generator(sent1, sent2, labels, is_training, batch_size):
    '''Construct a data generator using tf.data'''
    dataset = tf.data.Dataset.from_tensor_slices((sent1, sent2, labels))
    if is_training:
        dataset = dataset.shuffle(1000)  # buffer size depends on sample size
    dataset = dataset.batch(batch_size)
    dataset = dataset.repeat()
    dataset = dataset.prefetch(tf.contrib.data.AUTOTUNE)
    return dataset

train_dataset = tfdata_generator(q1_train, q2_train, label_train, is_training=True, batch_size=_BATCH_SIZE)
test_dataset = tfdata_generator(q1_test, q2_test, label_test, is_training=False, batch_size=_BATCH_SIZE)
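As a sanity check on the batching itself (independent of the model), iterating one batch with toy data looks correct to me. Note this snippet uses TF 2-style eager iteration rather than my 1.11 setup, and the array names are made up:

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy data standing in for q1_train / q2_train / label_train
q1 = np.random.randint(0, 100, size=(10, 50))
q2 = np.random.randint(0, 100, size=(10, 50))
y = np.random.randint(0, 2, size=(10,))

ds = tf.data.Dataset.from_tensor_slices((q1, q2, y)).batch(4)
s1, s2, lab = next(iter(ds))  # eager iteration (default in TF 2.x)
print(s1.shape, s2.shape, lab.shape)  # (4, 50) (4, 50) (4,)
```

So each batch element is still a flat 3-tuple of tensors, batched along the first axis.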
inps1 = keras.layers.Input(shape=(50,))
inps2 = keras.layers.Input(shape=(50,))
embed = keras.layers.Embedding(input_dim=nb_vocab, output_dim=300, weights=[embedding], trainable=False)
embed1 = embed(inps1)
embed2 = embed(inps2)
gru = keras.layers.CuDNNGRU(256)
gru1 = gru(embed1)
gru2 = gru(embed2)
concat = keras.layers.concatenate([gru1, gru2])
preds = keras.layers.Dense(1, activation='sigmoid')(concat)
model = keras.models.Model(inputs=[inps1, inps2], outputs=preds)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
model.fit(
    train_dataset.make_one_shot_iterator(),
    steps_per_epoch=len(q1_train) // _BATCH_SIZE,
    epochs=50,
    validation_data=test_dataset.make_one_shot_iterator(),
    validation_steps=len(q1_test) // _BATCH_SIZE,
    verbose=1)