Not able to correctly feed a batch of images into model.fit

My model is designed to train on dual images. Since the dataset is very large, I used the tf.data.Dataset API to load it in batches, as suggested here. However, I had difficulty properly feeding a batch of images into the model for training. I looked up some possible solutions, to no avail. This is the code after those modifications:

import numpy as np
import tensorflow as tf

ds_train = tf.data.Dataset.zip((tr_inputs, tr_labels)).batch(64)
iterator = ds_train.make_one_shot_iterator()
next_batch = iterator.get_next()
result = list()
with tf.Session() as sess:
    try:
        while True:
            result.append(sess.run(next_batch))
    except tf.errors.OutOfRangeError:
        pass
# val_result is collected by an analogous loop over the validation dataset
tr_examples = np.array(list(zip(*result))[0])           # tr_examples[0][0].shape == (64, 224, 224, 3)
val_examples = np.array(list(zip(*val_result))[0])      # val_examples[0][0].shape == (64, 224, 224, 3)
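
For completeness, val_result comes from an analogous loop over the validation dataset. A minimal sketch, assuming the zipped validation dataset is named ds_val (built the same way as ds_train):

# collect validation batches the same way as the training batches
iterator_val = ds_val.make_one_shot_iterator()
next_val_batch = iterator_val.get_next()
val_result = list()
with tf.Session() as sess:
    try:
        while True:
            val_result.append(sess.run(next_val_batch))
    except tf.errors.OutOfRangeError:
        pass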

The training code snippet is as follows:

hist = base_model.fit((tr_examples[0][0], tr_examples[0][1]), epochs=epochs,  verbose=1,
                       validation_data=(val_examples[0][0], val_examples[0][1]), shuffle=True)

And the error trace:

Traceback (most recent call last):
  File "/home/user/00_files/project/DOUBLE_INPUT/dual_input.py", line 177, in <module>
    validation_data=(val_examples[0][0], val_examples[0][1]), shuffle=True)
  File "/home/user/.local/lib/python3.5/site-packages/keras/engine/training.py", line 955, in fit
    batch_size=batch_size)
  File "/home/user/.local/lib/python3.5/site-packages/keras/engine/training.py", line 754, in _standardize_user_data
    exception_prefix='input')
  File "/home/user/.local/lib/python3.5/site-packages/keras/engine/training_utils.py", line 90, in standardize_input_data
    data = [standardize_single_array(x) for x in data]
  File "/home/user/.local/lib/python3.5/site-packages/keras/engine/training_utils.py", line 90, in <listcomp>
    data = [standardize_single_array(x) for x in data]
  File "/home/user/.local/lib/python3.5/site-packages/keras/engine/training_utils.py", line 25, in standardize_single_array
    elif x.ndim == 1:
AttributeError: 'tuple' object has no attribute 'ndim'

Looking at the shapes of the inputs (see the comments in the code snippet), it should work. I guess there is only one step left, but I am not sure what is missing.

I am using Python 3.5, Keras 2.2.0, and tensorflow-gpu 1.9.0 on Ubuntu 16.04.

Help is much appreciated.

EDIT: after correcting the parentheses, it threw this error:

ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[[0.9607844 , 0.9607844 , 0.9607844 ],
         [0.9987745 , 0.9987745 , 0.9987745 ],
         [0.9960785 , 0.9960785 , 0.9960785 ],
         ...,
         [0.9609069 , 0.9609069 , 0.96017164...

Process finished with exit code 1
Drabbet answered 31/8, 2020 at 10:29 Comment(2)
Why do you turn the huge dataset into arrays to pass them through your network? My suggestion in the previous post is quite the opposite of that. Most importantly, is it possible for you to upgrade your Python and TensorFlow? People skilled in TF 1.x are getting more and more scarce. – Wig
As I mentioned in that thread, I do have both on my local machine and on a remote server, so feel free to give your insights for TF 2.x. – Drabbet
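
For reference, the approach suggested in the comment above skips materializing the batches into arrays and passes the dataset straight to fit. A minimal TF 2.x sketch, assuming base_model takes two image inputs and tr_inputs / val_inputs yield (image_a, image_b) pairs (these names are illustrative):

import tensorflow as tf

# Zipping produces ((image_a, image_b), label) elements, which Keras
# maps directly onto a two-input model.
ds_train = tf.data.Dataset.zip((tr_inputs, tr_labels)).batch(64)
ds_val = tf.data.Dataset.zip((val_inputs, val_labels)).batch(64)

hist = base_model.fit(ds_train, epochs=epochs, verbose=1,
                      validation_data=ds_val)

With a dataset input, Keras handles iteration itself; the shuffle argument is ignored, so shuffling would instead be done with ds_train.shuffle(buffer_size) before batching.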
hist = base_model.fit((tr_examples[0][0], tr_examples[0][1]), epochs=epochs,  verbose=1,
                       validation_data=(val_examples[0][0], val_examples[0][1]), shuffle=True)

should be:

hist = base_model.fit(tr_examples[0][0], tr_examples[0][1], epochs=epochs,  verbose=1,
                       validation_data=(val_examples[0][0], val_examples[0][1]), shuffle=True)

Note that while the validation_data parameter expects a tuple, the training input/label pair should not be passed as a tuple (i.e., remove the parentheses).
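
Given the follow-up ValueError in the question (the model expected to see 2 arrays), the model apparently has two inputs, and a multi-input Keras model takes its inputs as a list of arrays. A sketch, with input_a, input_b, and labels (plus their validation counterparts) as hypothetical names for the two image batches and the label batch:

# x is a list with one array per model input; y is a single label array
hist = base_model.fit([input_a, input_b], labels, epochs=epochs, verbose=1,
                      shuffle=True,
                      validation_data=([val_input_a, val_input_b], val_labels))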

Sarson answered 31/8, 2020 at 11:8 Comment(3)
@Drabbet if that's you, you shouldn't upvote answers that don't work... – Wig
validation_data doesn't necessarily expect a tuple. – Wig
@NicolasGervais it's not me though :) – Drabbet
