TensorFlow: training on my own images

I am new to TensorFlow. I am looking for help with image recognition, where I can train my own image dataset.

Is there any example of training on a new dataset?

Lift answered 20/5, 2016 at 7:7 Comment(2)
I have read this googleresearch.blogspot.hk/2016/03/… However, I have no idea where I should change the code.Lift
new link location ai.googleblog.com/2016/03/…Caramel

If you are interested in how to input your own data in TensorFlow, you can look at this tutorial.
I've also written a guide with best practices for CS230 at Stanford here.


New answer (with tf.data) and with labels

With the introduction of tf.data in r1.4, we can create a batch of images without placeholders and without queues. The steps are the following:

  1. Create a list containing the filenames of the images and a corresponding list of labels
  2. Create a tf.data.Dataset reading these filenames and labels
  3. Preprocess the data
  4. Create an iterator from the tf.data.Dataset which will yield the next batch

The code is:

# step 1
filenames = tf.constant(['im_01.jpg', 'im_02.jpg', 'im_03.jpg', 'im_04.jpg'])
labels = tf.constant([0, 1, 0, 1])

# step 2: create a dataset returning slices of `filenames`
dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))

# step 3: parse every image in the dataset using `map`
def _parse_function(filename, label):
    image_string = tf.read_file(filename)
    image_decoded = tf.image.decode_jpeg(image_string, channels=3)
    image = tf.cast(image_decoded, tf.float32)
    return image, label

dataset = dataset.map(_parse_function)
dataset = dataset.batch(2)

# step 4: create iterator and final input tensor
iterator = dataset.make_one_shot_iterator()
images, labels = iterator.get_next()

Now we can directly run sess.run([images, labels]) without feeding any data through placeholders.
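
For example, a minimal sketch of consuming these tensors in a TensorFlow 1.x session (assuming the four JPEGs exist and share the same dimensions, so they can be batched):

with tf.Session() as sess:
    try:
        while True:
            image_batch, label_batch = sess.run([images, labels])
            print(image_batch.shape, label_batch)
    except tf.errors.OutOfRangeError:
        # the one-shot iterator is exhausted
        pass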


Old answer (with TensorFlow queues)

To sum it up, you have multiple steps:

  1. Create a list of filenames (e.g. the paths to your images)
  2. Create a TensorFlow filename queue
  3. Read and decode each image, and resize it to a fixed size (necessary for batching)
  4. Output a batch of these images

The simplest code would be:

# step 1
filenames = ['im_01.jpg', 'im_02.jpg', 'im_03.jpg', 'im_04.jpg']

# step 2
filename_queue = tf.train.string_input_producer(filenames)

# step 3: read, decode and resize images
reader = tf.WholeFileReader()
filename, content = reader.read(filename_queue)
image = tf.image.decode_jpeg(content, channels=3)
image = tf.cast(image, tf.float32)
resized_image = tf.image.resize_images(image, [224, 224])

# step 4: Batching
image_batch = tf.train.batch([resized_image], batch_size=8)
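
To actually pull batches out of this queue-based pipeline, the queue runners have to be started inside a TensorFlow 1.x session; a minimal sketch:

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)

    batch = sess.run(image_batch)  # numpy array of shape (8, 224, 224, 3)
    print(batch.shape)

    coord.request_stop()
    coord.join(threads)
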
Marlette answered 20/5, 2016 at 9:58 Comment(25)
shuffle_batch() requires at least 4 arguments. After I added two more arguments, num_threads=1 and capacity=5000, it says: TypeError: 'Tensor' object is not iterable.Lift
You are right, the first argument of tf.train.batch or tf.train.shuffle_batch should be a list [image] instead of just image. I fixed it in the code.Marlette
@Olivier Moindrot Sorry, I still have an error. It says: ValueError: All shapes must be fully defined: [TensorShape([Dimension(None), Dimension(None), Dimension(3)])]. This error happens in the batching step.Lift
Once again you are right, I fixed it in the code. You have to resize all the images to the same shape to make a batch of them.Marlette
@Olivier Moindrot Thank you very much. It works now. I want to ask: after training the model, how can I classify an input image with my own database?Lift
If you have the labels of the training images, you should also get them as input and batch them with the images: image_batch, label_batch = tf.train.batch([resized_image, label], batch_size=8). Then you have to build a model with images as input and labels as output, refer to this tutorial for more info.Marlette
@Olivier Moindrot What if I am doing one-class classification? I only have data for one class, and I want to classify between "Target" and "Outlier". How can I make up an array of labels?Lift
Let us continue this discussion in chat.Marlette
resized_image = tf.image.resize_images(images, 224, 224) Here, the first argument of the resize_images method should be image instead of images, right?Mullens
resized_image = tf.image.resize_images(images, [224, 224])Giza
Where do the image labels go?Demimonde
Thanks @olivier-moindrot, but what if I have a batch of pictures in .tif format?Vermiform
Thanks @olivier-moindrot, I used tf.image.decode_gif and this is my DataSetGen code, but I don't know if this is the right way.Vermiform
How to handle the labels in such a case?Lulululuabourg
@OlivierMoindrot I got an out-of-range error at get_next when there are only 2 files and 2 matching labels [0,1], instead of 4 files and labels [0,1,0,1].Crusty
@datdinhquoc: if you have only two files and labels, with a batch size of 2, you can only do one iteration and then you will receive an OutOfRange error.Marlette
After training on my data, how can I use the model to detect an image?Cocoa
What if the images have more than 3 channels and the file format is .mat? Will the code be the same?Chericheria
@AadnanFarooqA: in this case you need to change the _parse_function to read the .mat fileMarlette
From step 1, I have hundreds of images stored in folders like Root directory -> Class1 -> images; Class2 -> images; Class3 -> images. How will I read all the images with the folder name as the label?Chericheria
You can just get all the filenames and labels in Python, then use my code to put them into TensorFlow.Marlette
for Tensorflow 2, replace tf.read_file(filename) with tf.io.read_file(filename)Pursuer
Hey @OlivierMoindrot, thank you for this answer. I'm encountering an issue understanding the expected input for a TFLite model. Can you please take a look? #63486940Torse
How do you resize the images in the new version?Citizenship
@SamanthaCruz: you can add it in the _parse_functionMarlette
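
For reference, the resizing mentioned above can be added inside _parse_function from the new answer; a minimal sketch assuming a 224x224 target size:

def _parse_function(filename, label):
    image_string = tf.read_file(filename)
    image_decoded = tf.image.decode_jpeg(image_string, channels=3)
    image_resized = tf.image.resize_images(image_decoded, [224, 224])  # fixed size so batching works
    image = tf.cast(image_resized, tf.float32)
    return image, label
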

Based on @olivier-moindrot's answer, but for Tensorflow 2.0+:

# step 1
filenames = tf.constant(['im_01.jpg', 'im_02.jpg', 'im_03.jpg', 'im_04.jpg'])
labels = tf.constant([0, 1, 0, 1])

# step 2: create a dataset returning slices of `filenames`
dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))

def im_file_to_tensor(file, label):
    def _im_file_to_tensor(file, label):
        path = f"../foo/bar/{file.numpy().decode()}"
        im = tf.image.decode_jpeg(tf.io.read_file(path), channels=3)
        im = tf.cast(im, tf.float32) / 255.0
        return im, label
    return tf.py_function(_im_file_to_tensor, 
                          inp=(file, label), 
                          Tout=(tf.float32, tf.uint8))

dataset = dataset.map(im_file_to_tensor)
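
In eager mode (TensorFlow 2.x) the mapped dataset can then be batched and iterated directly; a minimal sketch, assuming the JPEGs exist under ../foo/bar/ and share the same dimensions:

for im_batch, label_batch in dataset.batch(2):
    print(im_batch.shape, label_batch.numpy())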

If you are hitting an issue similar to:

ValueError: Cannot take the length of Shape with unknown rank

when passing tf.data.Dataset tensors to model.fit, then take a look at https://github.com/tensorflow/tensorflow/issues/24520. A fix for the code snippet above would be:

def im_file_to_tensor(file, label):
    def _im_file_to_tensor(file, label):
        path = f"../foo/bar/{file.numpy().decode()}"
        im = tf.image.decode_jpeg(tf.io.read_file(path), channels=3)
        im = tf.cast(im, tf.float32) / 255.0
        return im, label

    file, label = tf.py_function(_im_file_to_tensor, 
                                 inp=(file, label), 
                                 Tout=(tf.float32, tf.uint8))
    file.set_shape([192, 192, 3])
    label.set_shape([])
    return (file, label)
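
With the shapes set, the dataset can be batched and passed straight to model.fit; a minimal sketch, where the small Keras classifier below is only a placeholder:

dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
dataset = dataset.map(im_file_to_tensor).batch(2)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(192, 192, 3)),
    tf.keras.layers.Dense(2, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(dataset, epochs=1)
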
Sarene answered 1/2, 2020 at 22:12 Comment(0)

2.0-compatible answer using TensorFlow Hub: TensorFlow Hub is a library offered by TensorFlow that provides models developed by Google for text and image datasets.

It saves thousands of hours of training time and computational effort, as it reuses existing pre-trained models.

If we have an image dataset, we can take an existing pre-trained model from TF Hub and adapt it to our dataset.

Code for retraining our image dataset using the pre-trained model MobileNet is shown below:

import itertools
import os

import matplotlib.pylab as plt
import numpy as np

import tensorflow as tf
import tensorflow_hub as hub

module_selection = ("mobilenet_v2_100_224", 224)
handle_base, pixels = module_selection
MODULE_HANDLE = "https://tfhub.dev/google/imagenet/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {}".format(MODULE_HANDLE, IMAGE_SIZE))

BATCH_SIZE = 32

# Here we need to pass our dataset
data_dir = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)

# Read the images and their labels (the class subfolder names) from data_dir
datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
train_generator = datagen.flow_from_directory(
    data_dir, target_size=IMAGE_SIZE, batch_size=BATCH_SIZE)

do_fine_tuning = False  # set to True to also train the feature extractor

model = tf.keras.Sequential([
    hub.KerasLayer(MODULE_HANDLE, trainable=do_fine_tuning),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(train_generator.num_classes, activation='softmax',
                          kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
model.build((None,) + IMAGE_SIZE + (3,))
model.summary()
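
To actually retrain on the flower photos, the model is then compiled and fit on the generator; a minimal sketch roughly following the TF Hub retraining tutorial (the optimizer settings and epoch count here are illustrative):

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.005, momentum=0.9),
    loss='categorical_crossentropy',
    metrics=['accuracy'])

steps_per_epoch = train_generator.samples // train_generator.batch_size
model.fit(train_generator, epochs=5, steps_per_epoch=steps_per_epoch)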

The complete code for the image retraining tutorial can be found in this GitHub link.

More information about TensorFlow Hub can be found in this TF blog post.

The pre-trained modules related to images can be found at this TF Hub link.

All the pre-trained modules related to images, text, videos, etc. can be found at this TF Hub modules link.

Finally, this is the basic page for TensorFlow Hub.

Semiporcelain answered 28/1, 2020 at 9:55 Comment(0)

If your dataset consists of subfolders (one folder per class), you can use ImageDataGenerator: its flow_from_directory method loads data from a directory.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_batches = ImageDataGenerator().flow_from_directory(
    directory=train_path, target_size=(img_height, img_width),
    batch_size=32, color_mode="grayscale")

The folder hierarchy can be structured as follows:

train 
    -- cat
    -- dog
    -- monkey
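
The class subfolder names (cat, dog, monkey) become the labels automatically. Training on such a generator could then look like the following minimal sketch, where the tiny classifier is only a placeholder and img_height/img_width are assumed to be defined as above:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(img_height, img_width, 1)),  # grayscale input
    tf.keras.layers.Dense(3, activation='softmax')                    # one unit per class
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_batches, epochs=10)
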
Sumac answered 7/11, 2021 at 11:14 Comment(0)
