I have a set of pictures that I need to classify.
The thing is, I do not really have any prior knowledge about these images. So my plan is to compute as many descriptors as I can find and then run a PCA on them to identify the descriptors that are actually useful to me.
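A minimal sketch of what I have in mind, using sklearn's PCA; extract_descriptors and the random images are placeholders for my real descriptor code and data:

import numpy as np
from sklearn.decomposition import PCA

def extract_descriptors(image):
    # Placeholder: in reality this would stack every descriptor I can find.
    return np.array([image.mean(), image.std(), image.max() - image.min()])

images = [np.random.rand(64, 64) for _ in range(100)]  # dummy data
X = np.vstack([extract_descriptors(img) for img in images])

# Keep as many components as needed to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(pca.explained_variance_ratio_)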
I can do supervised learning on a lot of data points, if that helps. However, there is a chance that the pictures are connected to each other, meaning there could be a progression from image X to image X+1, although I kinda hope this gets sorted out by the information within each image.
My questions are:
- How do I best do this in Python? (I want to build a proof of concept first, where speed is a non-issue.) Which libraries should I use?
- Are there existing examples of image classification of this kind, i.e. using a bunch of descriptors and cooking them down via PCA? This part is kinda scary for me, to be honest, although I suspect Python already has something that does this for me.
Edit: I have found a neat kit that I am currently trying out for this: http://scikit-image.org/ There seem to be some descriptors in there. Is there a way to do automatic feature extraction and rank the features according to their descriptive power towards the target classification? PCA should be able to do the ranking automatically.
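As far as I can tell, though, PCA ranks components by explained variance, not by how well a feature predicts the labels; for a target-aware ranking, sklearn's univariate feature selection looks closer to what I'm after. A sketch combining a scikit-image descriptor (hog) with SelectKBest; the random images and labels stand in for my data:

import numpy as np
from skimage.feature import hog
from sklearn.feature_selection import SelectKBest, f_classif

images = [np.random.rand(64, 64) for _ in range(50)]  # dummy data
y = np.random.randint(0, 2, size=50)                  # dummy class labels

# One HOG feature vector per image.
X = np.vstack([hog(img) for img in images])

# Score every feature against the labels and keep the 20 best.
selector = SelectKBest(f_classif, k=20)
X_selected = selector.fit_transform(X, y)
print(selector.scores_[:10])  # per-feature F-scores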
Edit 2: I have refined my framework for storing the data a bit. I will be using the FAT file system as a database, with one folder for each combination of classes that occurs. So if an image belongs to classes 1 and 2, there will be a folder img12 that contains those images. This way I can better control the amount of data I have for each class.
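A sketch of reading that layout back into (path, classes) pairs; the root name is hypothetical and I'm assuming single-digit class ids, so "img12" means classes 1 and 2:

import os

root = "dataset"  # hypothetical root folder
samples = []
for folder in os.listdir(root):
    classes = [int(c) for c in folder[len("img"):]]  # "img12" -> [1, 2]
    for name in os.listdir(os.path.join(root, folder)):
        samples.append((os.path.join(root, folder, name), classes))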
Edit 3: I found an example for a Python library (sklearn) that does roughly what I want: recognizing hand-written digits. I am trying to convert my dataset into something I can use with it.
Here is the example I found using sklearn:
import pylab as pl
# Import datasets, classifiers and performance metrics
from sklearn import datasets, svm, metrics
# The digits dataset
digits = datasets.load_digits()
# The data that we are interested in is made of 8x8 images of digits,
# let's have a look at the first 3 images, stored in the `images`
# attribute of the dataset. If we were working from image files, we
# could load them using pylab.imread. For these images we know which
# digit they represent: it is given in the 'target' of the dataset.
for index, (image, label) in enumerate(list(zip(digits.images, digits.target))[:4]):
    pl.subplot(2, 4, index + 1)
    pl.axis('off')
    pl.imshow(image, cmap=pl.cm.gray_r, interpolation='nearest')
    pl.title('Training: %i' % label)
# To apply a classifier on this data, we need to flatten the images, to
# turn the data into a (samples, features) matrix:
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
# Create a classifier: a support vector classifier
classifier = svm.SVC(gamma=0.001)
# We learn the digits on the first half of the digits
classifier.fit(data[:n_samples // 2], digits.target[:n_samples // 2])
# Now predict the value of the digit on the second half:
expected = digits.target[n_samples // 2:]
predicted = classifier.predict(data[n_samples // 2:])
print("Classification report for classifier %s:\n%s\n"
% (classifier, metrics.classification_report(expected, predicted)))
print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted))
for index, (image, prediction) in enumerate(
        list(zip(digits.images[n_samples // 2:], predicted))[:4]):
    pl.subplot(2, 4, index + 5)
    pl.axis('off')
    pl.imshow(image, cmap=pl.cm.gray_r, interpolation='nearest')
    pl.title('Prediction: %i' % prediction)
pl.show()
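To feed my own images into this, I think I need the same (n_samples, n_features) layout as digits. A sketch of the conversion, assuming the (path, classes) pairs from the folder walk above and scikit-image for loading and resizing; load_dataset, the 8x8 shape, and the single-label simplification are my own choices:

import numpy as np
from skimage.io import imread
from skimage.transform import resize

def load_dataset(samples, shape=(8, 8)):
    data, targets = [], []
    for path, classes in samples:
        img = imread(path, as_gray=True)  # load as grayscale
        img = resize(img, shape)          # bring every image to the same size
        data.append(img.ravel())          # flatten to one feature row
        targets.append(classes[0])        # one label per image for now
    return np.array(data), np.array(targets)

The resulting data and targets arrays should then drop straight into the fit/predict calls above.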