I've just read through the Wikipedia page about SVMs, and this line caught my eye: "If the kernel used is a Gaussian radial basis function, the corresponding feature space is a Hilbert space of infinite dimensions." http://en.wikipedia.org/wiki/Support_vector_machine#Nonlinear_classification
In my understanding, if I apply a Gaussian kernel in an SVM, the resulting feature space will be m-dimensional (where m is the number of training samples), since you choose your landmarks to be your training examples, and you measure the "similarity" between a specific example and all the examples with the Gaussian kernel. As a consequence, for a single example you'll have as many similarity values as training examples. These become the new feature vectors, which are m-dimensional vectors, not infinite-dimensional ones.
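To make concrete what I mean, here is a minimal sketch of that feature construction (my own illustration; the `gamma` value and the choice of training examples as landmarks are assumptions):

```python
import numpy as np

def rbf_features(X, landmarks, gamma=1.0):
    """Map each row of X to its Gaussian similarities to the landmarks.

    With the m training examples used as landmarks, each example
    becomes an m-dimensional vector of similarity values.
    """
    # Squared Euclidean distances between every example and every landmark
    sq_dists = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq_dists)  # shape: (n_examples, m)

X_train = np.random.rand(5, 2)             # m = 5 training examples
features = rbf_features(X_train, X_train)  # each example -> 5-dim vector
print(features.shape)                      # (5, 5)
```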
Could somebody explain to me what I'm missing?
Thanks, Daniel
m is only the upper bound -- the whole point of the SVM is to pick a sparse set of support vectors from the training samples. – Importunacy
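For instance, a quick illustrative check of that sparsity (assuming scikit-learn is available; the synthetic dataset and parameters are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
clf = SVC(kernel="rbf").fit(X, y)

# Often only a subset of the 200 training samples end up as support vectors
print(len(clf.support_), "of", len(X), "samples are support vectors")
```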