Yes, you are correct: if f(z) is positive, the instance belongs to class +1; if it is negative, it belongs to class -1. The magnitude of f(z) is not interpretable.
While the function:
f(z) = sign(w*z+b)
looks like the equation of a hyperplane, it differs in that w is not a unit normal vector: its length is not necessarily 1, so the value of w*z+b is not the distance from the hyperplane. This is why it is written as sign(...), to make it clear that the value is only used to determine which side of the hyperplane the instance falls on.
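A quick numeric sketch of that point, with a made-up w, b, and test point: the raw value w*z+b only becomes a distance after dividing by ||w||.

```python
import numpy as np

# Hypothetical weight vector and bias; note ||w|| = 5, not 1.
w = np.array([3.0, 4.0])
b = -2.0
z = np.array([1.0, 1.0])

raw = np.dot(w, z) + b                  # raw decision value: 3 + 4 - 2 = 5
signed_dist = raw / np.linalg.norm(w)   # true signed distance: 5 / 5 = 1

print(np.sign(raw))    # which side of the hyperplane z falls on
print(raw)             # 5.0 -- not the distance
print(signed_dist)     # 1.0 -- the actual signed distance
```

Only the sign of `raw` is meaningful for classification; its magnitude depends on the (arbitrary) scale of w.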
Some background:
The goal is to find the hyperplane which gives the maximum margin between the two classes.
So, the purpose is to maximize the margin, which is 2/||w||, and therefore to minimize ||w||. Remember, usually when w is used to represent a hyperplane by its normal vector, ||w|| is 1. That clearly isn't the case here, as there would then be no optimization problem. Instead of keeping ||w|| = 1 and varying the width of the margin, we've fixed the width of the margin to 2 (in the sense that w*x+b runs from -1 to +1 across it) and are allowing ||w|| to vary in size instead.
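As a quick check with a made-up w: the geometric distance between the two margin hyperplanes w*x+b = +1 and w*x+b = -1 works out to 2/||w||.

```python
import numpy as np

w = np.array([3.0, 4.0])   # hypothetical weight vector, ||w|| = 5
b = 1.0

# Walk along the unit normal to find points on the two margin hyperplanes:
# w.x + b = +1 at x_plus, and w.x + b = -1 at x_minus.
unit = w / np.linalg.norm(w)
x_plus = unit * (1 - b) / np.linalg.norm(w)
x_minus = unit * (-1 - b) / np.linalg.norm(w)

margin = np.linalg.norm(x_plus - x_minus)
print(margin)                        # 0.4
print(2 / np.linalg.norm(w))         # also 0.4, i.e. 2/||w||
```

Shrinking ||w|| widens this gap, which is why minimizing ||w|| maximizes the margin.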
This gives us the primal optimization problem (with soft margins):

minimize over w, b, ξ:   (1/2)||w||^2 + C * Σ_i ξ_i
subject to:   y_i(w*x_i + b) >= 1 - ξ_i,   ξ_i >= 0 for all i
This seems to be what you are referring to. However, this formulation comes from the basic soft maximum-margin classifier, which is the foundation of the SVM. The true SVM is formulated as a Lagrangian dual to allow the use of kernels. The neat thing about the SVM is that when the above problem (and its constraints) are formulated as a Lagrangian, all the variables except the Lagrange multipliers α_i drop out, leaving us with the following problem:

maximize over α:   Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i * x_j)
subject to:   0 <= α_i <= C,   Σ_i α_i y_i = 0
Notice there is no w. The training points x_i (the y_i are the labels, 1 or -1) now only appear together as a dot product, allowing us to employ the kernel trick, replacing x_i * x_j with a kernel function K(x_i, x_j), to obtain a non-linear model.
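A minimal sketch of that substitution, with made-up 2-D points: the dual only needs the matrix of pairwise products, so swapping the plain dot-product Gram matrix for a kernel matrix (here the RBF kernel) is the entire trick; the optimization itself is unchanged.

```python
import numpy as np

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # toy training points

# Linear case: the Gram matrix of plain dot products x_i . x_j
linear_gram = X @ X.T

# Kernel case: K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2), the RBF kernel.
gamma = 0.5
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
rbf_gram = np.exp(-gamma * sq_dists)

# Either matrix can be handed to the dual problem; only rbf_gram
# yields a non-linear decision boundary in the original space.
print(linear_gram)
print(rbf_gram)
```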
But if we don't have w, what is our decision function? It becomes a function of our support vectors and the Lagrange multipliers we found:

f(z) = sign(Σ_i α_i y_i K(x_i, z) + b)

where the sum runs only over the support vectors, the training points with α_i > 0.
This is what libsvm produces and what it stores as the model you have trained: the support vectors and the associated alphas. For a linear SVM you can obtain the primal w, as explained in the LibSVM FAQ, but it is not what you get back automatically from LibSVM, and it can only be done for the linear kernel.
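A numpy sketch of that recovery, with made-up support vectors and alphas standing in for a stored libsvm model: for the linear kernel the dual expansion folds into a single primal vector w = Σ_i α_i y_i x_i, and both forms give the same decision value.

```python
import numpy as np

# Hypothetical stored model: support vectors, their labels, alphas, bias.
sv = np.array([[0.0, 1.0], [2.0, 2.0]])   # support vectors x_i
sv_y = np.array([-1.0, 1.0])              # labels y_i
alpha = np.array([0.4, 0.4])              # Lagrange multipliers alpha_i
b = -1.2                                  # bias term

# Linear kernel only: collapse the expansion into the primal w,
# w = sum_i alpha_i * y_i * x_i  (the recipe the LibSVM FAQ describes).
w = (alpha * sv_y) @ sv

z = np.array([3.0, 1.0])                  # a test point

# Dual-form decision value: sum_i alpha_i * y_i * (x_i . z) + b
dual_val = np.sum(alpha * sv_y * (sv @ z)) + b
# Primal-form decision value: w . z + b
primal_val = np.dot(w, z) + b

print(w)                      # the recovered primal weight vector
print(dual_val, primal_val)   # identical by construction
```

With a non-linear kernel there is no such w in the input space, which is why this shortcut exists only for the linear case.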
Likewise, the value of the SVM decision function based on the Lagrange multipliers and support vectors should only be interpreted by its sign.