What exactly you mean by "classification" matters a lot here.
Classification is a supervised task: it requires a pre-labeled corpus to start from. From that labeled corpus you build a model using one of several methods and approaches, and then use the model to classify an unlabeled test corpus. If this is your case, you can use a multi-class classifier, which is generally built by combining binary classifiers (e.g. in a binary-tree or one-vs-rest scheme). The state-of-the-art approach for this kind of task is SVM, a branch of machine learning. Two of the best-known SVM implementations are LibSVM and SVMlight; both are open source, easy to use and include multi-class classification tools. Finally, do a literature survey to understand what else you need to get good results, because these classifiers are not enough on their own. You have to manipulate/pre-process your corpus to extract the information-bearing parts (e.g. unigrams) and exclude the noisy ones. In general you most probably have a long way to go, but NLP is a very interesting topic and well worth working on.
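To make the supervised pipeline concrete, here is a minimal sketch using scikit-learn's linear SVM (LinearSVC wraps the same family of solvers as LibSVM/SVMlight and handles multi-class via one-vs-rest). The document lists and label names are placeholders I made up; in practice you would load your own labeled corpus.

```python
# Minimal sketch: multi-class text classification with a linear SVM.
# `train_docs`, `train_labels` and `test_docs` are placeholder data --
# substitute your own pre-labeled corpus here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

train_docs = ["the match ended in a draw", "stocks fell sharply today"]
train_labels = ["sports", "finance"]
test_docs = ["the match was a thrilling final"]

# Unigram bag-of-words features with tf-idf weighting; English stop words
# are dropped as a simple form of noise removal.
vectorizer = TfidfVectorizer(ngram_range=(1, 1), stop_words="english")
X_train = vectorizer.fit_transform(train_docs)
X_test = vectorizer.transform(test_docs)

# LinearSVC trains one-vs-rest binary SVMs internally, so multi-class
# classification works out of the box.
clf = LinearSVC()
clf.fit(X_train, train_labels)
print(clf.predict(X_test))
```

The pre-processing step (here just tf-idf on unigrams with stop-word removal) is where most of the corpus-specific work happens, as mentioned above.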
However, if what you mean by classification is actually clustering, the problem gets more complicated. Clustering is an unsupervised task, which means you give the program no information about which example belongs to which group/topic/class. There is also academic work on hybrid semi-supervised approaches, but these diverge somewhat from the real purpose of the clustering problem. The pre-processing you need while preparing your corpus is similar in nature to what you have to do for classification, so I will not repeat it. For the clustering itself there are several approaches you can follow. First, you can use LDA (Latent Dirichlet Allocation) to reduce the dimensionality of your corpus (the number of dimensions in your feature space), which helps both efficiency and the amount of information carried by each feature. Alongside or after LDA, you can apply Hierarchical Clustering or other methods such as K-Means to cluster your unlabeled corpus. Gensim and Scikit-Learn are good open-source tools for this; both are powerful, well documented and easy to use.
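Here is a minimal sketch of that LDA-then-K-Means pipeline, using scikit-learn only (Gensim has equivalent LDA functionality). The `docs` list and the topic/cluster counts are arbitrary placeholders; you would tune both for your own corpus.

```python
# Minimal sketch: LDA for dimensionality reduction, then K-Means clustering.
# `docs` is a placeholder for your unlabeled corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

docs = [
    "the team won the championship match",
    "the player scored in the final minute",
    "the market rallied after the earnings report",
    "investors sold shares amid inflation fears",
]

# LDA works on raw term counts, so use CountVectorizer rather than tf-idf.
counts = CountVectorizer(stop_words="english").fit_transform(docs)

# Reduce each document to a small topic distribution (2 topics here),
# i.e. the lower-dimensional feature space mentioned above.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

# Cluster the documents in topic space with K-Means.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
print(kmeans.fit_predict(doc_topics))
```

Hierarchical clustering would slot in the same way (e.g. scikit-learn's AgglomerativeClustering in place of KMeans) if you prefer a dendrogram-style grouping.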
In all cases, read a lot of the academic literature and try to understand the theory behind these tasks and problems. That way you can come up with innovative and efficient solutions for whatever you are specifically dealing with, because NLP problems are generally corpus-dependent and you are usually on your own with your specific problem. It is very difficult to find generic, ready-to-use solutions, and I would not recommend relying on one either.
I may have over-answered your question; sorry for the irrelevant parts.
Good luck =)