Very simple text classification by machine learning? [duplicate]

Possible Duplicate:
Text Classification into Categories

I am currently working on a solution to determine the type of food served by each of the ~10k restaurants in a database, based on their descriptions. I'm using lists of keywords to decide which kind of food is being served.

I read a little bit about machine learning, but I have no practical experience with it at all. Can anyone explain if/why it would be a better solution to a simple problem like this? Accuracy is more important to me than performance!

A simplified example of my keyword lists:

["China", "Chinese", "Rice", "Noodles", "Soybeans"]
["Belgium", "Belgian", "Fries", "Waffles", "Waterzooi"]

A possible description could be:

"Hong's Garden Restaurant offers savory, reasonably priced Chinese to our customers. If you find that you have a sudden craving for rice, noodles or soybeans at 8 o’clock on a Saturday evening, don’t worry! We’re open seven days a week and offer carryout service. You can get fries here as well!"

Dearman answered 9/12, 2012 at 14:20
It's difficult to make a practical suggestion here... it's a rather specific problem. You could use natural language processing (such as NLTK) to extract the nouns, and then use PyBrain to train a neural net, but ultimately, were this for commercial purposes and I couldn't rely on machine learning to be completely accurate, I'd be inclined to split the DB into chunks of 500 and employ 20 people for a day's work. – Sleeper
(+1 to Jon Clements) Rather than hire 20 people, I would get 1–2 people (possibly myself) to label 500, and then use Mechanical Turk (or a competitor) to label the rest, using the labelled cases as ground truth and redundant assignments to check the Turkers' work. – Banderole

You are indeed describing a classification problem, which can be solved with machine learning.

In this problem, your features are the words in the description. You should use the Bag of Words model, which basically says that the words and the number of occurrences of each word are what matter to the classification process.
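
As a concrete illustration, here is a minimal sketch of the Bag of Words representation, assuming a recent scikit-learn (the two descriptions are made up):

    from sklearn.feature_extraction.text import CountVectorizer

    # Made-up descriptions; in practice these come from your database.
    descriptions = [
        "Chinese rice and noodles, noodles with soybeans.",
        "Belgian fries, waffles and waterzooi.",
    ]

    vectorizer = CountVectorizer()
    counts = vectorizer.fit_transform(descriptions)  # sparse count matrix

    print(vectorizer.get_feature_names_out())  # the vocabulary (the features)
    print(counts.toarray())                    # per-description word counts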

To solve your problem, here are the steps you should follow (a sketch of the whole pipeline comes after the list):

  1. Create a feature extractor that, given the description of a restaurant, returns the "features" of that restaurant under the Bag of Words model explained above (each such instance is denoted an example in the literature).
  2. Manually label a set of examples; each will be labeled with its desired class (Chinese, Belgian, junk food, ...).
  3. Feed your labeled examples into a learning algorithm, which will generate a classifier. From personal experience, SVM usually gives the best results, but there are other choices such as Naive Bayes, Neural Networks and Decision Trees (usually C4.5), each with its own advantages.
  4. When a new (unlabeled) example (restaurant) comes in, extract its features and feed them to your classifier; it will tell you which class it thinks the example belongs to (and usually with what probability it believes it is correct).
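
Here is a minimal sketch of the whole pipeline, assuming scikit-learn (the tiny hand-labeled training set is made up for illustration, and a linear SVM stands in for step 3):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Step 2: a handful of made-up labeled examples; you would label
    # a few hundred real descriptions instead.
    train_texts = [
        "Savory Chinese dishes: rice, noodles and soybeans.",
        "Belgian fries, waffles and waterzooi served daily.",
        "Dim sum, fried rice and noodle soups all week long.",
        "Classic Belgian beers with a big cone of fries.",
    ]
    train_labels = ["chinese", "belgian", "chinese", "belgian"]

    # Steps 1 and 3: Bag of Words feature extraction plus a linear SVM.
    model = make_pipeline(CountVectorizer(), LinearSVC())
    model.fit(train_texts, train_labels)

    # Step 4: classify a new, unlabeled description.
    print(model.predict(["Hong's Garden offers rice and noodles."]))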

Evaluation:
Evaluation of your algorithm can be done with cross-validation, or by separating out a test set from your labeled examples that is used only to evaluate how accurate the algorithm is.
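
A minimal cross-validation sketch, assuming scikit-learn (the labeled data below is made up for illustration):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Made-up labeled data; replace with your own labeled descriptions.
    texts = [
        "Rice, noodles and soybean dishes.",
        "Fries, waffles and waterzooi.",
        "Dim sum and fried rice all week.",
        "Belgian beers and fresh fries.",
        "Noodle soups and steamed rice.",
        "Waffles with Belgian chocolate.",
    ]
    labels = ["chinese", "belgian"] * 3

    model = make_pipeline(CountVectorizer(), LinearSVC())

    # 3-fold cross-validation: each fold is held out once for testing.
    scores = cross_val_score(model, texts, labels, cv=3)
    print(scores, scores.mean())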


Optimizations:

From personal experience, here are some optimizations I have found helpful for feature extraction (a short sketch of the first two comes after the list):

  1. Stemming and eliminating stop words usually helps a lot.
  2. Using bi-grams tends to improve accuracy (though it increases the feature space significantly).
  3. Some classifiers struggle with a large feature space (SVM is not one of them). There are ways to overcome this, such as reducing the dimensionality of your features; PCA is one technique that can help, and genetic algorithms are also (empirically) pretty good for feature subset selection.
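
A short sketch of points 1 and 2, assuming scikit-learn together with NLTK's Porter stemmer (any other stemmer would slot in the same way):

    from nltk.stem import PorterStemmer
    from sklearn.feature_extraction.text import CountVectorizer

    # Point 1: stop-word removal plus stemming, wired into the vectorizer.
    stemmer = PorterStemmer()
    base_analyzer = CountVectorizer(stop_words="english").build_analyzer()

    def stemming_analyzer(doc):
        return [stemmer.stem(token) for token in base_analyzer(doc)]

    stemmed = CountVectorizer(analyzer=stemming_analyzer)

    # Point 2: bi-grams alongside unigrams (note the larger vocabulary).
    bigrams = CountVectorizer(stop_words="english", ngram_range=(1, 2))

    text = ["Hong's Garden offers savory Chinese rice and noodles"]
    print(sorted(stemmed.fit(text).vocabulary_))
    print(sorted(bigrams.fit(text).vocabulary_))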

Libraries:

Unfortunately, I am not fluent enough with Python, but here are some libraries that might be helpful:

  • Lucene might help you a lot with the text analysis; for example, stemming can be done with its EnglishAnalyzer. There is a Python version of Lucene called PyLucene, which I believe might help you out.
  • Weka is an open source library that implements a lot of useful things for Machine Learning - many classifiers and feature selectors included.
  • LibSVM is a library that implements the SVM algorithm.
Uncounted answered 9/12, 2012 at 14:44
I'm not sure what you mean by "prone to large feature spaces", but LibSVM is not a very good choice for text classification because its training algorithm scales as O(n³) in the number of samples. Liblinear, by the same authors, is much better for this kind of task. Blatant ad for my own project: scikit-learn offers Python bindings for both, as well as implementations of nearly all the other algorithms you suggest. – Serpigo
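
To make that suggestion concrete, a minimal sketch assuming scikit-learn, where swapping LibSVM for Liblinear is a one-line change:

    from sklearn.svm import SVC, LinearSVC

    # SVC wraps LibSVM (roughly quadratic-to-cubic training time in the
    # number of samples); LinearSVC wraps Liblinear and scales far better
    # on large, sparse text problems.
    kernel_svm = SVC(kernel="linear")  # LibSVM-backed
    linear_svm = LinearSVC()           # Liblinear-backed, preferred for text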
