How do I compare phrases for similarity?

When entering a question, Stack Overflow presents you with a list of questions that it thinks are likely to cover the same topic. I have seen similar features on other sites and in other programs, too (help-file systems, for example), but I've never programmed anything like this myself. Now I'm curious to know what sort of algorithm one would use for that.

The first approach that comes to my mind is splitting the phrase into words and looking for phrases containing those words. Before you do that, you probably want to throw away insignificant words (like 'the', 'a', 'does', etc.), and then you will want to rank the results.

Hey, wait - let's do that for web pages, and then we can have a ... watchamacallit ... - a "search engine", and then we can sell ads, and then ...

No, seriously, what are the common ways to solve this problem?

Stringendo answered 16/9, 2008 at 9:4 Comment(0)

One approach is the so-called bag-of-words model.

As you guessed, first you count how many times each word appears in the text (usually called a document in NLP lingo). Then you throw out the so-called stop words, such as "the", "a", "or" and so on.

You're left with words and word counts. Do this for a while and you get a comprehensive set of words that appear in your documents. You can then create an index for these words: "aardvark" is 1, "apple" is 2, ..., "z-index" is 70092.

Now you can take your word bags and turn them into vectors. For example, if your document contains two occurrences of "aardvark" and nothing else, it would look like this:

[2 0 0 ... 70k zeroes ... 0].

After this you can compute the angle between two vectors using the dot product (this is cosine similarity: the cosine of the angle is the dot product divided by the product of the vector lengths). The smaller the angle, the closer the documents are.

This is a simple version, and there are other, more advanced techniques. May the Wikipedia be with you.
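The bag-of-words comparison above can be sketched in a few lines of Python. The stop-word list and documents here are made-up examples:

```python
from collections import Counter
import math

STOP_WORDS = {"the", "a", "or", "and", "of"}  # toy stop-word list

def bag_of_words(text):
    """Count words, ignoring stop words."""
    words = [w for w in text.lower().split() if w not in STOP_WORDS]
    return Counter(words)

def cosine_similarity(bag_a, bag_b):
    """Cosine of the angle between two count vectors: 1.0 = same direction."""
    dot = sum(bag_a[w] * bag_b[w] for w in bag_a)
    norm_a = math.sqrt(sum(c * c for c in bag_a.values()))
    norm_b = math.sqrt(sum(c * c for c in bag_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

doc1 = bag_of_words("how do i compare phrases for similarity")
doc2 = bag_of_words("comparing phrases for similarity")
doc3 = bag_of_words("aardvark aardvark")
print(cosine_similarity(doc1, doc2))  # shares several words: well above 0
print(cosine_similarity(doc1, doc3))  # no shared words: 0.0
```

Note there is no need to materialize the 70k-element vectors; only the words actually present in each bag contribute to the dot product.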

Aquileia answered 16/9, 2008 at 9:18 Comment(0)

@Hanno you should try the Levenshtein distance algorithm. Given an input string s and a list of strings t, iterate over each string u in t and return the one with the minimum Levenshtein distance from s.

http://en.wikipedia.org/wiki/Levenshtein_distance

See a Java implementation example in http://www.javalobby.org/java/forums/t15908.html
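For illustration, here is a plain dynamic-programming version of the distance and the nearest-match lookup in Python (no external library; the sample strings are invented):

```python
def levenshtein(a, b):
    """Edit distance between a and b, using two rolling rows of the DP table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def closest(s, candidates):
    """Return the candidate with the smallest edit distance to s."""
    return min(candidates, key=lambda u: levenshtein(s, u))

print(levenshtein("kitten", "sitting"))                    # 3
print(closest("color", ["colour", "velour", "colonel"]))   # colour
```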

Homocentric answered 16/9, 2008 at 10:11 Comment(0)

To augment the bag-of-words idea:

There are a few ways you can also pay attention to n-grams: strings of two or more words kept in order. You might want to do this because a search for "space complexity" is much more than a search for things with "space" AND "complexity" in them; the meaning of the phrase is more than the sum of its parts. If you get a result that talks about the complexity of outer space and the universe, it is probably not what the search for "space complexity" really meant.

A key idea from natural language processing here is that of mutual information, which allows you (algorithmically) to judge whether or not a phrase is really a specific phrase (such as "space complexity") or just words which are coincidentally adjacent. Mathematically, the main idea is to ask, probabilistically, if these words appear next to each other more often than you would guess by their frequencies alone. If you see a phrase with a high mutual information score in your search query (or while indexing), you can get better results by trying to keep these words in sequence.
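As a rough sketch of the idea (with a toy corpus invented for illustration), pointwise mutual information for a bigram can be computed like this:

```python
import math
from collections import Counter

# Toy corpus, invented for illustration; real systems use large corpora.
corpus = ("space complexity is low space complexity is high "
          "time is money outer space is vast space complexity matters").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n_uni = len(corpus)
n_bi = len(corpus) - 1

def pmi(w1, w2):
    """log2 of P(w1, w2) / (P(w1) * P(w2)).

    High when the words co-occur more often than their individual
    frequencies would predict, i.e. when they form a real phrase.
    """
    p_joint = bigrams[(w1, w2)] / n_bi
    if p_joint == 0:
        return float("-inf")
    p1 = unigrams[w1] / n_uni
    p2 = unigrams[w2] / n_uni
    return math.log2(p_joint / (p1 * p2))

print(pmi("space", "complexity"))  # high: a recurring phrase
print(pmi("space", "is"))          # lower: a coincidental adjacency
```

One caveat worth knowing: raw PMI overvalues very rare pairs, so practical systems usually require a minimum co-occurrence count before trusting the score.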

Nullification answered 16/9, 2008 at 13:8 Comment(0)

From my (rather small) experience developing full-text search engines: I would look up questions which contain some words from the query (in your case, the query is your question). Sure, noise words should be ignored, and we might want to check the query for 'strong' words like 'ASP.Net' to narrow down the search scope. Inverted indexes (http://en.wikipedia.org/wiki/Index_(search_engine)#Inverted_indices) are commonly used to find questions containing the words we are interested in.

After finding questions containing words from the query, we might want to calculate the distance between those words within each question, so that a question containing the text 'phrases similarity' ranks higher than one containing 'discussing similarity, you hear the following phrases...'.
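A toy version of this in Python (the questions and noise-word list are invented for illustration) might look like:

```python
from collections import defaultdict

NOISE = {"the", "a", "how", "do", "i", "to", "in"}  # made-up noise-word list

questions = {
    1: "how do i compare phrases for similarity",
    2: "how to parse xml in python",
    3: "measuring similarity between two phrases",
}

# Inverted index: map each significant word to the set of question ids
# whose text contains it.
index = defaultdict(set)
for qid, text in questions.items():
    for word in text.split():
        if word not in NOISE:
            index[word].add(qid)

def search(query):
    """Rank question ids by how many query words each question contains."""
    scores = defaultdict(int)
    for word in query.split():
        for qid in index.get(word, ()):
            scores[qid] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("phrases similarity"))  # questions 1 and 3 match both words
print(search("xml"))                 # only question 2 matches
```

The word-distance refinement described above would then be a second pass over just these candidates, using the stored word positions.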

Tegan answered 16/9, 2008 at 9:26 Comment(0)

Here is a bag-of-words solution with TfidfVectorizer in Python 3 (train_x is your list of training phrases and train_y their labels, defined elsewhere):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import svm
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords  # needed for stopwords.words below
s = set(stopwords.words('english'))

# Remove stop words from each training phrase.
train_x_cleaned = []
for i in train_x:
    sentence = filter(lambda w: w not in s, i.split())  # split on whitespace
    train_x_cleaned.append(' '.join(sentence))

vectorizer = TfidfVectorizer(binary=True)
train_x_vectors = vectorizer.fit_transform(train_x_cleaned)

print(vectorizer.get_feature_names_out())
print(train_x_vectors.toarray())

clf_svm = svm.SVC(kernel='linear')
clf_svm.fit(train_x_vectors, train_y)

test_x = vectorizer.transform(["test phrase 1", "test phrase 2", "test phrase 3"])
print(type(test_x))
clf_svm.predict(test_x)
Hognut answered 25/9, 2022 at 5:18 Comment(0)
