What tried and true algorithms for suggesting related articles are out there?

Pretty common situation, I'd wager. You have a blog or news site and you have plenty of articles or blags or whatever you call them, and you want to, at the bottom of each, suggest others that seem to be related.

Let's assume very little metadata about each item: no tags, no categories. Treat each one as one big blob of text, including the title and author name.

How do you go about finding the possibly related documents?

I'm rather interested in the actual algorithm, not ready-made solutions, although I'd be OK with taking a look at something implemented in Ruby or Python, or relying on MySQL or PostgreSQL.

edit: the current answer is pretty good but I'd like to see more. Maybe some really bare example code for a thing or two.

Chunky answered 10/8, 2009 at 12:38 Comment(3)
I'm turning out to be a terrible tagger. Tag edits very welcome. – Chunky
Check out contest.github.com for a ton of open-source solutions to a similar problem. – Frontispiece
There ya go, complete Ruby example added. – Ecuador
39

This is a pretty big topic -- in addition to the answers people come up with here, I recommend tracking down the syllabi for a couple of information retrieval classes and checking out the textbooks and papers assigned for them. That said, here's a brief overview from my own grad-school days:

The simplest approach is called bag of words. Each document is reduced to a sparse vector of {word: wordcount} pairs, and you can throw a Naive Bayes (or some other) classifier at the set of vectors that represents your set of documents, or compute similarity scores between each bag and every other bag (this is called k-nearest-neighbour classification). KNN is fast for lookup, but requires O(n^2) storage for the score matrix; however, for a blog, n isn't very large. For something the size of a large newspaper, KNN rapidly becomes impractical, so an on-the-fly classification algorithm is sometimes better. In that case, you might consider a ranking support vector machine. SVMs are neat because they don't constrain you to linear similarity measures, and are still quite fast.
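For a bare-bones illustration of the bag-of-words idea, each document can become a sparse {word => count} hash, with similarity as the cosine between two bags (a sketch, not anyone's production code):

```ruby
# Reduce a document to a sparse bag of words: {word => count}.
def bag_of_words(text)
  text.downcase.scan(/[a-z]+/).tally
end

# Cosine similarity between two bags: dot product over the product of norms.
def cosine(a, b)
  dot = a.sum { |word, count| count * b.fetch(word, 0) }
  norm = ->(bag) { Math.sqrt(bag.values.sum { |c| c * c }) }
  return 0.0 if dot.zero?
  dot / (norm.(a) * norm.(b))
end

a = bag_of_words("the cat sat on the mat")
b = bag_of_words("the cat ate the rat")
cosine(a, b) # between 0 and 1; higher means more word overlap
```

For "related articles", you would compute this score between the current article's bag and every other article's bag, then show the top few.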

Stemming is a common preprocessing step for bag-of-words techniques; this involves reducing morphologically related words, such as "cat" and "cats", "Bob" and "Bob's", or "similar" and "similarly", down to their roots before computing the bag of words. There are a bunch of different stemming algorithms out there; the Wikipedia page has links to several implementations.
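Just to show where stemming fits in the pipeline, here is a deliberately crude suffix-stripper; a real stemmer such as the Porter algorithm uses far more careful rules:

```ruby
# Naive stemming sketch: strip a few common English suffixes.
# Real stemmers handle many more cases (and avoid over-stripping).
def crude_stem(word)
  word.sub(/('s|ies|es|s|ly)$/, '')
end

%w[cats Bob's similarly].map { |w| crude_stem(w.downcase) }
# => ["cat", "bob", "similar"]
```

You would apply this to every token before building the bag of words, so "cat" and "cats" land in the same bucket.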

If bag-of-words similarity isn't good enough, you can abstract it up a layer to bag-of-N-grams similarity, where you create the vector that represents a document based on pairs or triples of words. (You can use 4-tuples or even larger tuples, but in practice this doesn't help much.) This has the disadvantage of producing much larger vectors, and classification will accordingly take more work, but the matches you get will be much closer syntactically. OTOH, you probably don't need this for semantic similarity; it's better for stuff like plagiarism detection. Chunking, or reducing a document down to lightweight parse trees, can also be used (there are classification algorithms for trees), but this is more useful for things like the authorship problem ("given a document of unknown origin, who wrote it?").
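Extracting the N-grams themselves is just a sliding window over the token stream, e.g.:

```ruby
# Bag-of-N-grams sketch: every run of n consecutive words becomes one "term".
def ngrams(text, n)
  text.downcase.scan(/[a-z]+/).each_cons(n).map { |gram| gram.join(' ') }
end

ngrams("the quick brown fox", 2)
# => ["the quick", "quick brown", "brown fox"]
```

The resulting grams can be fed into the same vector/cosine machinery as single words.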

Perhaps more useful for your use case is concept mining, which involves mapping words to concepts (using a thesaurus such as WordNet), then classifying documents based on similarity between concepts used. This often ends up being more efficient than word-based similarity classification, since the mapping from words to concepts is reductive, but the preprocessing step can be rather time-consuming.
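As a toy sketch of the mapping step (the thesaurus hash here is a made-up stand-in for a real resource like WordNet):

```ruby
# Hypothetical mini-thesaurus: many surface words map to one concept.
THESAURUS = {
  'car' => :vehicle, 'automobile' => :vehicle, 'truck' => :vehicle,
  'cat' => :animal,  'dog' => :animal
}

# Reduce a document to a bag of concepts instead of a bag of words.
def concepts(text)
  text.downcase.scan(/[a-z]+/).filter_map { |w| THESAURUS[w] }.tally
end

concepts("the cat chased the dog into the car")
# => {:animal=>2, :vehicle=>1}
```

The concept bags are then compared exactly like word bags, but "car" and "automobile" now count as the same term.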

Finally, there's discourse parsing, which involves parsing documents for their semantic structure; you can run similarity classifiers on discourse trees the same way you can on chunked documents.

These pretty much all involve generating metadata from unstructured text; doing direct comparisons between raw blocks of text is intractable, so people preprocess documents into metadata first.

Disharoon answered 10/8, 2009 at 13:25 Comment(1)
Are there any C++ libraries for these? Also, is this information still relatively up to date? – Harrison
5

You should read the book "Programming Collective Intelligence: Building Smart Web 2.0 Applications" (ISBN 0596529325)!

For some method and code: first ask yourself whether you want to find direct similarities based on word matches, or whether you want to show similar articles that may not directly relate to the current one but belong to the same cluster of articles.

See Cluster analysis / Partitional clustering.

A very simple (but theoretical and slow) method for finding direct similarities would be:

Preprocess:

  1. Store a flat word list per article (do not remove duplicate words).
  2. "Cross join" the articles: count the number of words in article A that match the same words in article B. You now have a matrix int word_matches[narticles][narticles] (you should not store it like that: the similarity of A->B is the same as B->A, so storing only one triangle of the matrix saves almost half the space).
  3. Normalize the word_matches counts to the range 0..1 (find the maximum count, then divide every count by it) - you should store floats there, not ints ;)

Find similar articles:

  1. Select the X articles with the highest scores for the current article from word_matches
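The steps above might be sketched in Ruby roughly like this (the articles and their word lists are made up):

```ruby
# Step 1: flat word lists per article, duplicates kept.
articles = {
  a: %w[ruby blog post about ruby],
  b: %w[python blog post],
  c: %w[python tips]
}

# Step 2: "cross join" - count words of A that also occur in B.
# A->B equals B->A, so only one triangle of pairs is stored.
word_matches = {}
articles.keys.combination(2) do |x, y|
  word_matches[[x, y]] = articles[x].count { |w| articles[y].include?(w) }
end

# Step 3: normalize counts to 0.0..1.0 by the maximum count.
max = word_matches.values.max.to_f
scores = word_matches.transform_values { |c| c / max }

# Find the most similar article pair involving :a.
related = scores.select { |pair, _| pair.include?(:a) }
                .max_by { |_, s| s }.first
# => [:a, :b]
```

This is the "theoretical and slow" version described above; for more than a handful of articles you would precompute and store the scores rather than rebuild them on every request.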
Lustful answered 11/12, 2009 at 1:47 Comment(0)
4

This is a typical case of document classification, which is studied in every machine-learning class. If you like statistics, mathematics and computer science, I recommend that you have a look at unsupervised methods like k-means++, Bayesian methods and LDA. In particular, Bayesian methods are pretty good at what you are looking for; their only problem is that they are slow (but unless you run a very large site, that shouldn't bother you much).

On a more practical and less theoretical note, I recommend having a look at this and this other great code example.

Colloquy answered 6/12, 2009 at 5:6 Comment(1)
Actually, I'd argue that this is much tougher than the normal definition of "document classification". Why? You're trying to hit a moving target: your classes are defined by the texts you are reading, not by predefined classes such as "spam" or "code" or "English". – Rayburn
3

A small vector-space-model search engine in Ruby. The basic idea is that two documents are related if they contain the same words. So we record which words occur in each document and then compute the cosine between these vectors (each term has a fixed index; if it appears in the document there is a 1 at that index, otherwise a 0). The cosine will be 1.0 if two documents use exactly the same terms, and 0.0 if they have no terms in common. You can translate that directly into % values.

terms = Hash.new { |h, k| h[k] = h.size }   # word -> fixed index
docs = DATA.collect { |line|
  name  = line[/^\d+/]
  words = line.downcase.scan(/[a-z]+/)
  vector = []
  words.each { |word| vector[terms[word]] = 1 }
  { :name => name, :vector => vector }
}
current = docs.first # or any other
# sort by cosine similarity, most similar first
# (assumes we have defined cosine on arrays)
sorted = docs.sort_by { |doc| -doc[:vector].cosine(current[:vector]) }
related = sorted[1..5].collect { |doc| doc[:name] }   # sorted[0] is the doc itself

puts related

__END__
0 Human machine interface for Lab ABC computer applications
1 A survey of user opinion of computer system response time
2 The EPS user interface management system
3 System and human system engineering testing of EPS
4 Relation of user-perceived response time to error measurement
5 The generation of random, binary, unordered trees
6 The intersection graph of paths in trees
7 Graph minors IV: Widths of trees and well-quasi-ordering
8 Graph minors: A survey

The definition of Array#cosine is left as an exercise for the reader (it should deal with nil values and different lengths, but for that we have Array#zip, right?).
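One possible Array#cosine along those lines, treating nil entries as 0 and letting zip pad the shorter vector (a sketch, not the author's own definition):

```ruby
class Array
  # Cosine similarity between two numeric arrays that may contain nils
  # (sparse entries) and may have different lengths.
  def cosine(other)
    a, b = self, other
    a, b = b, a if a.size < b.size        # zip pads from the longer side
    dot = norm_a = norm_b = 0.0
    a.zip(b) do |x, y|
      x = x || 0
      y = y || 0
      dot    += x * y
      norm_a += x * x
      norm_b += y * y
    end
    return 0.0 if norm_a == 0 || norm_b == 0
    dot / (Math.sqrt(norm_a) * Math.sqrt(norm_b))
  end
end
```

With this in place, the snippet above runs as written on the documents after `__END__`.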

BTW, the example documents are taken from the SVD paper by Deerwester et al. :)

Ecuador answered 6/12, 2009 at 11:59 Comment(0)
1

Some time ago I implemented something similar. The idea may be outdated by now, but I hope it can help.

I ran an ASP 3.0 website about common programming tasks and started from this principle: a user has a question, and will stay on the site as long as they can find interesting content on that subject.

When a user arrived, I started an ASP 3.0 Session object and recorded all of their navigation, like a linked list. At the Session.OnEnd event, I took the first link, looked up the next link, and incremented a counter column, like:

<Article Title="Cookie problem A">
    <NextPage Title="Cookie problem B" Count="5" />
    <NextPage Title="Cookie problem C" Count="2" />
</Article>

So, to list related articles I just had to take the top n NextPage entities, ordered by the counter column descending.
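The same bookkeeping can be sketched in Ruby (the page titles are just the ones from the XML example above):

```ruby
# Tally "read A, then read B" transitions per session,
# then recommend the most common next pages for each article.
next_page_count = Hash.new { |h, k| h[k] = Hash.new(0) }

def record_session(counts, pages_visited)
  # Each consecutive pair of pages in a session is one transition.
  pages_visited.each_cons(2) { |from, to| counts[from][to] += 1 }
end

record_session(next_page_count, ["Cookie problem A", "Cookie problem B"])
record_session(next_page_count, ["Cookie problem A", "Cookie problem B"])
record_session(next_page_count, ["Cookie problem A", "Cookie problem C"])

# Top n related articles for a page, by transition count descending.
related = next_page_count["Cookie problem A"]
            .sort_by { |_, c| -c }.first(2).map(&:first)
# => ["Cookie problem B", "Cookie problem C"]
```

Note this recommends by co-visitation rather than by text similarity, so it needs traffic data before it produces anything useful.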

Gauffer answered 5/12, 2009 at 23:22 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.