Probabilistic latent semantic analysis/Indexing - Introduction
Recently I found this link quite helpful for understanding the principles of LSA without too much math: http://www.puffinwarellc.com/index.php/news-and-articles/articles/33-latent-semantic-analysis-tutorial.html. It forms a good basis on which I can build further.

Currently, I'm looking for a similar introduction to Probabilistic Latent Semantic Analysis/Indexing — less math and more examples explaining the principles behind it. If you know of such an introduction, please let me know.

Can it be used to find the measure of similarity between sentences? Does it handle polysemy?

Is there a python implementation for the same?

Thank you.

Danettedaney answered 26/6, 2011 at 6:32 — Comments (2):
It doesn't seem to do PLSI, but I recommend gensim anyway. It's a Python library that implements classical LSI as well as Latent Dirichlet Allocation (LDA), a stronger document model designed to overcome weaknesses in PLSI. — Pursuit
@larsmans, Thank you for the pointer. I'm trying out LDA. It would be great if you could add the above as an answer :) — Danettedaney
There is a good talk by Thomas Hofmann that explains both LSA and its connections to Probabilistic Latent Semantic Analysis (PLSA). The talk has some math, but is much easier to follow than the PLSA paper (or even its Wikipedia page).

PLSA can be used to get some similarity measure between sentences, as two sentences can be viewed as short documents drawn from a probability distribution over latent classes. Your similarity will heavily depend on your training set, though. The documents you use to train the latent class model should reflect the types of documents you want to compare. Generating a PLSA model from two sentences won't create meaningful latent classes. Similarly, training on a corpus of very similar contexts may create latent classes that are overly sensitive to slight changes in the documents. Moreover, because sentences contain relatively few tokens (compared to documents), I don't believe you'll get high-quality similarity results from PLSA at the sentence level.
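The comparison step itself is straightforward once a model is trained: each document is reduced to a vector over latent classes, and the vectors are compared. Here is a minimal sketch using cosine similarity; the topic vectors are invented stand-ins for the P(topic | doc) distributions a trained PLSA/LDA model would produce:

```python
import math

def cosine(p, q):
    # Cosine similarity between two topic-probability vectors.
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0

# Hypothetical P(topic | doc) vectors over three latent classes.
doc_a = [0.7, 0.2, 0.1]
doc_b = [0.6, 0.3, 0.1]  # mostly the same topics as doc_a
doc_c = [0.1, 0.1, 0.8]  # dominated by a different topic

sim_ab = cosine(doc_a, doc_b)  # high: similar topic mix
sim_ac = cosine(doc_a, doc_c)  # low: different topic mix
```

Other divergences between distributions (e.g. Jensen-Shannon) are also commonly used in place of cosine.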

PLSA does not handle polysemy. However, if you are concerned with polysemy, you might try running a Word Sense Disambiguation tool over your input text to tag each word with its correct sense. Running PLSA (or LDA) over this tagged corpus will remove the effects of polysemy in the resulting document representations.
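The sense-tagging idea can be sketched as follows — a toy illustration only, not a real WSD tool; the sense inventory, tags, and cue words here are all invented. A real pipeline would use an actual disambiguation system and run the topic model over the tagged tokens:

```python
# Toy sense inventory: maps an ambiguous word to {context cue -> sense tag}.
SENSES = {
    "bank": {"money": "bank%finance", "river": "bank%geo"},
}

def tag_senses(tokens):
    """Replace each ambiguous token with a sense tag chosen from context cues."""
    tagged = []
    for tok in tokens:
        cues = SENSES.get(tok)
        if cues:
            context = set(tokens) - {tok}
            # Pick the first sense whose cue word appears in the sentence;
            # fall back to the surface form if no cue matches.
            sense = next((s for cue, s in cues.items() if cue in context), tok)
            tagged.append(sense)
        else:
            tagged.append(tok)
    return tagged

finance = tag_senses(["deposit", "money", "in", "the", "bank"])
geo = tag_senses(["row", "along", "the", "river", "bank"])
```

After tagging, "bank%finance" and "bank%geo" are distinct vocabulary items, so the topic model no longer conflates the two senses.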

As Sharmila noted, Latent Dirichlet allocation (LDA) is considered the state of the art for document comparison, and is superior to PLSA, which tends to overfit the training data. In addition, there are many more tools to support LDA and analyze whether the results you get with LDA are meaningful. (If you're feeling adventurous, you can read David Mimno's two papers from EMNLP 2011 on how to assess the quality of the latent topics you get from LDA.)

Bartell answered 28/7, 2011 at 11:5

© 2022 - 2024 — McMap. All rights reserved.