Cosine similarity and tf-idf

I am confused by the following comment about TF-IDF and Cosine Similarity.

I was reading up on both, and then on the wiki page for cosine similarity I found this sentence: "In the case of information retrieval, the cosine similarity of two documents will range from 0 to 1, since the term frequencies (tf-idf weights) cannot be negative. The angle between two term frequency vectors cannot be greater than 90°."

Now I'm wondering... aren't they two different things?

Is tf-idf already inside the cosine similarity? If yes, then what the heck - I can only see the dot products and Euclidean lengths.

I thought tf-idf was something you could do before running cosine similarity on the texts. Did I miss something?

Philender answered 6/6, 2011 at 17:36 Comment(3)
I found an awesome blog. It really helps.Apterous
Yes, tf-idf is just one of many ways to compute a (non-negative) feature vector for a text document, and cosine similarity is just one of several ways to compare feature vectors for similarity. Others include Jaccard, Pearson, Levenshtein... see e.g. lylelin317.wordpress.com/2014/03/11/…. Use of tf-idf does not imply cosine similarity, and vice versa.Melaniamelanic
Unfortunately that blog post didn't help and only made it less clear how to find the similarity between documents.Semiology

Tf-idf is a transformation you apply to texts to turn them into real-valued vectors. You can then obtain the cosine similarity of any pair of vectors by taking their dot product and dividing it by the product of their norms. That yields the cosine of the angle between the vectors.

If d2 and q are tf-idf vectors, then

cos θ = (d2 · q) / (||d2|| ||q||)

where θ is the angle between the vectors. As θ ranges from 0 to 90 degrees, cos θ ranges from 1 to 0. θ can only range from 0 to 90 degrees, because tf-idf vectors are non-negative.

There's no particularly deep connection between tf-idf and the cosine similarity/vector space model; tf-idf just works quite well with document-term matrices. It has uses outside of that domain, though, and in principle you could substitute another transformation in a VSM.

(Formula taken from Wikipedia, hence the d2.)
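For anyone who wants to see both steps in code, here is a minimal sketch (assuming NumPy and scikit-learn's TfidfVectorizer are available; the two example sentences are made up):

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat",    # plays the role of d2
        "the cat chased a mouse"]    # plays the role of q

tfidf = TfidfVectorizer().fit_transform(docs).toarray()
d2, q = tfidf[0], tfidf[1]

# cos(theta) = (d2 . q) / (||d2|| * ||q||)
cosine = np.dot(d2, q) / (np.linalg.norm(d2) * np.linalg.norm(q))
print(cosine)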

Domain answered 6/6, 2011 at 17:48 Comment(5)
Thanks then I wasn't wrong :D Nice to have your questions answered this fast instead of waiting for school^^Philender
Is this still the case: "There's no particularly deep connection between tf-idf and the cosine similarity/vector space model"? I wonder if there is any implication of normalizing tf-idf vectors, or whether using cosine similarity just happens to work well in practice?Beata
So if I were to create an information retrieval Python program from scratch, with a dataset of documents and queries: since I need two vectors for the cosine similarity, am I supposed to work out the TF-IDF for both the documents and the queries, or for the docs only?Watteau
I read somewhere and could validate that for tf-idf vectors, dot product will directly give you cosine similarity. Could you elaborate why?Jamisonjammal
@HosseinKalbasi Because they're L2-normalized as part of TF-IDF.Dionnadionne
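(Following up on the normalization point in the comment above: a small sketch, assuming scikit-learn's TfidfVectorizer with its default norm='l2'. Because every row is already unit-length, the plain dot product of two rows equals their cosine similarity.)

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

rows = TfidfVectorizer().fit_transform(["one short document",
                                        "another short document"]).toarray()
dot = np.dot(rows[0], rows[1])
cosine = dot / (np.linalg.norm(rows[0]) * np.linalg.norm(rows[1]))
print(dot, cosine)   # identical, because each row already has norm 1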

TF-IDF is a way to measure the importance of tokens in a text; it's a very common way to turn a document into a list of numbers (the term vector that provides one edge of the angle you're taking the cosine of).

To compute cosine similarity, you need two document vectors; the vectors represent each unique term with an index, and the value at that index is some measure of how important that term is to the document and to document similarity in general.

You could simply count the number of times each term occurred in the document (Term Frequency), and use that integer result for the term score in the vector, but the results wouldn't be very good. Extremely common terms (such as "is", "and", and "the") would cause lots of documents to appear similar to each other. (Those particular examples can be handled by using a stopword list, but other common terms that are not general enough to be considered stopwords cause the same sort of issue. On Stack Overflow, the word "question" might fall into this category. If you were analyzing cooking recipes, you'd probably run into issues with the word "egg".)

TF-IDF adjusts the raw term frequency by taking into account how frequently each term occurs in general (the Document Frequency). Inverse Document Frequency is usually the log of the number of documents divided by the number of documents the term occurs in (formula from Wikipedia):

idf(t) = log( N / df(t) ),  where N is the total number of documents and df(t) is the number of documents containing the term t

Think of the 'log' as a minor nuance that helps things work out in the long run -- it grows as its argument grows, so if the term is rare, the IDF will be high (lots of documents divided by very few documents), and if the term is common, the IDF will be low (lots of documents divided by lots of documents ~= 1, and the log of 1 is 0).

Say you have 100 recipes, and all but one require eggs. Now you have three more documents that all contain the word "egg": once in the first document, twice in the second, and once in the third. The term frequency for 'egg' in each document is 1 or 2, and the document frequency is 99 (or, arguably, 102, if you count the new documents; let's stick with 99).

The TF-IDF of 'egg' (using the natural log) is:

1 * log (100/99) = 0.01    # document 1
2 * log (100/99) = 0.02    # document 2
1 * log (100/99) = 0.01    # document 3

These are all pretty small numbers; in contrast, let's look at another word that only occurs in 9 documents of your 100-recipe corpus: 'arugula'. It occurs once in the first doc, twice in the second, and does not occur in the third document.

The TF-IDF for 'arugula' is:

1 * log (100/9) = 2.40  # document 1
2 * log (100/9) = 4.81  # document 2
0 * log (100/9) = 0     # document 3

'arugula' is really important for document 2, at least compared to 'egg'. Who cares how many times egg occurs? Everything contains egg! These term vectors are a lot more informative than simple counts, and they will result in documents 1 & 2 being much closer together (with respect to document 3) than they would be if simple term counts were used. In this case, the same result would probably arise (hey! we only have two terms here), but the difference would be smaller.
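To make the arithmetic above easy to reproduce, here is a tiny sketch (the helper name tf_idf is just for illustration; it uses the natural log and the counts from the example):

import math

def tf_idf(tf, n_docs, doc_freq):
    # raw term frequency times log(total documents / documents containing the term)
    return tf * math.log(n_docs / doc_freq)

print(tf_idf(1, 100, 99))  # 'egg', document 1      -> ~0.01
print(tf_idf(2, 100, 99))  # 'egg', document 2      -> ~0.02
print(tf_idf(1, 100, 9))   # 'arugula', document 1  -> ~2.4
print(tf_idf(2, 100, 9))   # 'arugula', document 2  -> ~4.8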

The take-home here is that TF-IDF generates more useful measures of a term in a document, so you don't focus on really common terms (stopwords, 'egg') and lose sight of the important terms ('arugula').

Wilkes answered 26/10, 2013 at 22:5 Comment(1)
The actual formula, the one used by sklearn's TfIdf, is TF + TF*IDF.Deweydewhirst

The complete mathematical procedure for cosine similarity is explained in these tutorials

Suppose you want to calculate the cosine similarity between two documents. The first step is to calculate the tf-idf vectors of the two documents; then take the dot product of these two vectors and divide it by the product of their norms. Those tutorials will help you :)
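As a concrete illustration of that workflow, here is a sketch (assuming scikit-learn; fitting the vectorizer on the documents and reusing it to transform the query is the usual approach, and the example texts are made up):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["how to boil an egg", "baking bread at home", "egg fried rice"]
query = ["egg recipes"]

vectorizer = TfidfVectorizer().fit(docs)
doc_vectors = vectorizer.transform(docs)
query_vector = vectorizer.transform(query)

# one similarity score per document; the highest score is the best match for the query
print(cosine_similarity(query_vector, doc_vectors))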

Veliger answered 3/12, 2014 at 5:41 Comment(0)

tf-idf weighting has some cases where it fails and generates NaN errors while computing. It's very important to read this: http://www.p-value.info/2013/02/when-tfidf-and-cosine-similarity-fail.html
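One common way this happens (an assumption on my part; the linked post may cover other cases): if every term in a document also appears in every other document, each idf is log(1) = 0, so the document's tf-idf vector is all zeros and the cosine denominator becomes zero. A tiny sketch with made-up numbers:

import numpy as np

zero_vec = np.zeros(5)   # a document whose terms all have idf = 0 ends up as an all-zero vector
other = np.array([0.3, 0.0, 0.7, 0.0, 0.1])

# the denominator is 0, so the division yields nan (NumPy also emits a runtime warning)
print(np.dot(zero_vec, other) / (np.linalg.norm(zero_vec) * np.linalg.norm(other)))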

Ormandy answered 7/6, 2013 at 22:43 Comment(0)

Tf-idf is just used to build vectors from the documents, based on tf (Term Frequency, which counts how many times a term occurs in a document) and idf (Inverse Document Frequency, which discounts terms that appear in many documents across the whole collection).

Then you can find the cosine similarity between the documents.

Bogie answered 20/6, 2016 at 11:29 Comment(0)

TF-IDF gives you a document-term matrix of tf-idf weights; computing cosine similarity against that matrix returns the most similar listings.

Chronological answered 13/11, 2020 at 19:0 Comment(0)
