I came across several methods for measuring semantic similarity that use the structure and hierarchy of WordNet, e.g. the Jiang-Conrath measure (JCN), the Resnik measure (RES), and the Lin measure (LIN).
The way they are computed with NLTK is (`entry1` and `entry2` are WordNet synsets):

    from nltk.corpus import wordnet as wn
    from nltk.corpus import wordnet_ic

    brown_ic = wordnet_ic.ic('ic-brown.dat')  # IC counts derived from the Brown Corpus
    sim2 = entry1.jcn_similarity(entry2, brown_ic)
    sim3 = entry1.res_similarity(entry2, brown_ic)
    sim4 = entry1.lin_similarity(entry2, brown_ic)
If WordNet is the basis for calculating semantic similarity, what is the use of the Brown Corpus here?
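To make the role of the corpus concrete, here is a toy sketch (not NLTK's actual implementation) of how corpus-based information content is computed, roughly in the style of Resnik: WordNet supplies the taxonomy, while the external corpus supplies the frequencies that turn into probabilities. The tiny `corpus` and `hyponyms` taxonomy below are invented for illustration.

```python
import math
from collections import Counter

# Toy corpus and a one-level taxonomy; in the real measure, the
# taxonomy is WordNet's hypernym/hyponym hierarchy.
corpus = ["dog", "dog", "cat", "dog", "cat", "animal"]
hyponyms = {"animal": ["dog", "cat"], "dog": [], "cat": []}
raw = Counter(corpus)
total = len(corpus)

def p(concept):
    # A concept's probability counts every occurrence of the concept
    # *or any of its hyponyms*, so higher concepts are more probable.
    return (raw[concept] + sum(raw[h] for h in hyponyms[concept])) / total

def ic(concept):
    # Information content: rarer (more specific) concepts are more
    # informative; the root covers everything, so its IC is 0.
    return -math.log(p(concept))
```

The `brown_ic` object in NLTK packages exactly this kind of corpus-derived frequency information for every WordNet synset, which is why a similarity measure over WordNet's structure still needs a text corpus.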
Couldn't `wn_ic = wn.ic(wn)` be used instead? Or, to have a valid similarity measurement, must the information content come from a text (e.g. Brown) that is not WordNet? I ask because the paper you refer to says: "We feel that WordNet can also be used as a statistical resource with no need for external ones." – Thurlow
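The alternative the comment alludes to, an intrinsic (corpus-free) information content in the style of Seco et al., can be sketched as follows; the formula and the example numbers are illustrative assumptions, not NLTK code.

```python
import math

def intrinsic_ic(num_hyponyms, total_concepts):
    # Intrinsic IC uses only the taxonomy: a concept with many
    # hyponyms is abstract and carries little information, while a
    # leaf (no hyponyms) gets the maximal IC of 1.0.
    return 1.0 - math.log(num_hyponyms + 1) / math.log(total_concepts)
```

Under this scheme the hierarchy itself plays the role that corpus frequencies play in `brown_ic`, which is what the quoted paper means by using WordNet "as a statistical resource".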