sklearn matrix factorization example
I am using the code given at http://www.quuxlabs.com/blog

It gives good results, and I can clearly see what changes have happened in the matrix.

I also tried the sklearn library's sklearn.decomposition.NMF, but the results I get with the same input are not as good. Maybe I am missing something.

Here is my sample code:

import numpy
from sklearn.decomposition import NMF, ProjectedGradientNMF

R = [
     [5,3,0,1],
     [4,0,0,1],
     [1,1,0,5],
     [1,0,0,4],
     [0,1,5,4],
    ]
R = numpy.array(R)

nmf = NMF(beta=0.001, eta=0.0001, init='random', max_iter=2000,
          nls_max_iter=20000, random_state=0, sparseness=None, tol=0.001)
nR = nmf.fit_transform(R)
print nR
print
print nmf.reconstruction_err_
print

It is not preserving the existing/filled values in the matrix, which I can clearly see when I use the code from the blog.

Can someone help me understand?

Shepley answered 15/4, 2015 at 11:25 Comment(0)

Hmmm... very dumb of me! I went through nmf.py and found out that fit_transform returns only W, and nmf.components_ holds H. The dot product of those gives the new R.

import numpy
from sklearn.decomposition import NMF

R = [
     [5,3,0,1],
     [4,0,0,1],
     [1,1,0,5],
     [1,0,0,4],
     [0,1,5,4],
    ]
R = numpy.array(R)

nmf = NMF()
W = nmf.fit_transform(R)   # W: sample-by-component matrix
H = nmf.components_        # H: component-by-feature matrix
nR = numpy.dot(W, H)       # reconstructed R
print nR
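
To connect this back to the original concern about the filled-in values, a quick sanity check (a small sketch reusing R, W, H and nR from the snippet above, not part of the original answer) is to look at the factor shapes and at the error only on the entries that were actually filled:

# Sketch: check the factor shapes and the fit on the observed entries.
# Assumes R, W, H and nR from the snippet above.
print W.shape, H.shape, nR.shape   # (5, k), (k, 4), (5, 4)
mask = R > 0                       # positions that were filled in
print numpy.abs(R - nR)[mask]      # per-entry error on those positions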
Shepley answered 15/4, 2015 at 12:39 Comment(2)
You are not dumb. The documentation isn't good if we have to read the source. (Besides, we should always read the code and understand how it works rather than treating it like a black box.) – Krever
This seems to be an old answer. Did you notice that the results of quuxlabs.com/blog and the sklearn code are different for 0 values? – Twyla
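
For context on that last comment: as I understand it, the quuxlabs code only updates on the non-zero (observed) entries and leaves the zeros to be predicted, whereas sklearn's NMF treats every zero as an observed value that the factorization must approximate, so the two naturally disagree on the empty cells. Below is a minimal sketch of that masked gradient-descent idea; the function name and hyperparameters are illustrative, not taken from the blog.

import numpy

def masked_factorization(R, k=2, steps=5000, alpha=0.002, lam=0.02):
    # Factor R ~= P.dot(Q), updating only on the observed (non-zero) entries,
    # so the zeros are treated as missing and come out as predictions.
    n, m = R.shape
    rng = numpy.random.RandomState(0)
    P = rng.rand(n, k)
    Q = rng.rand(k, m)
    for _ in range(steps):
        for i in range(n):
            for j in range(m):
                if R[i, j] > 0:                        # skip missing cells
                    e = R[i, j] - P[i, :].dot(Q[:, j])
                    P_old = P[i, :].copy()
                    P[i, :] += alpha * (2 * e * Q[:, j] - lam * P[i, :])
                    Q[:, j] += alpha * (2 * e * P_old - lam * Q[:, j])
    return P, Q

R = numpy.array([[5, 3, 0, 1],
                 [4, 0, 0, 1],
                 [1, 1, 0, 5],
                 [1, 0, 0, 4],
                 [0, 1, 5, 4]], dtype=float)
P, Q = masked_factorization(R)
print numpy.round(P.dot(Q), 2)   # filled cells are fit directly; zeros were never fit, so they are predictions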
