Numeric example of the Expectation Maximization Algorithm [duplicate]
Could anyone provide a simple numeric example of the EM algorithm, as I am not sure about the formulas given? A really simple one with 4 or 5 Cartesian coordinates would do perfectly.

Insectarium answered 11/2, 2013 at 12:8 Comment(4)
Which of the EM variants do you mean? The common Mixtures-of-Gaussians clustering algorithm? What have you understood? Is the Mahalanobis distance totally clear yet? Sway
Right, I need the Gaussian Mixture Model. Well, I think I get the intuition (high level), but I just can't apply the formulas to a simple example. Insectarium
Try this Tutorial. It does just one step (and it does not recompute the matrices!), but I think it will answer some of your questions and is quite visual. Sway
EM is so generic. Here's another EM: bioen.utah.edu/wiki/images/7/7e/HW3_Miaomiao_ZHANG.pdf Kidwell

what about this: http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Clustering/Expectation_Maximization_(EM)#A_simple_example
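
Since you asked for something with only a handful of points: here is a minimal sketch of a single EM iteration on five 1-D points, using the same two-component Gaussian-mixture updates as the script further down. The five points and all starting values are made up purely for illustration:

# One E-step and one M-step of a two-component, 1-D Gaussian mixture.
# The data and starting parameters below are illustrative assumptions.
x <- c(1.0, 1.5, 2.0, 8.0, 9.0)
mu1 <- 1.0; mu2 <- 9.0   # initial component means
sd1 <- 1.0; sd2 <- 1.0   # initial component standard deviations
pi1 <- 0.5               # initial mixing proportion of component 2

# E-step: responsibility of component 2 for each point
gamma <- pi1 * dnorm(x, mu2, sd2) /
    ((1 - pi1) * dnorm(x, mu1, sd1) + pi1 * dnorm(x, mu2, sd2))

# M-step: responsibility-weighted re-estimates of all parameters
mu1 <- sum((1 - gamma) * x) / sum(1 - gamma)
mu2 <- sum(gamma * x) / sum(gamma)
sd1 <- sqrt(sum((1 - gamma) * (x - mu1) ^ 2) / sum(1 - gamma))
sd2 <- sqrt(sum(gamma * (x - mu2) ^ 2) / sum(gamma))
pi1 <- mean(gamma)

print(round(gamma, 3))             # ~0 for the left cluster, ~1 for the right
print(c(mu1, mu2, sd1, sd2, pi1))  # means move toward roughly 1.5 and 8.5

Repeating these two steps until the parameters stop moving is the whole algorithm.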

I had also written a simple example in R a year ago; unfortunately, I am unable to locate it. I'll try again to find it later.

EDIT: Here it is -

EM <- function()
{
    ### Read file, get necessary cols
    dataFile <- read.csv("wine.csv", head = FALSE, sep = ",")
    sl <- dataFile[, 2]
    #sw <- dataFile[, 3]
    #pl <- dataFile[, 3]
    #pw <- dataFile[, 4]
    class <- dataFile[, 5]
    N <- length(sl)
    pi1 <- 0.5

    ### Init: two random observations as starting means ###
    rand1 <- sample(1:N, 1)  # floor(runif(1) * N) could return 0, an invalid index
    rand2 <- sample(1:N, 1)
    mu1 <- sl[rand1]
    mu2 <- sl[rand2]
    mean1 <- sum(sl) / N
    sigma1 <- sum((sl - mean1) ^ 2) / N  # sample variance as the starting value
    sigma2 <- sigma1
    print(mu1)
    print(mu2)
    print(sigma1)
    print(sigma2)

    COUNTLIM <- 10
    count <- 1
    prevmu1 <- 0.0
    prevmu2 <- 0.0
    prevsigma1 <- 0.0
    prevsigma2 <- 0.0
    gamma <- array(0, length(sl))

    while (count <= COUNTLIM)
    {
        ### E-step: responsibility of component 2 for each point.
        ### dnorm() expects a standard deviation, so pass sqrt() of the variances.
        gamma <- pi1 * dnorm(sl, mu2, sqrt(sigma2)) /
            ((1 - pi1) * dnorm(sl, mu1, sqrt(sigma1)) + pi1 * dnorm(sl, mu2, sqrt(sigma2)))

        ### M-step: re-estimate means, variances, and the mixing proportion
        mu1 <- sum((1 - gamma) * sl) / sum(1 - gamma)
        mu2 <- sum(gamma * sl) / sum(gamma)
        sigma1 <- sum((1 - gamma) * (sl - mu1) ^ 2) / sum(1 - gamma)
        sigma2 <- sum(gamma * (sl - mu2) ^ 2) / sum(gamma)
        pi1 <- sum(gamma) / N
        print(c(mu1, mu2, sigma1, sigma2, pi1))

        ### Stop once the parameters barely move between iterations
        if (count > 1) {
            val <- ((prevmu1 - mu1) ^ 2 + (prevmu2 - mu2) ^ 2 +
                    (prevsigma1 - sigma1) ^ 2 + (prevsigma2 - sigma2) ^ 2) ^ 0.5
            print(c("val: ", val))
            if (val <= 1) {
                break
            }
        }
        prevmu1 <- mu1
        prevmu2 <- mu2
        prevsigma1 <- sigma1
        prevsigma2 <- sigma2
        count <- count + 1
    }

    print(mu1)
    print(mu2)
    print(sigma1)
    print(sigma2)
}
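
To run it, assuming wine.csv is the UCI file linked in the comments below, saved under that name in the working directory:

EM()

Each pass of the loop prints the current means, variances, and mixing proportion, so you can watch the estimates converge iteration by iteration.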
Kahn answered 11/2, 2013 at 14:49 Comment(2)
Could you link to (and/or describe) the wine.csv data? I suppose I found it; is it archive.ics.uci.edu/ml/machine-learning-databases/wine/… ? Riki
Yep, sorry, that's the one. Kahn
