Calculate how a value differs from the average of values using the Gaussian Kernel Density (Python)

I use this code to generate the values on which I calculate a Gaussian kernel density:

from random import randint
x_grid=[]
for i in range(1000):
    x_grid.append(randint(0,4))
print (x_grid)

This is the code that calculates the Gaussian kernel density estimate:

from statsmodels.nonparametric.kde import KDEUnivariate
import matplotlib.pyplot as plt

def kde_statsmodels_u(x, x_grid, bandwidth=0.2, **kwargs):
    """Univariate Kernel Density Estimation with Statsmodels"""
    kde = KDEUnivariate(x)
    kde.fit(bw=bandwidth, **kwargs)
    return kde.evaluate(x_grid)

import numpy as np
from scipy.stats.distributions import norm

# The grid we'll use for plotting
from random import randint
x_grid=[]
for i in range(1000):
    x_grid.append(randint(0,4))
print (x_grid)

# Draw points from a bimodal distribution in 1D
np.random.seed(0)
x = np.concatenate([norm(-1, 1.).rvs(400),
                    norm(1, 0.3).rvs(100)])

pdf_true = (0.8 * norm(-1, 1).pdf(x_grid) +
            0.2 * norm(1, 0.3).pdf(x_grid))

# Plot the kernel density estimate against the true pdf
fig, ax = plt.subplots(1, 2, sharey=True, figsize=(13, 8))
fig.subplots_adjust(wspace=0)

pdf=kde_statsmodels_u(x, x_grid, bandwidth=0.2)
ax[0].plot(x_grid, pdf, color='blue', alpha=0.5, lw=3)
ax[0].fill(x_grid, pdf_true, ec='gray', fc='gray', alpha=0.4)
ax[0].set_title("kde_statsmodels_u")
ax[0].set_xlim(-4.5, 3.5)

plt.show()

All the values in the grid are between 0 and 4. If I receive a new value of 5, I want to calculate how much that value differs from the average of the values and assign it a score between 0 and 1 (by setting a threshold).

So if I receive 5 as a new value, its score should be close to 0.90, while if I receive 500 as a new value, its score should be close to 0.0.

How can I do that? Is my function for calculating the Gaussian kernel density correct, or is there a better way/library to do it?

* UPDATE * I read an example in a paper. The weight of a washing machine is typically around 100 kg. Vendors usually use the kg unit to also refer to its capacity (e.g. 9 kg). For a human it is easy to understand that 9 kg is the capacity and not the total weight of the washing machine. We can “fake” this form of intelligence without deep language understanding by instead modeling a distribution of values over the training data for each attribute.

For a given attribute a (the weight of a washing machine, for example), let Va = {va1, va2, ..., van} (|Va| = n) be the set of values of attribute a corresponding to products in the training data. If a new value v is intuitively “close” to (the distribution estimated from) Va, then we should feel more confident assigning this value to a (e.g. the weight of a washing machine).

An idea could be to measure the number of standard deviations by which the new value v differs from the average of the values in Va, but a better one could be to model a (Gaussian) kernel density on Va and then express the support at a new value v as the density at that point:

S(v, Va) = (1/Z) * Σ_{k=1..n} exp( -(v - v_ak)² / (2 * σ²_ak) )

where σ²_ak is the variance of the kth Gaussian, and Z is a constant to make sure S(v, Va) ∈ [0, 1]. How can I obtain this in Python using the statsmodels library?
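
To make it concrete, this is roughly what I have in mind with statsmodels (a rough sketch; normalizing by the maximum of the estimated density is only my guess for the constant Z):

import numpy as np
from random import randint
from statsmodels.nonparametric.kde import KDEUnivariate

# training values Va for the attribute (toy data, like the grid above)
Va = np.asarray([randint(0, 4) for _ in range(1000)], dtype=float)

kde = KDEUnivariate(Va)
kde.fit(bw=0.2)  # same fixed bandwidth as in kde_statsmodels_u

def support_score(v):
    """Density at v, scaled by the maximum density so the score lies in [0, 1]."""
    density_at_v = kde.evaluate(np.asarray([v], dtype=float))[0]
    return density_at_v / kde.density.max()

print(support_score(2))    # a value well inside the training data
print(support_score(5))    # a value just outside the observed range
print(support_score(500))  # a value far outside the observed range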

* UPDATE 2 * An example of the data (though I don't think it is very important), generated by this code:

from random import randint
x_grid=[]
for i in range(1000):
    x_grid.append(randint(1,3))
print (x_grid)

[2, 2, 1, 2, 2, 3, 1, 1, 1, 2, 2, 2, 1, 1, 3, 3, 1, 2, 1, 3, 2, 3, 3, 1, 2, 3, 1, 1, 3, 2, 2, 1, 1, 1, 2, 3, 2, 1, 2, 3, 3, 2, 2, 3, 3, 2, 2, 1, 2, 1, 2, 2, 3, 3, 1, 1, 2, 3, 3, 2, 1, 2, 3, 3, 3, 3, 2, 1, 3, 2, 2, 1, 3, 3, 1, 2, 1, 3, 2, 3, 3, 1, 2, 3, 3, 2, 1, 2, 3, 2, 1, 1, 2, 1, 1, 2, 3, 2, 1, 2, 2, 2, 3, 2, 3, 3, 1, 1, 3, 2, 1, 1, 3, 3, 3, 2, 1, 2, 2, 1, 3, 2, 3, 1, 3, 1, 2, 3, 1, 3, 2, 2, 1, 1, 2, 2, 3, 1, 1, 3, 2, 2, 1, 2, 1, 2, 3, 1, 3, 3, 1, 2, 1, 2, 1, 3, 1, 3, 3, 2, 1, 1, 3, 2, 2, 2, 3, 2, 1, 3, 2, 1, 1, 3, 3, 3, 2, 1, 1, 3, 2, 1, 2, 2, 2, 1, 3, 1, 3, 2, 3, 1, 2, 1, 1, 2, 2, 2, 3, 3, 3, 3, 2, 2, 2, 3, 1, 1, 2, 2, 1, 1, 1, 3, 3, 3, 3, 1, 3, 1, 3, 1, 1, 1, 2, 1, 2, 1, 1, 2, 1, 3, 1, 2, 3, 1, 3, 2, 2, 2, 2, 2, 1, 1, 2, 3, 1, 1, 1, 3, 1, 3, 2, 2, 3, 1, 3, 3, 2, 2, 3, 2, 1, 2, 1, 1, 1, 2, 2, 3, 2, 1, 1, 3, 1, 2, 1, 3, 3, 3, 1, 2, 2, 2, 1, 1, 2, 2, 1, 2, 3, 1, 3, 2, 2, 2, 2, 2, 2, 1, 3, 1, 3, 3, 2, 3, 2, 1, 3, 3, 3, 3, 3, 1, 2, 2, 2, 1, 1, 3, 2, 3, 1, 2, 3, 2, 3, 2, 1, 1, 3, 3, 1, 1, 2, 3, 2, 3, 3, 2, 3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 2, 1, 1, 2, 3, 2, 3, 1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 1, 3, 1, 1, 2, 3, 1, 1, 2, 3, 1, 2, 3, 1, 2, 1, 3, 3, 2, 2, 3, 3, 3, 2, 1, 1, 2, 2, 3, 2, 3, 2, 1, 1, 1, 1, 2, 3, 1, 3, 3, 3, 2, 1, 2, 3, 1, 2, 1, 1, 2, 3, 3, 1, 1, 3, 2, 1, 3, 3, 2, 1, 1, 3, 1, 3, 1, 2, 2, 1, 3, 3, 2, 3, 1, 1, 3, 1, 2, 2, 1, 3, 2, 3, 1, 1, 3, 1, 3, 1, 2, 1, 3, 2, 2, 2, 2, 1, 3, 2, 1, 3, 3, 2, 3, 2, 1, 3, 1, 2, 1, 2, 3, 2, 3, 2, 3, 3, 2, 3, 3, 1, 1, 3, 2, 3, 2, 2, 2, 3, 1, 3, 2, 2, 3, 3, 2, 3, 2, 2, 2, 3, 3, 1, 3, 2, 3, 1, 1, 2, 1, 3, 1, 2, 2, 3, 3, 1, 3, 1, 1, 2, 2, 1, 3, 3, 3, 1, 2, 2, 2, 1, 3, 1, 2, 2, 2, 3, 3, 3, 1, 1, 2, 3, 3, 1, 1, 2, 3, 2, 3, 3, 2, 2, 1, 3, 3, 3, 3, 2, 3, 1, 3, 3, 2, 1, 3, 2, 1, 1, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 2, 3, 3, 3, 2, 1, 3, 1, 1, 1, 1, 3, 1, 2, 3, 3, 3, 2, 3, 1, 2, 2, 2, 3, 2, 1, 2, 3, 3, 2, 3, 3, 1, 2, 3, 3, 3, 3, 2, 3, 3, 2, 1, 1, 1, 2, 3, 1, 3, 3, 2, 1, 3, 3, 3, 2, 2, 1, 2, 3, 2, 3, 3, 3, 3, 2, 3, 2, 1, 2, 1, 1, 3, 3, 3, 2, 2, 3, 1, 3, 2, 1, 3, 1, 1, 3, 3, 1, 2, 2, 2, 3, 3, 1, 2, 1, 2, 1, 3, 2, 3, 3, 3, 3, 3, 3, 3, 1, 2, 3, 1, 3, 3, 2, 2, 1, 3, 1, 1, 3, 2, 1, 2, 3, 2, 1, 3, 3, 3, 2, 3, 1, 2, 3, 3, 1, 2, 2, 2, 3, 1, 2, 1, 1, 1, 3, 1, 3, 1, 3, 3, 2, 3, 1, 3, 2, 3, 3, 1, 2, 1, 3, 2, 2, 2, 2, 2, 2, 1, 2, 2, 3, 2, 2, 3, 2, 2, 2, 3, 1, 1, 3, 3, 1, 3, 1, 2, 1, 2, 1, 3, 2, 2, 1, 3, 1, 3, 3, 1, 3, 1, 1, 1, 1, 3, 2, 1, 2, 3, 1, 1, 3, 1, 1, 3, 1, 3, 3, 3, 1, 1, 3, 1, 3, 2, 2, 2, 1, 1, 2, 3, 3, 2, 3, 3, 1, 2, 3, 2, 2, 3, 1, 2, 2, 2, 1, 1, 3, 1, 2, 2, 2, 1, 1, 2, 3, 1, 3, 1, 1, 3, 2, 2, 3, 2, 2, 3, 3, 1, 1, 2, 2, 3, 1, 1, 2, 3, 2, 2, 3, 1, 2, 2, 1, 1, 3, 2, 3, 1, 1, 3, 1, 3, 2, 3, 3, 3, 3, 3, 2, 2, 3, 2, 1, 1, 1, 3, 3, 1, 2, 1, 3, 2, 3, 2, 2, 1, 2, 3, 3, 1, 1, 1, 1, 3, 3, 1, 3, 3, 1, 1, 3, 1, 3, 1, 3, 2, 3, 1, 3, 3, 3, 1, 1, 2, 2, 3, 2, 3, 2, 2, 1, 2, 1, 2, 1, 2, 2, 3, 1, 1, 3, 2, 2, 3, 2, 3, 3, 2, 2, 2, 2, 2, 2, 3, 2, 3, 1, 2, 2, 1, 1, 2, 3, 3, 1, 3, 3, 1, 3, 3, 1, 3, 2, 2, 2, 1, 1, 2, 1, 3, 1, 1, 1, 2, 3, 3, 2, 3, 1, 3]

This array represents the RAM of new smartphones on the market... usually they have 1, 2 or 3 GB of RAM.

That's the kernel density

[image: kernel density plot of the data]

* UPDATE *

I tried the code with these values:

[1024, 1, 1024, 1000, 1024, 128, 1536, 16, 192, 2048, 2000, 2048, 24, 250, 256, 278, 288, 290, 3072, 3, 3000, 3072, 32, 384, 4096, 4, 4096, 448, 45, 512, 576, 64, 768, 8, 96]

The values are all in MB... Do you think this is working well? I think I must set a threshold (see the sketch after the table below).

      100%      cdfv      kdev
1       42  0.210097  0.499734
1024    96  0.479597  0.499983
5000     0  0.000359  0.498885
2048    36  0.181609  0.499700
3048     8  0.040299  0.499424
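
By threshold I mean something like this rough sketch (the 0.10 cutoff is an arbitrary example; score is 2 * cdfv, which seems to be what the first column of the table shows as a percentage):

import numpy as np

# new values and their cdfv from the table above
values = np.array([1, 1024, 5000, 2048, 3048])
cdfv = np.array([0.210097, 0.479597, 0.000359, 0.181609, 0.040299])

score = 2 * cdfv      # rescaled so a value near the median of the data scores ~1
threshold = 0.10      # arbitrary cutoff, would have to be tuned

for v, s in zip(values, score):
    print(v, round(s, 3), "accept" if s >= threshold else "reject")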

* UPDATE 3 *

[256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 512, 512, 512, 256, 256, 256, 512, 512, 512, 128, 128, 128, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 2048, 2048, 2048, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 128, 128, 128, 512, 512, 512, 256, 256, 256, 256, 256, 256, 1024, 1024, 1024, 512, 512, 512, 128, 128, 128, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 4, 4, 4, 3, 3, 3, 24, 24, 24, 8, 8, 8, 16, 16, 16, 16, 16, 16, 256, 256, 256, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 512, 512, 512, 512, 512, 512, 256, 256, 256, 256, 256, 256, 256, 256, 256, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 2048, 2048, 2048, 2048, 2048, 2048, 4096, 4096, 4096, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 768, 768, 768, 768, 768, 768, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 1024, 1024, 1024, 512, 512, 512, 256, 256, 256, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 3072, 3072, 3072, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 512, 512, 512, 256, 256, 256, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 512, 512, 512, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 64, 64, 64, 1024, 1024, 1024, 1024, 1024, 1024, 256, 256, 256, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 
1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 64, 64, 64, 64, 64, 64, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 128, 128, 128, 576, 576, 576, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 576, 576, 576, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 512, 512, 512, 2048, 2048, 2048, 768, 768, 768, 768, 768, 768, 768, 768, 768, 512, 512, 512, 192, 192, 192, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 384, 384, 384, 448, 448, 448, 576, 576, 576, 384, 384, 384, 288, 288, 288, 768, 768, 768, 384, 384, 384, 288, 288, 288, 64, 64, 64, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 2048, 2048, 2048, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 64, 64, 64, 128, 128, 128, 128, 128, 128, 128, 128, 128, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 256, 256, 256, 768, 768, 768, 768, 768, 768, 768, 768, 768, 256, 256, 256, 192, 192, 192, 256, 256, 256, 64, 64, 64, 256, 256, 256, 192, 192, 192, 128, 128, 128, 256, 256, 256, 192, 192, 192, 288, 288, 288, 288, 288, 288, 288, 288, 288, 288, 288, 288, 128, 128, 128, 128, 128, 128, 384, 384, 384, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 32, 32, 32, 768, 768, 768, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 3072, 3072, 3072, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 512, 512, 512, 512, 512, 512, 256, 256, 256, 512, 512, 512, 512, 512, 512, 512, 512, 512, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 128, 128, 128, 128, 128, 128, 1024, 1024, 1024, 1024, 1024, 1024, 128, 128, 128, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 3072, 3072, 3072, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 256, 256, 256, 256, 256, 256, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 2048, 2048, 2048, 384, 384, 384, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 1024, 
1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 128, 128, 128, 256, 256, 256, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 768, 768, 768, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 128, 128, 128, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 64, 64, 64, 64, 64, 64, 256, 256, 256, 512, 512, 512, 512, 512, 512, 512, 512, 512, 16, 16, 16, 3072, 3072, 3072, 3072, 3072, 3072, 256, 256, 256, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 512, 512, 512, 32, 32, 32, 1024, 1024, 1024, 1024, 1024, 1024, 256, 256, 256, 256, 256, 256, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 32, 32, 32, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 512, 512, 512, 1, 1, 1, 1024, 1024, 1024, 32, 32, 32, 32, 32, 32, 45, 45, 45, 8, 8, 8, 512, 512, 512, 256, 256, 256, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 16, 16, 16, 4, 4, 4, 4, 4, 4, 4, 4, 4, 16, 16, 16, 16, 16, 16, 16, 16, 16, 64, 64, 64, 8, 8, 8, 8, 8, 8, 8, 8, 8, 64, 64, 64, 64, 64, 64, 256, 256, 256, 64, 64, 64, 64, 64, 64, 512, 512, 512, 512, 512, 512, 512, 512, 512, 32, 32, 32, 32, 32, 32, 32, 32, 32, 128, 128, 128, 128, 128, 128, 128, 128, 128, 32, 32, 32, 128, 128, 128, 64, 64, 64, 64, 64, 64, 16, 16, 16, 256, 256, 256, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 256, 256, 256, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 256, 256, 256, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 256, 256, 256, 256, 256, 256, 1024, 1024, 1024, 1024, 1024, 1024, 256, 256, 256, 3072, 3072, 3072, 3072, 3072, 3072, 128, 128, 128, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 128, 128, 128, 128, 128, 128, 64, 64, 
64, 256, 256, 256, 256, 256, 256, 512, 512, 512, 768, 768, 768, 768, 768, 768, 16, 16, 16, 32, 32, 32, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 512, 512, 512, 2048, 2048, 2048, 1024, 1024, 1024, 3072, 3072, 3072, 3072, 3072, 3072, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 3072, 3072, 3072, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 64, 64, 64, 96, 96, 96, 512, 512, 512, 64, 64, 64, 64, 64, 64, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 3072, 3072, 3072, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 512, 512, 512, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 64, 64, 64, 64, 64, 64, 256, 256, 256, 1024, 1024, 1024, 512, 512, 512, 256, 256, 256, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 3072, 3072, 3072, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 3072, 3072, 3072, 2048, 2048, 2048]

With this data, if I try these new values:

# new values
x = np.asarray([128,512,1024,2048,3072,2800])

Something goes wrong with the 3072 (all values are in MB).

This is the result:

      100%      cdfv      kdev
128     26  0.129688  0.499376
512     55  0.275874  0.499671
1024    91  0.454159  0.499936
2048    12  0.062298  0.499150
3072     0  0.001556  0.498364
2800     1  0.004954  0.498573

I can't understand why this happens... the 3072 value appears many times in the data... This is the histogram of my data... It is very strange, because there are quite a few values at 3072 and also at 4096.

[image: histogram of the data]

Replevy asked 14/6, 2015 at 10:43 Comment(17)
It sounds like what you're really asking for is a p-value reflecting the probability that the new value is drawn from the same underlying distribution as the other values. A p-value reflects the probability of drawing a value at least as extreme, i.e. p(x >= 500) rather than p(x == 500).Perlie
Thanks @Perlie, how can I get the p-value?Replevy
While it's possible to obtain a p-value using KDE, it's probably not the best tool for the job, since it is pretty much guaranteed to be biased towards overly conservative (large) p-values (see here). A more sensible option might be to fit a parametric distribution to your previous values, then derive the p-value by evaluating the CDF at your new value. How are your real data distributed? Could you show a histogram of the distribution, or post a sample of your real data?Perlie
Careful with the p-values guys. It's very important to be pedantically clear about what a p-value is and what it isn't. When you say that "A p-value reflects the probability of drawing a value at least as extreme..." this is not actually correct, because it requires some null hypothesis assumption. Specifically, a p-value is not something derived from assuming that a fitted value is a true value. So you could not fit a parametric distributional form, assume it is the true form, and then observe how extreme the next incoming data point appears under that "assumed true" distribution.Hemimorphic
If your goal was to use the new data as evidence either to support your belief in the parametric fit, or support/reject your belief that the new data point was drawn from the same distribution, this would be a statistical fallacy. Instead, you would need to postulate a null hypothesis, such as hypothesizing that the vector of parameters for the parametric fit was all zeros (or some sensible default values if all zeros is unphysical). Then, you can look at how extreme your data is under that null hypothesis's distribution and if the data are very extreme (i.e. very small p-value) ...Hemimorphic
then it is considered as some form of evidence that it is OK to have some degree of confidence in rejecting the null hypothesis, i.e. rejecting the idea that the "true" parameters are all zero (or are equal to your baseline test value). And the result of the test will just be some degree of belief that the parameter "effect sizes" are meaningfully different than what their values would have been under the null hypothesis. This is hugely important because misunderstanding what you can and cannot say regarding what sort of conclusion a calculated p-value supports is a big source of errors.Hemimorphic
In general, a p-value is not a probability that data as extreme or more extreme would be observed under the fitted (or "true") model. A p-value is not a direct function of the probability that the fitted model is true conditional upon the data. A p-value is not the probability that the null hypothesis is false. A p-value is only the probability of observing data as extreme or more extreme under the assumption that the null hypothesis is true.Hemimorphic
The proposed idea of measuring the outlier-ness of the next observation is much more closely related to measuring an assumed model's likelihood, under an assumption that the data are drawn IID from some underlying distribution. Yet another idea is to use the fitted model as a prior, re-fit using the prior and the new data point in a Bayesian model, and then compute the Bayesian surprise between the two as a proxy for how 'surprising' the new data point is under the prior distribution.Hemimorphic
@Mr.F thanks for your answer. Yes, consider a training set... for example a smartphone training set in which every item has a RAM attribute. As you can imagine, all the values are in the range (0, 4) GB. If I receive a new value 5, considering the distribution of my training data, it would be plausible that a new smartphone could enter the market with 5 GB of RAM. So the formula in the image posted in the question should give me a good score.Replevy
If I receive a value of 100, the formula should give me a bad score, so I will discard that value as RAM... that value could refer to the internal memory or something else... My real questions are two: 1. Did I understand that solution correctly? Is a Gaussian kernel density OK to obtain what I want? 2. How can I reproduce that formula in my Python code?Replevy
@Mr.F I was trying to imply the assumption of the null hypothesis when I said "drawn from the same underlying [parametric] distribution as the other values", although I should have made this clearer.Perlie
@UsiUsi It would be really helpful if you could show us how your real data are distributed. If the empirical distribution is well-approximated by some parametric distribution then the problem becomes much, much easier.Perlie
@Perlie I see: I think you are trying to say you want to use something like a classical NHST test for whether two samples have the same mean (or same parameter vector, or whatever) but in this case where one of the samples has size = 1.Hemimorphic
@Mr.F Yes, that's one way of putting it. I'm still not 100% clear on what the OP is trying to do, but it sounds like they are trying to identify outliers. If the empirical distribution of "previous" values is well-approximated by some parametric distribution, then all that's needed is to evaluate the CDF (or SF) at the "new" value. I suppose one could also use the PDF directly, although it's a bit less obvious how to use this as a criterion for rejection.Perlie
What your real data look like is extremely important! Since your example data are discrete, a Gaussian kernel-based approach seems totally inappropriate to me. If you smooth with a Gaussian kernel, you would predict non-zero probability density at non-integer values of RAM, even though you never observe these in your real data.Perlie
Also, what if the attribute is the clock frequency of a CPU?Replevy
In your example, we actually know that the data are drawn from a discrete uniform distribution between 1 & 3. Therefore if we assume that the new point is drawn from the same underlying distribution, we know for a fact that the probability of it being anything other than 1, 2 or 3 is zero. If you think the probability of seeing a 5 should be non-zero, then you would have to assume that the data were drawn from some other underlying distribution. What do you think this other distribution should look like?Perlie

A few general comments without going into statsmodels details.

statsmodels also has cdf kernels, but I don't remember how well they work, and I don't think it has automatic bandwidth selection for it.
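
For example, something along these lines interpolates the cdf attribute of a fitted KDEUnivariate to get a kernel-based cdf value at a new point (a sketch on synthetic data; I'm not sure this is the intended way to use it):

import numpy as np
from statsmodels.nonparametric.kde import KDEUnivariate

rng = np.random.RandomState(0)
sample = rng.normal(loc=2, scale=1, size=500)   # stand-in for the real data

kde = KDEUnivariate(sample)
kde.fit(bw=0.2)   # fixed bandwidth, smaller than one would use for the density

# kde.cdf is evaluated on kde.support, so interpolate to get it at new points
new_values = np.array([0.0, 2.0, 5.0])
cdf_at_new = np.interp(new_values, kde.support, kde.cdf)
tail = np.minimum(cdf_at_new, 1 - cdf_at_new)
print(np.column_stack((new_values, cdf_at_new, tail)))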

Related to the answer of glen_b that ali_m linked to in the comment:

The cdf estimate converges much faster to the true distribution than the density estimate does as the sample grows. To balance the bias–variance tradeoff, we should use a smaller bandwidth for cdf kernels, that is, undersmooth relative to density estimation. The estimates should be more accurate than the corresponding density estimates.

Number of tail observations:

If your largest observation in the sample is 4 and you want to know the cdf at 5, then your data has no information about it. For tails where you only have very few observations, the variance of a nonparametric estimator like a kernel distribution estimator will be large in relative terms (is it 1e-5 or 1e-20?).

As an alternative to kernel density or kernel distribution estimation, we can estimate a Pareto distribution for the tail parts. For example, take the largest 10 or 20 percent of observations, fit a Pareto distribution, and use it to extrapolate the tail density. There are several Python packages for power-law estimation that might be used for this.
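
A rough sketch of the Pareto-tail idea on synthetic data (peaks over threshold with scipy's genpareto; the 90th-percentile cutoff is an arbitrary choice):

import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
sample = rng.lognormal(mean=6, sigma=1.5, size=2000)   # synthetic heavy-tailed data

# treat everything above the 90th percentile as the tail
u = np.percentile(sample, 90)
exceedances = sample[sample > u] - u
tail_frac = exceedances.size / sample.size

# fit a generalized Pareto to the exceedances, location fixed at 0
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)

def prob_larger(v):
    """Approximate P(X > v) for a value v above the threshold u."""
    return tail_frac * stats.genpareto.sf(v - u, shape, loc=loc, scale=scale)

for v in [2 * u, 5 * u, 20 * u]:
    print(v, prob_larger(v))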

update

The following shows how to calculate "outlyingness" using a parametric normal distribution assumption and a Gaussian kernel density estimate with fixed bandwidth.

This is only really correct if the sample comes from a continuous distribution or can be approximated by one. Here we pretend that a sample with only 3 distinct values comes from a normal distribution. Essentially, the calculated cdf value is like a distance measure, not a probability, for a discrete random variable.

This uses kde from scipy.stats with fixed bandwidth instead of the statsmodels version.

I'm not sure how the bandwidth is set in scipy's gaussian_kde, so my fixed bandwidth choice equal to scale is likely wrong. I don't know how I would choose a bandwidth when there are only three distinct values; there is not enough information in the data. The default bandwidth is intended for distributions that are approximately normal, or at least single-peaked.

import numpy as np
from scipy import stats

# data
ram = np.array([2, <truncated from data in description>, 3])

loc = ram.mean()
scale = ram.std()

# new values
x = np.asarray([-1, 0, 2, 3, 4, 5, 100])

# assume normal distribution
cdf_val = stats.norm.cdf(x, loc=loc, scale=scale)
cdfv = np.minimum(cdf_val, 1 - cdf_val)

# use gaussian kde but fix bandwidth
kde = stats.gaussian_kde(ram, bw_method=scale)
kde_val = np.asarray([kde.integrate_box_1d(-np.inf, xx) for xx in x])
kdev = np.minimum(kde_val, 1 - kde_val)


#print(np.column_stack((x, cdfv, kdev)))
# use pandas for prettier table
import pandas as pd
print(pd.DataFrame({'cdfv': cdfv, 'kdev': kdev}, index=x))

'''
          cdfv      kdev
-1    0.000096  0.000417
 0    0.006171  0.021262
 2    0.479955  0.482227
 3    0.119854  0.199565
 5    0.000143  0.000472
 100  0.000000  0.000000
 '''
Geilich answered 14/6, 2015 at 12:48 Comment(9)
Given your update, everything I said still applies, except that it is appropriate for continuous data, or at least for many discrete values in the support, so that the continuous cdf is a good approximation to the discrete cdf. However, you need to include some prior information about points that are not in the support, for example whether 5 is close to 4 if we have never seen a 5 or larger before. Another example: are fractional counts possible for the number of CPUs? Can we have a computer with 2.5 CPUs, or can we rule out that a value is the number of CPUs if we see 2.5?Geilich
In terms of the statistic I would consider it just as a nonparametric classification problem. Given the estimated nonparametric densities for each category, we can use the cdf to calculate the "closeness" of a new observation to the different categories. (Like a nonparametric Multinomial Logit)Geilich
For a simpler start, I would just assume normal distribution, calculate means and variances, and use the normal cdf as "outlyingness" measure. If the approximation by a continuous distribution makes sense, then I would refine the same approach with kernel distribution functions.Geilich
Could you give an example?Replevy
By default, stats.gaussian_kde uses Scott's factor to generate an estimate of the optimal kernel bandwidth. This method is optimal assuming normally distributed data (which obviously isn't the case here...).Perlie
Thank you so much for your great answer... but it misses something really important to me... a way to calculate a score in [0, 1] for a new value. It must be close to 1 if it fits the distribution (or is close to it, like a value of 5).Replevy
No, the largest value will be 0.5 at the median. If you want it to be 1, then you could multiply the cdf (tail) value by 2. Then it would correspond to a two sided alternative if it were a hypothesis test. This actually sounds more plausible since you don't care whether you are in the upper or lower tail. (Just to emphasize again, this is for the continuous approximation. The principle is the same for discrete distributions, but the cdf would be a step function at the discrete points.)Geilich
[1024, 1, 1024, 1000, 1024, 128, 1536, 16, 192, 2048, 2000, 2048, 24, 250, 256, 278, 288, 290, 3072, 3, 3000, 3072, 32, 384, 4096, 4, 4096, 448, 45, 512, 576, 64, 768, 8, 96] I have these values in MB... do you think this is working well? I must set a threshold.Replevy
Getting the bandwidth for the kde requires some trial and error, because the data don't look close to a continuous unimodal distribution. The scipy documentation has an example of how to set it as a multiple of the predefined Scott or Silverman factors. Something like 2 or 10 times that factor might work over a larger range. Also, for MB and similar units it might be useful to transform the data, e.g. with log2: np.histogram(np.log2(np.sort(a)))Geilich
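
A rough sketch of those two suggestions combined (the factor of 2 and the log2 transform are only illustrative choices):

import numpy as np
from scipy import stats

data = np.array([1024, 1, 1024, 1000, 1024, 128, 1536, 16, 192, 2048, 2000,
                 2048, 24, 250, 256, 278, 288, 290, 3072, 3, 3000, 3072, 32,
                 384, 4096, 4, 4096, 448, 45, 512, 576, 64, 768, 8, 96],
                dtype=float)

# work on a log2 scale, which spreads the MB values out more evenly
log_data = np.log2(data)

# start from the default Scott factor, then widen it by an arbitrary multiple
kde = stats.gaussian_kde(log_data)
kde.set_bandwidth(bw_method=kde.factor * 2)

def tail_score(v_mb):
    """Two-sided tail value in [0, 1]; largest near the middle of the data."""
    cdf = kde.integrate_box_1d(-np.inf, np.log2(v_mb))
    return 2 * min(cdf, 1 - cdf)

for v in [128, 512, 1024, 3072, 50000]:
    print(v, tail_score(v))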
