sklearn agglomerative clustering linkage matrix
I'm trying to draw a complete-linkage dendrogram with scipy.cluster.hierarchy.dendrogram, and I've found that scipy.cluster.hierarchy.linkage is slower than sklearn.AgglomerativeClustering.

However, sklearn.AgglomerativeClustering doesn't return the distance between clusters and the number of original observations, which scipy.cluster.hierarchy.dendrogram needs. Is there a way to obtain them?
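
For reference, this is the linkage-matrix format that dendrogram expects; each row is [idx1, idx2, distance, sample_count] (toy values, just to illustrate the structure):

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram

# Leaves are 0..n-1; merge i creates node n+i.
Z = np.array([
    [0., 1., 1.0, 2.],  # merge leaves 0 and 1 -> node 3 (2 observations)
    [2., 3., 2.5, 3.],  # merge leaf 2 with node 3 -> node 4 (3 observations)
])
dendrogram(Z)
plt.show()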

Patchouli answered 10/11, 2014 at 19:33 Comment(2)
Can you post details about the "slower" thing? I have worked with agglomerative hierarchical clustering in scipy, too, and found it to be rather fast, if one of the built-in distance metrics was used.Korean
Does this answer your question? Plot dendrogram using sklearn.AgglomerativeClusteringSartorius
17

It's possible, but it isn't pretty. It requires (at a minimum) a small rewrite of AgglomerativeClustering.fit (source). The difficulty is that the method requires a number of imports, so it ends up getting a bit nasty looking. To add this feature:

  1. Insert the following line after line 748:

    kwargs['return_distance'] = True

  2. Replace line 752 with:

    self.children_, self.n_components_, self.n_leaves_, parents, self.distance = \

This will give you a new attribute, distance, that you can easily call.

A couple things to note:

  1. When doing this, I ran into this issue about the check_array function on line 711. This can be fixed by using check_arrays (from sklearn.utils.validation import check_arrays). You can modify that line to become X = check_arrays(X)[0]. This appears to be a bug (I still have this issue on the most recent version of scikit-learn).

  2. Depending on which version of sklearn.cluster.hierarchical.linkage_tree you have, you may also need to modify it to be the one provided in the source.

To make things easier for everyone, here is the full code that you will need to use:

from heapq import heapify, heappop, heappush, heappushpop
import warnings
import sys

import numpy as np
from scipy import sparse

from sklearn.base import BaseEstimator, ClusterMixin
from sklearn.externals.joblib import Memory
from sklearn.externals import six
from sklearn.utils.validation import check_arrays
from sklearn.utils.sparsetools import connected_components
from sklearn.cluster import _hierarchical
from sklearn.cluster.hierarchical import ward_tree
from sklearn.cluster._feature_agglomeration import AgglomerationTransform
from sklearn.utils.fast_dict import IntFloatDict

def _fix_connectivity(X, connectivity, n_components=None,
                      affinity="euclidean"):
    """
    Fixes the connectivity matrix
        - copies it
        - makes it symmetric
        - converts it to LIL if necessary
        - completes it if necessary
    """
    n_samples = X.shape[0]
    if (connectivity.shape[0] != n_samples or
        connectivity.shape[1] != n_samples):
        raise ValueError('Wrong shape for connectivity matrix: %s '
                         'when X is %s' % (connectivity.shape, X.shape))

    # Make the connectivity matrix symmetric:
    connectivity = connectivity + connectivity.T

    # Convert connectivity matrix to LIL
    if not sparse.isspmatrix_lil(connectivity):
        if not sparse.isspmatrix(connectivity):
            connectivity = sparse.lil_matrix(connectivity)
        else:
            connectivity = connectivity.tolil()

    # Compute the number of nodes
    n_components, labels = connected_components(connectivity)

    if n_components > 1:
        warnings.warn("the number of connected components of the "
                      "connectivity matrix is %d > 1. Completing it to avoid "
                      "stopping the tree early." % n_components,
                      stacklevel=2)
        # XXX: Can we do without completing the matrix?
        for i in xrange(n_components):
            idx_i = np.where(labels == i)[0]
            Xi = X[idx_i]
            for j in xrange(i):
                idx_j = np.where(labels == j)[0]
                Xj = X[idx_j]
                D = pairwise_distances(Xi, Xj, metric=affinity)
                ii, jj = np.where(D == np.min(D))
                ii = ii[0]
                jj = jj[0]
                connectivity[idx_i[ii], idx_j[jj]] = True
                connectivity[idx_j[jj], idx_i[ii]] = True

    return connectivity, n_components

# average and complete linkage
def linkage_tree(X, connectivity=None, n_components=None,
                 n_clusters=None, linkage='complete', affinity="euclidean",
                 return_distance=False):
    """Linkage agglomerative clustering based on a Feature matrix.
    The inertia matrix uses a Heapq-based representation.
    This is the structured version, that takes into account some topological
    structure between samples.
    Parameters
    ----------
    X : array, shape (n_samples, n_features)
        feature matrix representing n_samples samples to be clustered
    connectivity : sparse matrix (optional).
        connectivity matrix. Defines for each sample the neighboring samples
        following a given structure of the data. The matrix is assumed to
        be symmetric and only the upper triangular half is used.
        Default is None, i.e, the Ward algorithm is unstructured.
    n_components : int (optional)
        Number of connected components. If None the number of connected
        components is estimated from the connectivity matrix.
        NOTE: This parameter is now directly determined from the
        connectivity matrix and will be removed in 0.18
    n_clusters : int (optional)
        Stop early the construction of the tree at n_clusters. This is
        useful to decrease computation time if the number of clusters is
        not small compared to the number of samples. In this case, the
        complete tree is not computed, thus the 'children' output is of
        limited use, and the 'parents' output should rather be used.
        This option is valid only when specifying a connectivity matrix.
    linkage : {"average", "complete"}, optional, default: "complete"
        Which linkage critera to use. The linkage criterion determines which
        distance to use between sets of observation.
            - average uses the average of the distances of each observation of
              the two sets
            - complete or maximum linkage uses the maximum distances between
              all observations of the two sets.
    affinity : string or callable, optional, default: "euclidean".
        which metric to use. Can be "euclidean", "manhattan", or any
        distance known to paired distances (see metrics.pairwise)
    return_distance : bool, default False
        whether or not to return the distances between the clusters.
    Returns
    -------
    children : 2D array, shape (n_nodes-1, 2)
        The children of each non-leaf node. Values less than `n_samples`
        correspond to leaves of the tree which are the original samples.
        A node `i` greater than or equal to `n_samples` is a non-leaf
        node and has children `children_[i - n_samples]`. Alternatively
        at the i-th iteration, children[i][0] and children[i][1]
        are merged to form node `n_samples + i`
    n_components : int
        The number of connected components in the graph.
    n_leaves : int
        The number of leaves in the tree.
    parents : 1D array, shape (n_nodes, ) or None
        The parent of each node. Only returned when a connectivity matrix
        is specified, elsewhere 'None' is returned.
    distances : ndarray, shape (n_nodes-1,)
        Returned when return_distance is set to True.
        distances[i] refers to the distance between children[i][0] and
        children[i][1] when they are merged.
    See also
    --------
    ward_tree : hierarchical clustering with ward linkage
    """
    X = np.asarray(X)
    if X.ndim == 1:
        X = np.reshape(X, (-1, 1))
    n_samples, n_features = X.shape

    linkage_choices = {'complete': _hierarchical.max_merge,
                       'average': _hierarchical.average_merge,
                      }
    try:
        join_func = linkage_choices[linkage]
    except KeyError:
        raise ValueError(
            'Unknown linkage option, linkage should be one '
            'of %s, but %s was given' % (linkage_choices.keys(), linkage))

    if connectivity is None:
        from scipy.cluster import hierarchy  # imports PIL

        if n_clusters is not None:
            warnings.warn('Partial build of the tree is implemented '
                          'only for structured clustering (i.e. with '
                          'explicit connectivity). The algorithm '
                          'will build the full tree and only '
                          'retain the lower branches required '
                          'for the specified number of clusters',
                          stacklevel=2)

        if affinity == 'precomputed':
            # for the linkage function of hierarchy to work on precomputed
            # data, provide as first argument an ndarray of the shape returned
            # by pdist: it is a flat array containing the upper triangular of
            # the distance matrix.
            i, j = np.triu_indices(X.shape[0], k=1)
            X = X[i, j]
        elif affinity == 'l2':
            # Translate to something understood by scipy
            affinity = 'euclidean'
        elif affinity in ('l1', 'manhattan'):
            affinity = 'cityblock'
        elif callable(affinity):
            X = affinity(X)
            i, j = np.triu_indices(X.shape[0], k=1)
            X = X[i, j]
        out = hierarchy.linkage(X, method=linkage, metric=affinity)
        children_ = out[:, :2].astype(np.int)

        if return_distance:
            distances = out[:, 2]
            return children_, 1, n_samples, None, distances
        return children_, 1, n_samples, None

    if n_components is not None:
        warnings.warn(
            "n_components is now directly calculated from the connectivity "
            "matrix and will be removed in 0.18",
            DeprecationWarning)
    connectivity, n_components = _fix_connectivity(X, connectivity)

    connectivity = connectivity.tocoo()
    # Put the diagonal to zero
    diag_mask = (connectivity.row != connectivity.col)
    connectivity.row = connectivity.row[diag_mask]
    connectivity.col = connectivity.col[diag_mask]
    connectivity.data = connectivity.data[diag_mask]
    del diag_mask

    if affinity == 'precomputed':
        distances = X[connectivity.row, connectivity.col]
    else:
        # FIXME We compute all the distances, while we could have only computed
        # the "interesting" distances
        distances = paired_distances(X[connectivity.row],
                                     X[connectivity.col],
                                     metric=affinity)
    connectivity.data = distances

    if n_clusters is None:
        n_nodes = 2 * n_samples - 1
    else:
        assert n_clusters <= n_samples
        n_nodes = 2 * n_samples - n_clusters

    if return_distance:
        distances = np.empty(n_nodes - n_samples)
    # create inertia heap and connection matrix
    A = np.empty(n_nodes, dtype=object)
    inertia = list()

    # LIL seems to be the best format to access the rows quickly,
    # without the numpy overhead of slicing CSR indices and data.
    connectivity = connectivity.tolil()
    # We are storing the graph in a list of IntFloatDict
    for ind, (data, row) in enumerate(zip(connectivity.data,
                                          connectivity.rows)):
        A[ind] = IntFloatDict(np.asarray(row, dtype=np.intp),
                              np.asarray(data, dtype=np.float64))
        # We keep only the upper triangular for the heap
        # Generator expressions are faster than arrays on the following
        inertia.extend(_hierarchical.WeightedEdge(d, ind, r)
                       for r, d in zip(row, data) if r < ind)
    del connectivity

    heapify(inertia)

    # prepare the main fields
    parent = np.arange(n_nodes, dtype=np.intp)
    used_node = np.ones(n_nodes, dtype=np.intp)
    children = []

    # recursive merge loop
    for k in xrange(n_samples, n_nodes):
        # identify the merge
        while True:
            edge = heappop(inertia)
            if used_node[edge.a] and used_node[edge.b]:
                break
        i = edge.a
        j = edge.b

        if return_distance:
            # store distances
            distances[k - n_samples] = edge.weight

        parent[i] = parent[j] = k
        children.append((i, j))
        # Keep track of the number of elements per cluster
        n_i = used_node[i]
        n_j = used_node[j]
        used_node[k] = n_i + n_j
        used_node[i] = used_node[j] = False

        # update the structure matrix A and the inertia matrix
        # a clever 'min', or 'max' operation between A[i] and A[j]
        coord_col = join_func(A[i], A[j], used_node, n_i, n_j)
        for l, d in coord_col:
            A[l].append(k, d)
            # Here we use the information from coord_col (containing the
            # distances) to update the heap
            heappush(inertia, _hierarchical.WeightedEdge(d, k, l))
        A[k] = coord_col
        # Clear A[i] and A[j] to save memory
        A[i] = A[j] = 0

    # Separate leaves in children (empty lists up to now)
    n_leaves = n_samples

    # return numpy array for efficient caching
    children = np.array(children)[:, ::-1]

    if return_distance:
        return children, n_components, n_leaves, parent, distances
    return children, n_components, n_leaves, parent

# Matching names to tree-building strategies
def _complete_linkage(*args, **kwargs):
    kwargs['linkage'] = 'complete'  
    return linkage_tree(*args, **kwargs)

def _average_linkage(*args, **kwargs):
    kwargs['linkage'] = 'average'
    return linkage_tree(*args, **kwargs)

_TREE_BUILDERS = dict(
    ward=ward_tree,
    complete=_complete_linkage,
    average=_average_linkage,
    )

def _hc_cut(n_clusters, children, n_leaves):
    """Function cutting the ward tree for a given number of clusters.
    Parameters
    ----------
    n_clusters : int or ndarray
        The number of clusters to form.
    children : list of pairs. Length of n_nodes
        The children of each non-leaf node. Values less than `n_samples` refer
        to leaves of the tree. A greater value `i` indicates a node with
        children `children[i - n_samples]`.
    n_leaves : int
        Number of leaves of the tree.
    Returns
    -------
    labels : array [n_samples]
        cluster labels for each point
    """
    if n_clusters > n_leaves:
        raise ValueError('Cannot extract more clusters than samples: '
                         '%s clusters where given for a tree with %s leaves.'
                         % (n_clusters, n_leaves))
    # In this function, we store nodes as a heap to avoid recomputing
    # the max of the nodes: the first element is always the smallest
    # We use negated indices as heaps work on smallest elements, and we
    # are interested in largest elements
    # children[-1] is the root of the tree
    nodes = [-(max(children[-1]) + 1)]
    for i in xrange(n_clusters - 1):
        # As we have a heap, nodes[0] is the smallest element
        these_children = children[-nodes[0] - n_leaves]
        # Insert the 2 children and remove the largest node
        heappush(nodes, -these_children[0])
        heappushpop(nodes, -these_children[1])
    label = np.zeros(n_leaves, dtype=np.intp)
    for i, node in enumerate(nodes):
        label[_hierarchical._hc_get_descendent(-node, children, n_leaves)] = i
    return label

class AgglomerativeClustering(BaseEstimator, ClusterMixin):
    """
    Agglomerative Clustering
    Recursively merges the pair of clusters that minimally increases
    a given linkage distance.
    Parameters
    ----------
    n_clusters : int, default=2
        The number of clusters to find.
    connectivity : array-like or callable, optional
        Connectivity matrix. Defines for each sample the neighboring
        samples following a given structure of the data.
        This can be a connectivity matrix itself or a callable that transforms
        the data into a connectivity matrix, such as derived from
        kneighbors_graph. Default is None, i.e, the
        hierarchical clustering algorithm is unstructured.
    affinity : string or callable, default: "euclidean"
        Metric used to compute the linkage. Can be "euclidean", "l1", "l2",
        "manhattan", "cosine", or 'precomputed'.
        If linkage is "ward", only "euclidean" is accepted.
    memory : Instance of joblib.Memory or string (optional)
        Used to cache the output of the computation of the tree.
        By default, no caching is done. If a string is given, it is the
        path to the caching directory.
    n_components : int (optional)
        Number of connected components. If None the number of connected
        components is estimated from the connectivity matrix.
        NOTE: This parameter is now directly determined from the connectivity
        matrix and will be removed in 0.18
    compute_full_tree : bool or 'auto' (optional)
        Stop early the construction of the tree at n_clusters. This is
        useful to decrease computation time if the number of clusters is
        not small compared to the number of samples. This option is
        useful only when specifying a connectivity matrix. Note also that
        when varying the number of clusters and using caching, it may
        be advantageous to compute the full tree.
    linkage : {"ward", "complete", "average"}, optional, default: "ward"
        Which linkage criterion to use. The linkage criterion determines which
        distance to use between sets of observation. The algorithm will merge
        the pairs of cluster that minimize this criterion.
        - ward minimizes the variance of the clusters being merged.
        - average uses the average of the distances of each observation of
          the two sets.
        - complete or maximum linkage uses the maximum distances between
          all observations of the two sets.
    pooling_func : callable, default=np.mean
        This combines the values of agglomerated features into a single
        value, and should accept an array of shape [M, N] and the keyword
        argument ``axis=1``, and reduce it to an array of size [M].
    Attributes
    ----------
    labels_ : array [n_samples]
        cluster labels for each point
    n_leaves_ : int
        Number of leaves in the hierarchical tree.
    n_components_ : int
        The estimated number of connected components in the graph.
    children_ : array-like, shape (n_nodes-1, 2)
        The children of each non-leaf node. Values less than `n_samples`
        correspond to leaves of the tree which are the original samples.
        A node `i` greater than or equal to `n_samples` is a non-leaf
        node and has children `children_[i - n_samples]`. Alternatively
        at the i-th iteration, children[i][0] and children[i][1]
        are merged to form node `n_samples + i`
    """

    def __init__(self, n_clusters=2, affinity="euclidean",
                 memory=Memory(cachedir=None, verbose=0),
                 connectivity=None, n_components=None,
                 compute_full_tree='auto', linkage='ward',
                 pooling_func=np.mean):
        self.n_clusters = n_clusters
        self.memory = memory
        self.n_components = n_components
        self.connectivity = connectivity
        self.compute_full_tree = compute_full_tree
        self.linkage = linkage
        self.affinity = affinity
        self.pooling_func = pooling_func

    def fit(self, X, y=None):
        """Fit the hierarchical clustering on the data
        Parameters
        ----------
        X : array-like, shape = [n_samples, n_features]
            The samples a.k.a. observations.
        Returns
        -------
        self
        """
        X = check_arrays(X)[0]
        memory = self.memory
        if isinstance(memory, six.string_types):
            memory = Memory(cachedir=memory, verbose=0)

        if self.linkage == "ward" and self.affinity != "euclidean":
            raise ValueError("%s was provided as affinity. Ward can only "
                             "work with euclidean distances." %
                             (self.affinity, ))

        if self.linkage not in _TREE_BUILDERS:
            raise ValueError("Unknown linkage type %s."
                             "Valid options are %s" % (self.linkage,
                                                       _TREE_BUILDERS.keys()))
        tree_builder = _TREE_BUILDERS[self.linkage]

        connectivity = self.connectivity
        if self.connectivity is not None:
            if callable(self.connectivity):
                connectivity = self.connectivity(X)
            connectivity = check_arrays(
                connectivity, accept_sparse=['csr', 'coo', 'lil'])

        n_samples = len(X)
        compute_full_tree = self.compute_full_tree
        if self.connectivity is None:
            compute_full_tree = True
        if compute_full_tree == 'auto':
            # Early stopping is likely to give a speed up only for
            # a large number of clusters. The actual threshold
            # implemented here is heuristic
            compute_full_tree = self.n_clusters < max(100, .02 * n_samples)
        n_clusters = self.n_clusters
        if compute_full_tree:
            n_clusters = None

        # Construct the tree
        kwargs = {}
        kwargs['return_distance'] = True
        if self.linkage != 'ward':
            kwargs['linkage'] = self.linkage
            kwargs['affinity'] = self.affinity
        self.children_, self.n_components_, self.n_leaves_, parents, \
            self.distance = memory.cache(tree_builder)(X, connectivity,
                                       n_components=self.n_components,
                                       n_clusters=n_clusters,
                                       **kwargs)
        # Cut the tree
        if compute_full_tree:
            self.labels_ = _hc_cut(self.n_clusters, self.children_,
                                   self.n_leaves_)
        else:
            labels = _hierarchical.hc_get_heads(parents, copy=False)
            # copy to avoid holding a reference on the original array
            labels = np.copy(labels[:n_samples])
            # Reassign cluster numbers
            self.labels_ = np.searchsorted(np.unique(labels), labels)
        return self

Below is a simple example showing how to use the modified AgglomerativeClustering class:

import numpy as np
from AgglomerativeClustering import AgglomerativeClustering  # make sure to use the modified class above, not sklearn's!
d = np.array(
    [
        [1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]
    ]
)

clustering = AgglomerativeClustering(n_clusters=2, compute_full_tree=True,
    affinity='euclidean', linkage='complete')
clustering.fit(d)
print clustering.distance

That example has the following output:

[  5.19615242  10.39230485]
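
The distance attribute alone is not quite enough for scipy.cluster.hierarchy.dendrogram, which also wants the number of original observations under each merge. A minimal sketch of assembling the full linkage matrix from the example above (assuming the modified class exposes distance as described):

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram

# Count the original observations under each merge, then build the
# [idx1, idx2, distance, sample_count] matrix that dendrogram expects.
n_samples = d.shape[0]
counts = np.zeros(clustering.children_.shape[0])
for i, merge in enumerate(clustering.children_):
    counts[i] = sum(1 if c < n_samples else counts[c - n_samples] for c in merge)

linkage_matrix = np.column_stack([clustering.children_,
                                  clustering.distance,
                                  counts]).astype(float)
dendrogram(linkage_matrix)
plt.show()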

This can then be compared to a scipy.cluster.hierarchy.linkage implementation:

import numpy as np
from scipy.cluster.hierarchy import linkage

d = np.array(
        [
            [1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]
        ]
)
print linkage(d, 'complete')

Output:

[[  1.           2.           5.19615242   2.        ]
 [  0.           3.          10.39230485   3.        ]]

Just for kicks I decided to follow up on your statement about performance:

from AgglomerativeClustering import AgglomerativeClustering  # the modified class above, not sklearn's
from scipy.cluster.hierarchy import linkage
import numpy as np
import time

l = 1000; iters = 50
d = [np.random.random(100) for _ in xrange(l)]

t = time.time()
for _ in xrange(iters):
    clustering = AgglomerativeClustering(n_clusters=l-1,
        affinity='euclidean', linkage='complete')
    clustering.fit(d)
scikit_time = (time.time() - t) / iters
print 'scikit-learn Time: {0}s'.format(scikit_time)

t = time.time()
for _ in xrange(iters):
    linkage(d, 'complete')
scipy_time = (time.time() - t) / iters
print 'SciPy Time: {0}s'.format(scipy_time)

print 'scikit-learn Speedup: {0}'.format(scipy_time / scikit_time)

This gave me the following results:

scikit-learn Time: 0.566560001373s
SciPy Time: 0.497740001678s
scikit-learn Speedup: 0.878530077083

According to this, the scikit-learn implementation takes about 1.14x the execution time of the SciPy implementation, i.e. SciPy's implementation is roughly 1.14x faster here (the printed "speedup" of 0.88 is the inverse ratio). It should be noted that:

  1. I modified the original scikit-learn implementation

  2. I only did a small number of iterations

  3. I only tested a small number of test cases (both cluster size as well as number of items per dimension should be tested)

  4. I ran SciPy second, so it had the advantage of obtaining more cache hits on the source data

  5. The two methods don't exactly do the same thing.

With all of that in mind, you should really evaluate which method performs better for your specific application. There are also functional reasons to go with one implementation over the other.

Dogwood answered 17/3, 2015 at 7:17 Comment(0)
13

I made a script to do it without modifying sklearn and without recursive functions. Before using it, note that:

  • Merge distance can sometimes decrease with respect to the children's merge distances. I added three ways to handle those cases: take the max, do nothing, or increase it with the l2 norm. The l2-norm logic has not been verified yet, so please check for yourself which option suits you best.

Import the packages:

from sklearn.cluster import AgglomerativeClustering
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram

Function to compute weights and distances:

def get_distances(X,model,mode='l2'):
    distances = []
    weights = []
    children=model.children_
    dims = (X.shape[1],1)
    distCache = {}
    weightCache = {}
    for childs in children:
        c1 = X[childs[0]].reshape(dims)
        c2 = X[childs[1]].reshape(dims)
        c1Dist = 0
        c1W = 1
        c2Dist = 0
        c2W = 1
        if childs[0] in distCache.keys():
            c1Dist = distCache[childs[0]]
            c1W = weightCache[childs[0]]
        if childs[1] in distCache.keys():
            c2Dist = distCache[childs[1]]
            c2W = weightCache[childs[1]]
        d = np.linalg.norm(c1-c2)
        cc = ((c1W*c1)+(c2W*c2))/(c1W+c2W)

        X = np.vstack((X,cc.T))

        newChild_id = X.shape[0]-1

        # How to deal with a higher level cluster merge with lower distance:
        if mode=='l2':  # Increase the higher-level merge distance using an l2 norm
            added_dist = (c1Dist**2+c2Dist**2)**0.5 
            dNew = (d**2 + added_dist**2)**0.5
        elif mode == 'max':  # If the previous clusters had a higher distance, use that one
            dNew = max(d,c1Dist,c2Dist)
        elif mode == 'actual':  # Plot the actual distance.
            dNew = d


        wNew = (c1W + c2W)
        distCache[newChild_id] = dNew
        weightCache[newChild_id] = wNew

        distances.append(dNew)
        weights.append(wNew)
    return distances, weights

Make sample data of 2 clusters with 2 subclusters:

# Make 4 distributions, two of which form a bigger cluster
X1_1 = np.random.randn(25,2)+[8,1.5]
X1_2 = np.random.randn(25,2)+[8,-1.5]
X2_1 = np.random.randn(25,2)-[8,3]
X2_2 = np.random.randn(25,2)-[8,-3]

# Merge the four distributions
X = np.vstack([X1_1,X1_2,X2_1,X2_2])

# Plot the clusters
colors = ['r']*25 + ['b']*25 + ['g']*25 + ['y']*25
plt.scatter(X[:,0],X[:,1],c=colors)

Sample data:

[figure: scatter plot of the four sample distributions]

Fit the clustering model

model = AgglomerativeClustering(n_clusters=2,linkage="ward")
model.fit(X)

Call the function to find the distances, and pass them to the dendrogram:

distance, weight = get_distances(X,model)
linkage_matrix = np.column_stack([model.children_, distance, weight]).astype(float)
plt.figure(figsize=(20,10))
dendrogram(linkage_matrix)
plt.show()

Output dendrogram: [figure]

Ailsa answered 12/12, 2017 at 9:46 Comment(2)
nice solution, would do it this way if I had to do it all over againRoberson
Here another approach from the official doc scikit-learn.org/stable/auto_examples/cluster/…Hoopes
8

Update: I recommend this solution: https://mcmap.net/q/372790/-sklearn-agglomerative-clustering-linkage-matrix. If you found my attempt useful, please examine Arjun's solution and re-examine your vote.

You will need to generate a "linkage matrix" from the children_ array, where every row in the linkage matrix has the format [idx1, idx2, distance, sample_count].

This is not meant to be a paste-and-run solution, I'm not keeping track of what I needed to import - but it should be pretty clear anyway.

Here is one way to generate the required structure Z and visualize the result

X is your n_samples x n_features input data

cluster

agg_cluster = sklearn.cluster.AgglomerativeClustering(n_clusters=n)
agg_labels = agg_cluster.fit_predict(X)

some empty data structures

Z = []
# should really call this cluster dict
node_dict = {}
n_samples = len(X)
leaf_count = n_samples  # number of leaf nodes, i.e. original samples

write a recursive function to gather all leaf nodes associated with a given cluster, compute distances, and centroid positions

def get_all_children(k, verbose=False):
    i,j = agg_cluster.children_[k]

    if k in node_dict:
        return node_dict[k]['children']

    if i < leaf_count:
        left = [i]
    else:
        # read the AgglomerativeClustering doc. to see why I select i-n_samples
        left = get_all_children(i-n_samples)

    if j < leaf_count:
        right = [j]
    else:
        right = get_all_children(j-n_samples)

    if verbose:
        print k,i,j,left, right
    left_pos = np.mean(map(lambda ii: X[ii], left),axis=0)
    right_pos = np.mean(map(lambda ii: X[ii], right),axis=0)

    # this assumes that agg_cluster used euclidean distances
    dist = metrics.pairwise_distances([left_pos,right_pos],metric='euclidean')[0,1]

    all_children = [x for y in [left,right] for x in y]
    pos = np.mean(map(lambda ii: X[ii], all_children),axis=0)

    # store the results to speed up any additional or recursive evaluations
    node_dict[k] = {'top_child':[i,j],'children':all_children, 'pos':pos,'dist':dist, 'node_i':k + n_samples}
    return all_children
    # return node_dict

populate node_dict and generate Z - with distance and n_samples per node

for k,x in enumerate(agg_cluster.children_):   
    get_all_children(k,verbose=False)

# Every row in the linkage matrix has the format [idx1, idx2, distance, sample_count].
Z = [[v['top_child'][0],v['top_child'][1],v['dist'],len(v['children'])] for k,v in node_dict.iteritems()]
# create a version with log scaled distances for easier visualization
Z_log =[[v['top_child'][0],v['top_child'][1],np.log(1.0+v['dist']),len(v['children'])] for k,v in node_dict.iteritems()]

plot it using scipy dendrogram

from scipy.cluster import hierarchy
plt.figure()
dn = hierarchy.dendrogram(Z_log, p=4, truncate_mode='level')
plt.show()

[figure: truncated dendrogram]

be disappointed by how opaque this visualization is and wish you could interactively drill down into larger clusters and examine directional (not scalar) distances between centroids :( - maybe a bokeh solution exists?

references

http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.html

https://joernhees.de/blog/2015/08/26/scipy-hierarchical-clustering-and-dendrogram-tutorial/#Selecting-a-Distance-Cut-Off-aka-Determining-the-Number-of-Clusters

Roberson answered 10/8, 2016 at 20:4 Comment(3)
Do not copy answers between questions. If the same answer really applies to both questions, flag the newer one as a duplicate.Mooch
ok - marked the newer question as a dup - and deleted my answer to it - so this answer is no longer redundantRoberson
Where do you first define leaf_count?Ailsa
5

I think the official sklearn example for AgglomerativeClustering would be helpful.

Plot Hierarchical Clustering Dendrogram:

import numpy as np

from matplotlib import pyplot as plt
from scipy.cluster.hierarchy import dendrogram
from sklearn.datasets import load_iris
from sklearn.cluster import AgglomerativeClustering


def plot_dendrogram(model, **kwargs):
    # Create linkage matrix and then plot the dendrogram

    # create the counts of samples under each node
    counts = np.zeros(model.children_.shape[0])
    n_samples = len(model.labels_)
    for i, merge in enumerate(model.children_):
        current_count = 0
        for child_idx in merge:
            if child_idx < n_samples:
                current_count += 1  # leaf node
            else:
                current_count += counts[child_idx - n_samples]
        counts[i] = current_count

    linkage_matrix = np.column_stack([model.children_, model.distances_,
                                      counts]).astype(float)

    # Plot the corresponding dendrogram
    dendrogram(linkage_matrix, **kwargs)


iris = load_iris()
X = iris.data

# setting distance_threshold=0 ensures we compute the full tree.
model = AgglomerativeClustering(distance_threshold=0, n_clusters=None)

model = model.fit(X)
plt.title('Hierarchical Clustering Dendrogram')
# plot the top three levels of the dendrogram
plot_dendrogram(model, truncate_mode='level', p=3)
plt.xlabel("Number of points in node (or index of point if no parenthesis).")
plt.show()

NB: this solution relies on the distances_ attribute, which is only set when AgglomerativeClustering is called with the distance_threshold parameter (or, in scikit-learn 0.24 and later, with compute_distances=True).
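
For illustration, a minimal sketch of both ways of obtaining distances_ (the compute_distances flag assumes scikit-learn 0.24 or later):

from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import load_iris

X = load_iris().data

# Full tree: distance_threshold=0 forces every merge to be computed.
full = AgglomerativeClustering(distance_threshold=0, n_clusters=None).fit(X)
print(full.distances_.shape)  # (n_samples - 1,) when the full tree is built

# Fixed cluster count, but still keep the merge distances (sklearn >= 0.24).
model = AgglomerativeClustering(n_clusters=3, compute_distances=True).fit(X)
print(model.labels_[:10], model.distances_[:3])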

Ephedrine answered 29/12, 2019 at 12:54 Comment(1)
When the question was originally asked, and when most of the other answers were posted, sklearn did not expose the distances. It does now (github.com/scikit-learn/scikit-learn/pull/14526), as demonstrated here, so this is the best up-to-date answer.Pattipattie
1

I ran into the same problem when setting n_clusters. I think the problem is that if you set n_clusters, the distances don't get evaluated. If you set n_clusters=None and set a distance_threshold instead, it works with the code provided in the sklearn docs. I understand this probably won't help in your situation, but I hope a fix is underway.
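
A minimal sketch of the behavior described above (assuming a scikit-learn version that supports distance_threshold):

import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.random.rand(20, 3)

# With n_clusters set, the merge distances are not computed:
m1 = AgglomerativeClustering(n_clusters=2).fit(X)
print(hasattr(m1, "distances_"))  # False

# With n_clusters=None and a distance_threshold, they are:
m2 = AgglomerativeClustering(n_clusters=None, distance_threshold=0).fit(X)
print(m2.distances_[:3])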

Rejoinder answered 8/4, 2020 at 7:40 Comment(0)
