Finding the Number of Clusters in K-Means Clustering

Deepak Panday will lead this journal club session, presenting two papers: "Recovering the number of clusters in data sets with noise features using feature rescaling factors" (Renato Cordeiro de Amorim and Christian Hennig, 2015) and "Intelligent Choice of the Number of Clusters in K-Means Clustering: An Experimental Study with Different Cluster Spreads" (Mark Ming-Tso Chiang and Boris Mirkin, 2010).


"Recovering the number of clusters in data sets with noise features using feature rescaling factors (Renato Cordeiro de Amorima and Christian Hennig, 2015)" abstract:

In this paper we introduce three methods for re-scaling data sets aiming at improving the likelihood of clustering validity indexes to return the true number of spherical Gaussian clusters with additional noise features. Our methods obtain feature re-scaling factors taking into account the structure of a given data set and the intuitive idea that different features may have different degrees of relevance at different clusters. We experiment with the Silhouette (using squared Euclidean, Manhattan, and the pth power of the Minkowski distance), Dunn’s, Calinski–Harabasz and Hartigan indexes on data sets with spherical Gaussian clusters with and without noise features. We conclude that our methods indeed increase the chances of estimating the true number of clusters in a data set.
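For anyone who wants to experiment before the session, here is a minimal Python sketch of the general idea, not the authors' exact algorithm: each feature is rescaled by the ratio of its total to its within-cluster dispersion (so noise features, which vary as much within clusters as between them, are down-weighted), and K is then chosen by the average Silhouette width. The rescaling heuristic, the helper names, the pilot `k_init` used to estimate dispersions, and the search range for K are all illustrative assumptions.

```python
# Illustrative sketch only: rescale features so noise features shrink,
# then choose K by the average Silhouette width. The dispersion-ratio
# weighting below is an assumption for illustration; the paper derives
# its rescaling factors differently.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score


def rescale_features(X, k_init=3, random_state=0):
    """Weight each feature by (total variance / within-cluster variance).

    Informative features vary mostly *between* clusters, so their
    within-cluster variance is small and their weight is large; noise
    features get a weight close to 1.
    """
    labels = KMeans(n_clusters=k_init, n_init=10,
                    random_state=random_state).fit_predict(X)
    total = X.var(axis=0)
    within = np.zeros(X.shape[1])
    for c in np.unique(labels):
        members = X[labels == c]
        within += members.var(axis=0) * len(members) / len(X)
    return X * (total / np.maximum(within, 1e-12))


def choose_k_by_silhouette(X, k_range=range(2, 9), random_state=0):
    """Return the K in k_range maximising the average Silhouette width."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=random_state).fit_predict(X)
        scores[k] = silhouette_score(X, labels)
    return max(scores, key=scores.get)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three spherical Gaussian clusters in 2 informative dims + 3 noise dims.
    centres = rng.normal(scale=5.0, size=(3, 2))
    informative = np.vstack([rng.normal(c, 1.0, size=(100, 2)) for c in centres])
    X = np.hstack([informative, rng.normal(size=(300, 3))])

    print("K on raw data:     ", choose_k_by_silhouette(X))
    print("K after rescaling: ", choose_k_by_silhouette(rescale_features(X)))
```

One caveat worth discussing at the session: the pilot clustering needs some `k_init` before K is known, a chicken-and-egg issue that the paper's rescaling factors are designed to handle more carefully than this sketch does.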

"Intelligent Choice of the Number of Clusters in K-Means Clustering An Experimental Study with Different Cluster Spreads (Mark Ming-Tso Chiang and Boris Mirkin, 2010)" abstract:

The issue of determining “the right number of clusters” in K-Means has attracted considerable interest, especially in the recent years. Cluster intermix appears to be a factor most affecting the clustering results. This paper proposes an experimental setting for comparison of different approaches at data generated from Gaussian clusters with the controlled parameters of between- and within-cluster spread to model cluster intermix. The setting allows for evaluating the centroid recovery on par with conventional evaluation of the cluster recovery. The subjects of our interest are two versions of the “intelligent” K-Means method, ik-Means, that find the “right” number of clusters by extracting “anomalous patterns” from the data one-by-one. We compare them with seven other methods, including Hartigan’s rule, averaged Silhouette width and Gap statistic, under different between- and within-cluster spread-shape conditions. There are several consistent patterns in the results of our experiments, such as that the right K is reproduced best by Hartigan’s rule – but not clusters or their centroids. This leads us to propose an adjusted version of iK-Means, which performs well in the current experiment setting.
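The "anomalous pattern" extraction at the heart of ik-Means is easy to prototype. Below is a loose Python sketch of the procedure as it is usually described, not the authors' exact algorithm: clusters are peeled off one at a time, each seeded at the entity farthest from the grand mean, with the grand mean held fixed as a second, immovable centre. K is the number of extracted patterns that survive a size threshold; `min_size` is a hypothetical knob here (descriptions of the method typically discard singletons), and the function names are my own.

```python
# Illustrative sketch of anomalous-pattern extraction, under loose
# assumptions about stop rules and thresholds.
import numpy as np


def anomalous_pattern(X, reference, max_iter=100):
    """Peel one 'anomalous' cluster off X against a fixed reference point.

    Alternates between assigning each point to the anomalous centre or the
    reference (whichever is closer) and recentring the anomalous cluster.
    The reference never moves; that is what distinguishes this from 2-means.
    """
    def sq(points, p):
        return ((points - p) ** 2).sum(axis=1)

    centre = X[np.argmax(sq(X, reference))]   # seed: farthest entity from reference
    in_pattern = sq(X, centre) < sq(X, reference)
    for _ in range(max_iter):
        new_centre = X[in_pattern].mean(axis=0)
        new_in = sq(X, new_centre) < sq(X, reference)
        if not new_in.any():                  # degenerate case: keep previous pattern
            return in_pattern, centre
        if np.array_equal(new_in, in_pattern):
            return new_in, new_centre
        in_pattern, centre = new_in, new_centre
    return in_pattern, centre


def estimate_k(X, min_size=2):
    """Estimate K by repeatedly extracting anomalous patterns.

    Patterns smaller than min_size (a hypothetical threshold standing in
    for the usual discarding of singletons) do not count towards K.
    """
    reference = X.mean(axis=0)                # grand mean, fixed throughout
    remaining, centres = X.copy(), []
    while len(remaining):
        mask, centre = anomalous_pattern(remaining, reference)
        if mask.sum() >= min_size:
            centres.append(centre)
        remaining = remaining[~mask]
    return len(centres), np.array(centres)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Three well-separated spherical Gaussian clusters in the plane.
    X = np.vstack([rng.normal(c, 0.5, size=(80, 2))
                   for c in [(0.0, 0.0), (6.0, 0.0), (3.0, 5.0)]])
    k, centres = estimate_k(X)
    print("Estimated K:", k)
```

The surviving centres can then seed an ordinary K-Means run, which is how the method doubles as an initialisation strategy as well as a way of choosing K.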

Date: 14/12/2018
Time: 16:00
Location: LB250
