One of the typical questions in clustering is how to choose the number of clusters. There are various methods to select it automatically, or even to claim optimality. One of the classical ones is Ward's method 1, 2, 3; another is the silhouette 4; and there are more besides.

There is a good Wikipedia page on the subject.
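As an illustration (not from the post), choosing the number of clusters by the silhouette could look like this; the dataset, the k-means clusterer, and the range of k tried are all assumptions for the sketch, using scikit-learn:

```python
# Sketch: pick k by maximizing the mean silhouette score.
# Assumes scikit-learn; the blob data is a toy stand-in.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

scores = {}
for k in range(2, 9):  # the silhouette needs at least 2 clusters
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

# Take the k with the highest mean silhouette.
best_k = max(scores, key=scores.get)
print(best_k)
```

In practice one would also look at the whole score-vs-k curve, not only its maximum, since nearly-tied values can make the argmax unstable.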

Also the *cubic clustering criterion* seems interesting (taken from 5): “The method I use is to use CCC (Cubic Clustering Criteria). I look for CCC to increase to a maximum as I increment the number of clusters by 1, and then observe when the CCC starts to decrease. At that point I take the number of clusters at the (local) maximum. This would be similar to using a scree plot to pick the number of principal components.

*Cubic Clustering Criterion* (CCC), SAS Technical Report #108.

- n = number of observations
- n_k = number in cluster k
- p = number of variables
- q = number of clusters
- X = n×p data matrix
- M = q×p matrix of cluster means
- Z = n×q cluster-indicator matrix (z_ik = 1 if obs. i is in cluster k)

Assume each variable has mean 0. Then Z'Z = diag(n_1, …, n_q) and M = (Z'Z)^(-1) Z'X.

- SS(total) matrix: T = X'X
- SS(between clusters) matrix: B = M'(Z'Z)M
- SS(within clusters) matrix: W = T − B

R² = 1 − trace(W)/trace(T) (trace = sum of diagonal elements)

Stack the columns of X into one long column and regress it on the Kronecker product of Z with the p×p identity matrix; the R² of this regression is the same R². The CCC idea is to compare the R² you get for a given set of clusters with the R² you would get by clustering a uniformly distributed set of points in p-dimensional space.”
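The quoted recipe is easy to check numerically. The sketch below, with illustrative random data and labels (not from the report), computes R² from T, B, and W as defined above, and then again via the Kronecker-product regression, confirming that the two agree:

```python
# Sketch of the R² computation from the quoted SAS report, in NumPy.
# The data X and the labels are random placeholders for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 90, 3, 3
X = rng.normal(size=(n, p))
labels = rng.integers(0, q, size=n)

X = X - X.mean(axis=0)                # each variable has mean 0
Z = np.eye(q)[labels]                 # z_ik = 1 if obs. i is in cluster k
M = np.linalg.inv(Z.T @ Z) @ Z.T @ X  # q×p matrix of cluster means

T = X.T @ X                           # SS(total) matrix
B = M.T @ (Z.T @ Z) @ M               # SS(between clusters) matrix
W = T - B                             # SS(within clusters) matrix
r2 = 1 - np.trace(W) / np.trace(T)

# Same R² via the regression described in the quote: stack X into one
# long vector and regress on kron(Z, I_p).
y = X.flatten()                       # observations stacked row by row
D = np.kron(Z, np.eye(p))
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
r2_reg = 1 - np.sum((y - D @ beta) ** 2) / np.sum(y ** 2)

print(r2, r2_reg)                     # the two values coincide
```

The agreement is no accident: the least-squares coefficients recover exactly the cluster means M, so the regression's residual sum of squares equals trace(W) and its total sum of squares equals trace(T).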

**UPDATE:** there is an interesting blog post on the gap statistic and prediction strength.