Comparison of hierarchical cluster analysis methods by cophenetic correlation

Abstract

Purpose

This study proposes the best clustering method(s) for different distance measures under two different conditions using the cophenetic correlation coefficient.

Methods

Under the first condition, the data follow a multivariate standard normal distribution without outliers for n=10, 50, 100; under the second condition, the data contain 5% outliers for the same sample sizes. The proposed method is applied to simulated multivariate normal data via MATLAB software.

Results

According to the results of the simulation, the average method (especially for n=10) and the centroid method (especially for n=50 and n=100) are recommended under both conditions.

Conclusions

This study aims to contribute to the literature by supporting better decisions on the selection of appropriate clustering methods through the use of subgroup sizes, numbers of variables, subgroup means, and variances.

1 Introduction

Classification, in its widest sense, has to do with forms of relatedness and with the organization and display of those relations in a useful manner. The items to be studied could be anything: people, bacteria, religions, books, etc. The attributes in each case would be those features of the items that are of interest for the purpose of the study [1]. Classifications are generally pictured in the form of hierarchical trees, also called dendrograms. A dendrogram is the graphical representation of an ultrametric (= cophenetic) matrix; dendrograms can therefore be compared to one another by comparing their cophenetic matrices [2].

Cluster Analysis (CA), Principal Components Analysis (PCA) and Discriminant Analysis (DA) are three of the primary methods of modern multivariate analysis. Because of its utility, clustering has emerged as one of the leading methods of multivariate analysis [3].

Cluster analysis is a multivariate statistical technique that was originally developed for biological classification. The biologists Robert Sokal and Peter Sneath published their seminal text ‘Principles of Numerical Taxonomy’ in 1963. Sokal and Sneath demonstrated that cluster analysis could be used to efficiently classify a data set containing all relevant characteristics of an organism. Once the organisms had been classified based on these characteristics, it could be determined in what ways they differed and whether they belonged to different species. In this way, Sokal and Sneath asserted, researchers could trace the path of evolution from one species to another [4].

In this study, two measures of cluster ‘goodness’ or quality are used for clustering. One type of measure allows us to compare different sets of clusters without reference to external knowledge and is called an internal quality measure; it is used here as a measure of ‘overall similarity’ based on the pairwise similarity of documents in a cluster. The other type of measure allows evaluating how well the clustering works by comparing the groups produced by the clustering technique to known classes. This type is called an external quality measure and is not within the scope of this study [5].

The joining or tree clustering method uses the dissimilarities (similarities) or distances (Euclidean distance, squared Euclidean distance, city-block (Manhattan) distance, Chebychev distance, power distance, Mahalanobis distance, etc.) between objects when forming the clusters. Similarities are a set of rules that serve as criteria for grouping or separating items. These distances (similarities) can be based on a single dimension or multiple dimensions, with each dimension representing a rule or condition for grouping objects. The joining algorithm does not ‘care’ whether the distances that are ‘fed’ to it are actual real distances, or some other derived measure of distance that is more meaningful to the researcher; and it is up to the researcher to select the right method for his/her specific application [6].
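As an illustration, most of these measures are available directly in MATLAB, the environment used for the simulations in this study; a minimal sketch using the Statistics Toolbox function pdist (the data matrix X is a synthetic example, not the study's data):

    % Pairwise distances between the rows of X under several of the
    % measures named above.
    rng(1);                                % for reproducibility
    X = mvnrnd(zeros(1,3), eye(3), 10);    % 10 observations, 3 variables

    dEuc  = pdist(X, 'euclidean');         % Euclidean distance
    dSqE  = pdist(X, 'euclidean').^2;      % squared Euclidean distance
    dCity = pdist(X, 'cityblock');         % city-block (Manhattan) distance
    dCheb = pdist(X, 'chebychev');         % Chebychev (maximum) distance
    dPow  = pdist(X, 'minkowski', 3);      % power (Minkowski) distance, p = 3
    dMah  = pdist(X, 'mahalanobis');       % Mahalanobis distance

Each call returns the 45 pairwise distances among the 10 observations as a row vector, which is the input format expected by the clustering routines used below.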

The next step is to identify how one can find the natural clusters among items characterized by many attributes. A number of cluster analysis procedures (single linkage (nearest neighbor), complete linkage (furthest neighbor), unweighted pair-group average (UPGMA), weighted pair-group average (WPGMA), unweighted pair-group centroid (UPGMC), weighted pair-group centroid (median), Ward’s method, etc.) are available; many of these begin with an n-dimensional space in which each entity is represented by a single point. The dimensions in the space represent the characteristics upon which the entities are to be compared. Similarity between entities can be measured by: (1) the correlation of entities’ scores on the dimensions (cophenetic correlation) or (2) the distance between points in the space (points closest to each other are most similar) [7, 8].
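These procedures correspond to the method names of the MATLAB linkage function; a minimal sketch, continuing from the previous block (each Z encodes one dendrogram):

    % Build one dendrogram per agglomerative method from the same distances.
    Y = pdist(X, 'euclidean');           % X as in the previous sketch
    Zsingle   = linkage(Y, 'single');    % single linkage (nearest neighbor)
    Zcomplete = linkage(Y, 'complete');  % complete linkage (furthest neighbor)
    Zaverage  = linkage(Y, 'average');   % unweighted pair-group average (UPGMA)
    Zweighted = linkage(Y, 'weighted');  % weighted pair-group average (WPGMA)
    Zcentroid = linkage(Y, 'centroid');  % unweighted pair-group centroid (UPGMC)
    Zmedian   = linkage(Y, 'median');    % weighted pair-group centroid (median)
    Zward     = linkage(Y, 'ward');      % Ward's minimum variance method
    dendrogram(Zaverage);                % plot the tree for one of the methods
    % note: centroid, median, and ward are intended for Euclidean input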

Suppose that the original data {X_i} have been modeled using a cluster method to produce a dendrogram {T_i}; that is, a simplified model in which data that are ‘close’ have been grouped into a hierarchical tree. Define the following distance measures: x(i,j) = ||X_i − X_j||, the ordinary Euclidean distance between the i th and j th observations, and t(i,j), the dendrogrammatic distance between the model points T_i and T_j, i.e., the height of the node at which these two points are first joined together. Then, letting x̄ be the average of the x(i,j), and t̄ the average of the t(i,j), the cophenetic correlation coefficient c is defined as in (1) [9].

$$c = \frac{\sum_{i<j}\bigl(x(i,j)-\bar{x}\bigr)\bigl(t(i,j)-\bar{t}\bigr)}{\sqrt{\Bigl[\sum_{i<j}\bigl(x(i,j)-\bar{x}\bigr)^{2}\Bigr]\Bigl[\sum_{i<j}\bigl(t(i,j)-\bar{t}\bigr)^{2}\Bigr]}}. \qquad (1)$$
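As a numerical check of (1), the MATLAB function cophenet cited in [9] returns both the coefficient and the cophenetic distances t(i,j), so the built-in value can be compared against a direct evaluation of the formula; a minimal sketch:

    % Compare MATLAB's cophenet with a direct evaluation of equation (1).
    Y = pdist(X);                  % x(i,j): Euclidean distances, i < j
    Z = linkage(Y, 'average');     % the dendrogram {T_i}
    [c, t] = cophenet(Z, Y);       % toolbox coefficient and t(i,j) vector

    xc = Y - mean(Y);              % x(i,j) - xbar
    tc = t - mean(t);              % t(i,j) - tbar
    cDirect = sum(xc .* tc) / sqrt(sum(xc.^2) * sum(tc.^2));  % equation (1)
    % c and cDirect agree to within floating-point rounding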

Since its introduction by Sokal and Rohlf [10], the cophenetic correlation coefficient has been widely used in numerical phenetic studies, both as a measure of degree of fit of a classification to a set of data and as a criterion for evaluating the efficiency of various clustering techniques [11]. In statistics, and especially in biostatistics, cophenetic correlation (more precisely, the cophenetic correlation coefficient) is a measure of how faithfully a dendrogram preserves the pairwise distances between the original unmodeled data points. Although it has been most widely applied in the field of biostatistics (typically to assess cluster-based models of DNA sequences, or other taxonomic models), it can also be used in other fields of inquiry where raw data tend to occur in clumps, or clusters. This coefficient has also been proposed for use as a test for nested clusters [12].

The problem of comparing classifications with numerical methods is not new; the first effective numerical method known to us is the ‘cophenetic correlation’ technique of Sokal and Rohlf [10]. Beginning with the development of cophenetic correlation, methods for the comparison of dendrograms have been the object of strong interest. Baker [13] investigated the impact of observational errors on the dendrograms produced by the complete linkage and single linkage hierarchical grouping techniques. The goodness of fit of the dendrograms was measured by means of the Goodman-Kruskal gamma coefficient. The gamma coefficients indicated that the single linkage grouping technique was more sensitive to the type of data errors employed than the complete linkage technique. Hubert [14] compared two rank orderings of the object pairs. He tested the hypothesis that the given set of proximity values had been assigned randomly by referring the Goodman-Kruskal rank correlation γ statistic to an approximate permutation distribution. Kuiper and Fisher [15] compared six hierarchical clustering procedures (single linkage, complete linkage, median, average linkage, centroid, and Ward’s method) for multivariate normal data, assuming that the true number of clusters was known. The authors used the Rand index, which gives the proportion of correct groupings, to compare the clustering methods. In their study, Ward’s method and the complete linkage method performed best for clusters of equal sizes, while the centroid and average linkage methods performed best for very unequal cluster sizes. Blashfield [16] compared four types of hierarchical clustering methods (single linkage, complete linkage, average linkage, and Ward’s method) for accuracy in recovery of original population clusters. He used Cohen’s kappa statistic to measure the accuracy of the clustering methods. According to his results, Ward’s method performed significantly better than the other clustering procedures, and average linkage gave relatively poor results. According to Milligan [17], complete linkage and Ward’s method reacted badly when outliers were introduced into the simulated data.

Hands and Everitt [18] compared five hierarchical clustering techniques (single linkage, complete linkage, average, centroid, and Ward’s method) on multivariate binary data. They found that Ward’s method was the best overall among the hierarchical methods. Yao [19] discussed six classical clustering algorithms (k-means, SOM, EM-based clustering, classification EM clustering, fuzzy k-means, and leader clustering) together with different combination scenarios of these algorithms, evaluating them by the number of cluster categories, classification accuracy, and cluster entropy. Ferreira and Hitchcock [20] compared the performance of four major hierarchical methods (single linkage, complete linkage, average linkage, and Ward’s method) for clustering functional data, using the Rand index to compare the performance of each clustering method. According to their study, Ward’s method was usually the best, while average linkage performed best in some special situations, in particular when the number of clusters is overspecified. Milligan and Cooper [21] used four agglomerative hierarchical clustering methods (single link, complete link, group average (UPGMA), and Ward’s minimum variance method) to generate partition solutions, which formed one factor in their overall design. They found that the single link technique was least effective, while the group average and Ward’s methods gave the best overall recovery.

Considering the studies in the literature and the importance of using the most suitable clustering method under different conditions (sample size, number of variables, and distance measure), a detailed simulation study is undertaken. This study gives more insight into the functioning of the clustering methods under different conditions. The purpose of this research is to identify the best clustering method under different conditions.

2 Method

In this study, seven cluster analysis methods are compared by the cophenetic correlation coefficient across sample sizes (n=10, n=50, and n=100), numbers of variables (x=3, x=5, and x=10), and distance measures via a simulation study. The simulation program was developed by the authors in the MATLAB software development environment. There are 567 different simulation scenarios (7 clustering methods × 9 distance measures × 3 sample sizes × 3 numbers of variables = 567) and 100,000/n replications for each scenario. Performance is monitored under the two conditions reported in Table 1 and Table 2, combining the 7 clustering methods and 9 distance measures with various settings of subgroup means, variances, sample sizes, and variable numbers simultaneously, evaluated by the cophenetic correlation coefficient.

Table 1 The cophenetic correlation coefficient values for μ=0, σ²=1 (without outliers)
Table 2 The cophenetic correlation coefficient values for μ=0, σ²=1 (with outliers)
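To make the design concrete, the following sketch shows the core of one such scenario for the condition without outliers. The seven method names match those used above; the nine distance names are an assumption for illustration, since the paper names only six measures plus ‘etc.’:

    % One scenario (fixed n and x, no outliers): average the cophenetic
    % correlation over 100,000/n replications per method-distance pair.
    n = 50; x = 5; reps = round(100000/n);
    methods = {'single','complete','average','weighted', ...
               'centroid','median','ward'};
    dists = {'euclidean','cityblock','chebychev','minkowski', ...        % assumed
             'mahalanobis','cosine','correlation','spearman','hamming'}; % set of 9
    cc = zeros(numel(methods), numel(dists));
    for rep = 1:reps
        X = mvnrnd(zeros(1,x), eye(x), n);        % mu = 0, sigma^2 = 1
        for j = 1:numel(dists)
            Y = pdist(X, dists{j});
            for i = 1:numel(methods)
                Z = linkage(Y, methods{i});       % (centroid/median/ward
                cc(i,j) = cc(i,j) + cophenet(Z, Y)/reps;  % assume Euclidean input)
            end
        end
    end
    % cc(i,j) estimates the mean cophenetic correlation of method i under
    % distance j, as tabulated in Table 1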

For the 567 different simulation scenarios, the data were drawn from a multivariate normal distribution with μ=0, σ²=1, without and with outliers (Table 1 and Table 2, respectively). The data set with outliers is obtained according to Dixon’s [22] ‘outlier model’ as (N − r)N(0,1) + rN(0,5), with r = [0.5 + 0.05N]; that is, while 95% of the data set contains no outliers, 5% of the observations are outliers.
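A minimal sketch of this contamination step, assuming the bracket in r = [0.5 + 0.05N] denotes rounding to the nearest integer, so that roughly 5% of the rows are outlying:

    % Dixon-style contaminated sample: (N - r) rows from N(0,1) and r rows
    % from N(0,5), i.e., the outliers have variance 5 in every variable.
    N = 100; x = 5;
    r = floor(0.5 + 0.05*N);              % [0.5 + 0.05N], about 5% of N
    X = randn(N, x);                      % the N(0,1) body of the sample
    X(1:r, :) = sqrt(5)*randn(r, x);      % overwrite r rows with N(0,5) draws
    X = X(randperm(N), :);                % shuffle the outliers into the sample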

3 Results and discussion

All numerical results, obtained by running the simulation program, are given in Table 1 and Table 2. According to these tables, the average method gives the best results for all measures and all numbers of variables under both distributions when the sample size is n=10. Moreover, increasing the sample size to n=50 and n=100 favors the complete, weighted, and centroid methods for all measures. However, the cophenetic correlation coefficient for the Mahalanobis measure could not be calculated under either distribution when there are 10 variables and the sample size is n=10; the same outcome was observed in more than three runs of the simulation program. A plausible explanation, though not verified here, is that with as many variables as observations the sample covariance matrix is singular, so the Mahalanobis distance cannot be computed.
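This singularity is easy to check, under the assumption that the simulation estimated Mahalanobis distances from the sample covariance matrix:

    % With as many variables as observations the sample covariance matrix
    % cannot be positive definite, so the Mahalanobis distance is undefined.
    X = mvnrnd(zeros(1,10), eye(10), 10);   % x = 10 variables, n = 10
    rank(cov(X))                            % at most n - 1 = 9: singular
    % pdist(X, 'mahalanobis') then fails because cov(X) is not positive
    % definite, leaving the cophenetic coefficient undefined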

4 Conclusion

In general, researchers, especially nonstatisticians, use cluster analysis methods and distance measures under many different conditions. Moreover, they tend to choose the best-known cluster analysis methods and distance measures available in statistical packages without evaluating their validity under those conditions. When such conditions are ignored, the inferences drawn are dubious and may lead decision-makers to incorrect decisions. With respect to the selection of a distance measure, the researcher must be aware that the choice can often significantly affect the results of the clustering; for example, some distance measures are inappropriate when certain conditions on the variables are not met. Determining which distance measure should be used under various conditions is therefore the main motivation of researchers working on this subject.

One may conclude from the results of this study, which are similar to the findings of Johnson and Wichern [23], that the data set with outliers yields higher cophenetic correlation values than the data set without outliers.

This study aims to contribute to the literature by supporting better decisions on the selection of appropriate clustering methods through the use of subgroup sizes, numbers of variables, subgroup means, and variances.

References

  1. Carmichael JW, George JA, Julius RS: Finding natural clusters. Syst. Zool. 1968, 17(2):144–150. 10.2307/2412355

  2. Lapointe FJ, Legendre P: Comparison tests for dendrograms: a comparative evaluation. J. Classif. 1995, 12: 265–282. 10.1007/BF03040858

  3. Kettenring JR: The practice of cluster analysis. J. Classif. 2006, 23: 3–30. 10.1007/s00357-006-0002-6

  4. Gunnarsson, J: Portfolio-Based Segmentation and Consumer Behaviour: Empirical Evidence and Methodological Issues. Ph.D. Dissertation, Stockholm School of Economics, The Economic Research Institute, p. 274 (1999)

  5. Steinbach M, Karypis G, Kumar V: A comparison of document clustering techniques. Text mining workshop. Proc. of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2000) 2000, 20–23.

  6. Hill T, Lewicki P: STATISTICS: Methods and Applications. StatSoft, Tulsa; 2007.

  7. Lessig VP: Comparing cluster analyses with cophenetic correlation. J. Mark. Res. 1972, 9(1):82–84. 10.2307/3149615

  8. Sneath PHA, Sokal RR: Numerical Taxonomy: The Principles and Practice of Numerical Classification. Freeman, San Francisco; 1973:573.

  9. MathWorks Statistics Toolbox: http://www.mathworks.com/help/stats/cophenet.html (2012)

  10. Sokal RR, Rohlf FJ: The comparison of dendrograms by objective methods. Taxon 1962, 11: 33–40. 10.2307/1217208

  11. Farris JS: On the cophenetic correlation coefficient. Syst. Zool. 1969, 18(3):279–285. 10.2307/2412324

  12. Rohlf FJ, Fisher DR: Tests for hierarchical structure in random data sets. Syst. Zool. 1968, 17: 407–412. 10.2307/2412038

  13. Baker FB: Stability of two hierarchical grouping techniques - case I: sensitivity to data errors. J. Am. Stat. Assoc. 1974, 69: 440–445.

  14. Hubert L: Approximate evaluation techniques for the single-link and complete-link hierarchical clustering procedures. J. Am. Stat. Assoc. 1974, 69: 698–704. 10.1080/01621459.1974.10480191

  15. Kuiper FK, Fisher LA: A Monte Carlo comparison of six clustering procedures. Biometrics 1975, 31: 777–783. 10.2307/2529565

  16. Blashfield RK: Mixture model tests of cluster analysis: accuracy of four agglomerative hierarchical methods. Psychol. Bull. 1976, 83: 377–388.

  17. Milligan GW: An examination of the effect of six types of error perturbation on fifteen clustering algorithms. Psychometrika 1980, 45: 325–342. 10.1007/BF02293907

  18. Hands S, Everitt B: A Monte Carlo study of the recovery of cluster structure in binary data by hierarchical clustering techniques. Multivar. Behav. Res. 1987, 22: 235–243. 10.1207/s15327906mbr2202_6

  19. Yao, KB: A comparison of clustering methods for unsupervised anomaly detection in network traffic. Ph.D. Thesis, University of Copenhagen (2006)

  20. Ferreira L, Hitchcock DB: A comparison of hierarchical methods for clustering functional data. Commun. Stat., Simul. Comput. 2009, 38: 1925–1949. 10.1080/03610910903168603

  21. Milligan GW, Cooper MC: A study of standardization of variables in cluster analysis. J. Classif. 1988, 5: 181–204. 10.1007/BF01897163

  22. Dixon WJ: Analysis of extreme value. Ann. Math. Stat. 1950, 21: 488–506. 10.1214/aoms/1177729747

  23. Johnson RA, Wichern DW: Applied Multivariate Statistical Analysis. 5th edition. Prentice Hall, New York; 2002.


Acknowledgements

Dedicated to Professor Hari M Srivastava.

The authors would like to thank Rıdvan Ünal for his technical help. He is a lecturer at Afyon Kocatepe University, Faculty of Science, Department of Physics, Afyonkarahisar, Turkey.

Author information

Corresponding author

Correspondence to Sinan Saraçli.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

SS made intellectual contributions to carry out this study and performed the simulation study. ND determined the research design and coordinated the whole process. İD made theoretical contributions and performed the statistical analysis. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Saraçli, S., Doğan, N. & Doğan, İ. Comparison of hierarchical cluster analysis methods by cophenetic correlation. J Inequal Appl 2013, 203 (2013). https://doi.org/10.1186/1029-242X-2013-203
