David Zelený


# K-means (non-hierarchical classification)
**K-means** is a non-hierarchical clustering algorithm, based on Euclidean distances among samples, which uses an iterative procedure to find the solution. It minimizes the total error sum of squares (TESS), the same objective function as Ward's algorithm. The number of clusters (//k//) is defined by the user. Distances other than Euclidean can also be used, but they first need to be converted into metric distances and submitted to PCoA. For example, Bray-Curtis distance is not metric; one may instead calculate square-rooted Bray-Curtis distances (which are metric), submit them to PCoA, and then use all PCoA axes as the input matrix for K-means instead of the raw data. Like other iterative methods (e.g. NMDS), the K-means algorithm can get trapped in a local minimum, so it is useful to repeat the analysis many times and choose the solution with the lowest overall TESS.

The method can run in two modes, unsupervised or supervised. In the unsupervised mode, it searches for an optimal clustering of samples into a predefined number of clusters; in the supervised mode, the user supplies the //k// centroids (e.g. typical samples), and the method searches for an optimal way to cluster the samples in the dataset around these centroids.
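To illustrate the repeat-and-keep-the-best strategy described above, here is a minimal NumPy sketch of the standard K-means iteration (Lloyd's algorithm) with multiple random restarts; the function names are my own, and this is an illustrative sketch, not the implementation used in this tutorial:

```python
import numpy as np

def kmeans_once(X, k, n_iter=100, seed=None):
    """One K-means run (Lloyd's algorithm); returns (labels, TESS)."""
    rng = np.random.default_rng(seed)
    # initialize centroids as k randomly chosen samples
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each sample to its nearest centroid (Euclidean distance)
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # recompute each centroid as the mean of its assigned samples
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):  # converged to a (local) minimum
            break
        centroids = new
    # total error sum of squares of the final partition
    tess = ((X - centroids[labels]) ** 2).sum()
    return labels, tess

def kmeans_best_of(X, k, n_restarts=25, seed=0):
    """Repeat K-means from random starts; keep the lowest-TESS solution."""
    best = None
    for r in range(n_restarts):
        labels, tess = kmeans_once(X, k, seed=seed + r)
        if best is None or tess < best[1]:
            best = (labels, tess)
    return best
```

Because each run starts from different random centroids, individual runs may converge to different local minima; keeping the partition with the lowest TESS across many restarts is exactly the safeguard the text recommends (in R's `stats::kmeans`, the analogous option is `nstart`).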
en/non-hier.txt · Last modified: 2019/04/06 18:53 by David Zelený