---
title: "Learning Clusterization"
output: rmarkdown::html_document
vignette: >
  %\VignetteIndexEntry{Learning Clusterization}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r setup, include=FALSE}
library(LearnClust)
knitr::opts_chunk$set(collapse = TRUE, comment = "#>")
```

The LearnClust package helps users learn how clustering algorithms reach their solutions.

1. The package implements distances between clusters.
2. It includes main functions that return the solution of each algorithm.
3. It contains `.details` functions that explain the process followed to reach the solution, helping the user understand how it is obtained.

## Datasets

We initialize some datasets to use with the algorithms:

```{r}
cluster1 <- matrix(c(1,2),ncol=2)
cluster2 <- matrix(c(2,4),ncol=2)
weight <- c(0.2,0.8)

vectorData <- c(1,1,2,3,4,7,8,8,8,10)
# vectorData <- c(1:10)

matrixData <- matrix(vectorData,ncol=2,byrow=TRUE)
print(matrixData)

dfData <- data.frame(matrixData)
print(dfData)
plot(dfData)

cMatrix <- matrix(c(2,4,4,2,3,5,1,1,2,2,5,5,1,0,1,1,2,1,2,4,5,1,2,1), ncol=3, byrow=TRUE)
cDataFrame <- data.frame(cMatrix)
```

## Distances

The package includes different types of distance:

- Euclidean Distance

```{r}
edistance(cluster1,cluster2)
```

- Manhattan Distance

```{r}
mdistance(cluster1,cluster2)
```

- Canberra Distance

```{r}
canberradistance(cluster1,cluster2)
```

- Chebyshev Distance

```{r}
chebyshevDistance(cluster1,cluster2)
```

- Octile Distance

```{r}
octileDistance(cluster1,cluster2)
```

Each function has a `.details` version that explains how the calculation is done.

There are also weighted versions, in which a weight is applied to each component. They are used by the correlative algorithm described below:

- Euclidean Distance with weights applied.

```{r}
edistanceW(cluster1,cluster2,weight)
```

- Manhattan Distance with weights applied.

```{r}
mdistanceW(cluster1,cluster2,weight)
```

- Canberra Distance with weights applied.

```{r}
canberradistanceW(cluster1,cluster2,weight)
```

- Chebyshev Distance with weights applied.

```{r}
chebyshevDistanceW(cluster1,cluster2,weight)
```

- Octile Distance with weights applied.

```{r}
octileDistanceW(cluster1,cluster2,weight)
```

## Agglomerative Hierarchical Clustering

This algorithm uses several functions that follow the theoretical process:

1. We prepare the data to be used by the algorithm, creating a cluster from each value. The input can be a vector, a matrix, or a data.frame.

```{r}
list <- toList(vectorData)
# list <- toList(matrixData)
# list <- toList(dfData)
print(list)
```

2. We compute the distance matrix over the clusters from the first step, using the chosen distance and approach types.

```{r}
matrixDistance <- mdAgglomerative(list,'MAN','AVG')
print(matrixDistance)
```

3. We take the minimum value of the distance matrix, that is, the distance between the two closest clusters.

```{r}
minDistance <- minDistance(matrixDistance)
print(minDistance)
```

4. Using the minimum distance, we look for the pair of clusters separated by it. These are the clusters that will be joined.

```{r}
groupedClusters <- getCluster(minDistance, matrixDistance)
print(groupedClusters)
```

5. These two clusters are merged into a new one.

```{r}
updatedClusters <- newCluster(list, groupedClusters)
print(updatedClusters)
```

6. We add the new cluster to the solution and repeat steps 2 to 5 until only one cluster remains.

The complete function implementing the algorithm is:

```{r}
agglomerativeExample <- agglomerativeHC(dfData,'EUC','MAX')
plot(agglomerativeExample$dendrogram)
print(agglomerativeExample$clusters)
print(agglomerativeExample$groupedClusters)
```
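As an optional cross-check (outside LearnClust), base R's `stats::hclust` implements the same agglomerative scheme; its `"complete"` linkage corresponds to the `'MAX'` approach. This is a sketch, assuming the package follows the standard agglomerative definitions, so the dendrograms should agree in structure:

```{r}
# Base-R counterpart of the 'EUC' / 'MAX' configuration above.
hcCheck <- hclust(dist(dfData, method = "euclidean"), method = "complete")
plot(hcCheck)
```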
The package includes some auxiliary functions used to implement the algorithm:

- A function that updates the active clusters. If two clusters have been joined, they will not be used again as individual clusters.

```{r}
cleanClusters <- usefulClusters(updatedClusters)
print(cleanClusters)
```

- Two functions that calculate the distance between clusters from the given distance and approach types.

```{r}
distances <- c(2,4,6,8)
clusterDistanceByApproach <- clusterDistanceByApproach(distances,'AVG')
print(clusterDistanceByApproach)
```

`clusterDistanceByApproach` reduces a vector of distances to a single value according to the approach type, which can be "MAX", "MIN", or "AVG".
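In plain R terms, the three approach types presumably reduce the vector of pairwise distances as follows (a minimal sketch, consistent with the standard linkage definitions):

```{r}
distances <- c(2, 4, 6, 8)
max(distances)   # 'MAX' approach (complete linkage): 8
min(distances)   # 'MIN' approach (single linkage):   2
mean(distances)  # 'AVG' approach (average linkage):  5
```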
```{r}
clusterDistance <- clusterDistance(cluster1,cluster2,'MAX','MAN')
print(clusterDistance)
```

`clusterDistance` computes the distance between the elements of one cluster and those of the other, using the given distance type, which can be "EUC", "MAN", "CAN", "CHE", or "OCT".

### Agglomerative Hierarchical Clustering .DETAILS

The `.details` functions explain each step of the algorithm:

1. How clusters are initialized. The input data can be a vector, a matrix, or a data.frame.

```{r}
list <- toList.details(vectorData)
# list <- toList(matrixData)
# list <- toList(dfData)
print(list)
```

2. How the distance matrix is built.

```{r}
matrixDistance <- mdAgglomerative.details(list,'MAN','AVG')
```

3. How the minimum distance is chosen, ignoring zero values.

```{r}
minDistance <- minDistance.details(matrixDistance)
```

4. How, using the minimum distance, the clusters separated by it are found.

```{r}
groupedClusters <- getCluster.details(minDistance, matrixDistance)
```

5. How those clusters are merged into a new one, removing the previous ones from the initial list.

```{r}
updatedClusters <- newCluster.details(list, groupedClusters)
```

6. We add the new cluster to the solution and repeat steps 2 to 5 until only one cluster remains.

The complete function explaining the algorithm is:

```{r}
agglomerativeExample <- agglomerativeHC.details(vectorData,'EUC','MAX')
```

## Divisive Hierarchical Clustering

This algorithm uses several functions that follow the theoretical process:

1. We prepare the data to be used by the algorithm, creating a cluster from each value. The input can be a vector, a matrix, or a data.frame.

```{r}
# list <- toListDivisive(vectorData)
# list <- toListDivisive(matrixData)
list <- toListDivisive(dfData[1:4,])
print(list)
```

2. From the initial clusters, the algorithm builds every possible subcluster by joining them.

```{r}
clustersList <- initClusters(list)
print(clustersList)
```

3. We compute the distance matrix over the subclusters from the second step, using the chosen distance and approach types.

```{r}
matrixDistance <- mdDivisive(clustersList,'MAN','AVG',list)
print(matrixDistance)
```

4. We take the maximum value of the distance matrix, that is, the distance between the farthest clusters.

```{r}
maxDistance <- maxDistance(matrixDistance)
print(maxDistance)
```

5. Using the maximum distance, we look for the pair of clusters separated by it. These are the clusters that will be divided.

```{r}
dividedClusters <- getClusterDivisive(maxDistance, matrixDistance)
print(dividedClusters)
```

6. Two new subclusters are created from the initial one and added to the solution.

7. We repeat steps 2 to 6 until no cluster can be divided further.

The complete function implementing the algorithm is:

```{r}
divisiveExample <- divisiveHC(dfData[1:4,],'MAN','AVG')
print(divisiveExample)
```

The package uses the same auxiliary functions as the previous algorithm, plus one more:

- clusterDistanceByApproach
- clusterDistance
- complementaryClusters: checks whether the two clusters we are going to divide into are complementary, that is, every initial cluster is in one or the other, but never in both. This condition ensures that no cluster is lost when the division is made.

```{r}
data <- c(1,2,1,3,1,4,1,5)
components <- toListDivisive(data)
cluster1 <- matrix(c(1,2,1,3),ncol=2,byrow=TRUE)
cluster2 <- matrix(c(1,4,1,5),ncol=2,byrow=TRUE)
cluster3 <- matrix(c(1,6,1,7),ncol=2,byrow=TRUE)

complementaryClusters(components,cluster1,cluster2)
complementaryClusters(components,cluster1,cluster3)
```

Its `.details` version explains how the function checks this condition:

```{r}
complementaryClusters.details(components,cluster1,cluster2)
```

### Divisive Hierarchical Clustering .DETAILS

The `.details` functions explain each step of the algorithm:

1. How clusters are initialized. The input data can be a vector, a matrix, or a data.frame.

```{r}
# list <- toListDivisive.details(vectorData)
# list <- toListDivisive(matrixData)
list <- toListDivisive(dfData[1:4,])
print(list)
```

2. How all possible subclusters to be divided are created.

```{r}
clustersList <- initClusters.details(list)
```

3. How the distance matrix is built.

```{r}
matrixDistance <- mdDivisive.details(clustersList,'MAN','AVG',list)
```

4. How the maximum distance, identifying the farthest clusters, is chosen.

```{r}
maxDistance <- maxDistance.details(matrixDistance)
```

5. How, using the maximum distance, the clusters separated by it are found.

```{r}
dividedClusters <- getClusterDivisive.details(maxDistance, matrixDistance)
```

6. We add the new clusters to the solution and repeat steps 2 to 5 until no cluster can be divided further.

The complete function explaining the algorithm is:

```{r}
divisiveExample <- divisiveHC.details(dfData[1:4,],'MAN','AVG')
print(divisiveExample)
```

## Correlative Hierarchical Clustering

This example shows how the algorithm works step by step.

1. The input data is initialized by creating a cluster from each data frame row.

```{r}
initData <- initData(cDataFrame)
print(initData)
```

2. The algorithm checks whether the input target is valid; if not, it initializes a default target.

```{r}
target <- c(1,2,3)
initTarget <- initTarget(target,cDataFrame)
print(initTarget)
```

3. If requested, the algorithm normalizes the weight values.

```{r}
weight <- c(5,7,6)
weights <- normalizeWeight(TRUE,weight,cDataFrame)
print(weights)
```

4. It calculates the distances between clusters, applying the given weights and distance definition.

```{r}
cluster1 <- matrix(c(1,2,3),ncol=3)
cluster2 <- matrix(c(2,5,8),ncol=3)
weight <- c(3,7,4)

distance <- distances(cluster1,cluster2,'CHE',weight)
print(distance)
```

Finally, the complete algorithm sorts the distances and the clusters accordingly. It presents the solution as a sorted cluster list together with the distances, or as a dendrogram.

```{r}
target <- c(5,5,1)
weight <- c(3,7,5)

correlation <- correlationHC(cDataFrame, target, weight)
print(correlation$sortedValues)
print(correlation$distances)
plot(correlation$dendrogram)
```
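The underlying idea can be restated in a few lines of base R. This is an illustrative sketch only, assuming a weighted Euclidean score; `correlationHC` also normalizes the weights and supports other distance types:

```{r}
# Score each row by a weighted Euclidean distance to the target, then sort.
target <- c(5, 5, 1)
weight <- c(3, 7, 5)
scores <- apply(cDataFrame, 1, function(row) sqrt(sum(weight * (row - target)^2)))
cDataFrame[order(scores), ]  # rows ordered from closest to farthest
```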
## Correlative Hierarchical Clustering .DETAILS

This example shows how the algorithm works step by step.

1. How the input data is initialized.

```{r}
initData <- initData.details(cDataFrame)
```

2. How the algorithm checks whether the input target is valid and, if not, how it initializes a default one.

```{r}
targetValid <- c(1,2,3)
targetInvalid <- c(1,2)

initTarget <- initTarget.details(targetValid,cDataFrame)
initTarget <- initTarget.details(targetInvalid,cDataFrame)
```

3. How the normalization process is done.

```{r}
weight <- c(5,7,6)

weights <- normalizeWeight.details(TRUE,weight,cDataFrame)
weights <- normalizeWeight.details(FALSE,weight,cDataFrame)
weights <- normalizeWeight.details(FALSE,NULL,cDataFrame)
```

4. How the distances between clusters are calculated, applying the given weights and distance definition.

```{r}
cluster1 <- matrix(c(1,2,3),ncol=3)
cluster2 <- matrix(c(2,5,8),ncol=3)
weight <- c(3,7,4)

distance <- distances.details(cluster1,cluster2,'CHE',weight)
```

The complete function explaining the algorithm is:

```{r}
target <- c(5,5,1)
weight <- c(3,7,5)

correlation <- correlationHC.details(cDataFrame, target, weight)
print(correlation$sortedValues)
print(correlation$distances)
plot(correlation$dendrogram)
```
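As a final sanity check on step 4, the unweighted Chebyshev distance between the two example clusters can be reproduced in base R. The weighted line assumes each coordinate difference is scaled by its weight before taking the maximum, which should be confirmed against the `distances.details()` output:

```{r}
c1 <- c(1, 2, 3)
c2 <- c(2, 5, 8)
w  <- c(3, 7, 4)
max(abs(c1 - c2))      # standard Chebyshev distance: max(1, 3, 5) = 5
max(w * abs(c1 - c2))  # assumed weighted form: max(3, 21, 20) = 21 (unconfirmed)
```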