K-means clustering is one of the simplest unsupervised learning algorithms that solves the well-known clustering problem.

Before we venture into K-means, let’s first understand what clustering is.

**What is clustering?**

The idea behind clustering is straightforward. Clustering algorithms group together data points based on the similarities between them.

The data points are clustered in such a way that points belonging to the same cluster are more similar to each other, while points belonging to different clusters are very different from each other.

Clustering algorithms are unsupervised; they do not require any class labels.

There are various clustering algorithms. In this post, we discuss the most popular one: K-means.

**K-means:**

K-means is one of the common techniques for clustering where we iteratively assign points to different clusters.

Here, each data point is assigned to exactly one cluster, which is also known as hard clustering.

The k in the name is a hyperparameter specifying the exact number of clusters; it must be defined beforehand.

The primary objective of the algorithm is to minimize the intra-cluster distance. Each cluster has a centroid, and initially these centroids are selected randomly.

K-means assigns each data point to the centroid it is closest to. The metric used to measure closeness is Euclidean distance.
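To make “closest” concrete, here is a small NumPy sketch (with made-up points) that computes the Euclidean distance from one point to two candidate centroids and picks the nearer one:

```python
import numpy as np

# A single data point and two candidate centroids (made-up values)
point = np.array([1.0, 2.0])
centroids = np.array([[0.0, 0.0],
                      [4.0, 4.0]])

# Euclidean distance from the point to each centroid
distances = np.linalg.norm(centroids - point, axis=1)

# K-means would assign the point to the nearest centroid
closest = int(np.argmin(distances))
print(distances, closest)
```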

If you want to learn more about distance measures, I’ve written an article discussing various distance measures used in machine learning, with implementations in Python. You can read it here.

Since the primary objective of the algorithm is to minimize the intra-cluster distance, it groups data points into a cluster where the distance from the point to the centroid of the cluster is minimum.

The standard and most commonly used algorithm for K-means is Lloyd’s algorithm.

Let’s see the actual steps of the algorithm:

- The first step is to choose k points randomly from the dataset as the centroids of the clusters.
- Once we choose the centroids, the next step is to assign each data point to the centroid closest to it.
- Then recompute each centroid as the mean of the data points allocated to that cluster.
- Repeat steps 2 and 3 until the algorithm converges, i.e., until the cluster centroids no longer change.
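The steps above can be sketched from scratch with NumPy. This is a minimal, illustrative version of Lloyd’s algorithm (the function name and toy data are my own, not from any library):

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    """Minimal sketch of Lloyd's algorithm, for illustration only."""
    rng = np.random.default_rng(seed)
    # Step 1: choose k points randomly from the dataset as initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 2: assign each point to its closest centroid
        distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Step 3: recompute each centroid as the mean of its assigned points
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Stop when the centroids no longer change (convergence)
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Tiny demo: two well-separated groups of points
X_demo = np.array([[0, 0], [0, 1], [1, 0],
                   [10, 10], [10, 11], [11, 10]], dtype=float)
centroids, labels = lloyd_kmeans(X_demo, k=2)
```

Note that this sketch doesn’t handle empty clusters, which a production implementation such as scikit-learn’s must.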

**CHOOSING THE VALUE OF K:**

Choosing a proper value of the hyperparameter k is essential, as it can enhance the model’s performance or, if wrongly selected, degrade it.

If you know how many clusters you are looking for, you can choose k to be that number.

Otherwise, you need to experiment with different values of k.

I have written an article explaining various supervised and unsupervised methods to determine the right value of k. You can read the article here.

Now, we’ll discuss two popular methods, the elbow method and the silhouette coefficient, to determine the ideal value for k.

**ELBOW METHOD:**

In this method, we try to minimize the within-cluster sum of squares (WSS). The WSS measures the sum of squared distances from each point to its cluster center.
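Concretely, WSS is what scikit-learn exposes as `inertia_` after fitting. A small sketch (with made-up points) computing it by hand:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy data: two obvious groups (made-up values)
X = np.array([[0.0, 0.0], [0.0, 1.0],
              [10.0, 10.0], [10.0, 11.0]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# WSS by hand: squared distance from each point to its assigned cluster center
wss = sum(np.sum((x - kmeans.cluster_centers_[label]) ** 2)
          for x, label in zip(X, kmeans.labels_))

print(wss, kmeans.inertia_)  # the two values agree
```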

The following diagram illustrates the idea behind the elbow method.

We’ll plot WSS versus the number of clusters.

Then we select the value of k after which the WSS score doesn’t decrease significantly.

In the diagram, we choose the value of k where we identify the elbow-like inflection. Hence, the name elbow method.

**SILHOUETTE SCORE:**

It measures how similar an observation is to its assigned cluster and how dissimilar it is to the observations of the nearest other cluster.

The silhouette score ranges from -1 to 1; the closer the score is to 1, the better.
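For a single observation i, the score is s(i) = (b - a) / max(a, b), where a is the mean distance to the other points in its own cluster and b is the mean distance to the points of the nearest other cluster. A small sketch (with made-up points and labels) checking this against scikit-learn’s silhouette_samples:

```python
import numpy as np
from sklearn.metrics import silhouette_samples

# Made-up data: three points in cluster 0, two in cluster 1
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
              [5.0, 5.0], [5.0, 6.0]])
labels = np.array([0, 0, 0, 1, 1])

# s for the first point, computed by hand
a = np.mean([np.linalg.norm(X[0] - X[j]) for j in (1, 2)])  # mean intra-cluster distance
b = np.mean([np.linalg.norm(X[0] - X[j]) for j in (3, 4)])  # mean distance to the other cluster
s0 = (b - a) / max(a, b)

print(s0, silhouette_samples(X, labels)[0])  # the two values agree
```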

Let’s implement K-means using scikit-learn.

We’ll use scikit-learn’s make_blobs to generate a sample dataset.

```python
from sklearn.cluster import KMeans
# make_blobs moved out of sklearn.datasets.samples_generator in recent scikit-learn versions
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt

X, y = make_blobs(n_samples=200, centers=4, cluster_std=1.0, random_state=10)
```

Let’s visualize our dataset:

```python
plt.scatter(X[:, 0], X[:, 1], s=50)
plt.show()
```

Next, we can use the silhouette score and the elbow method to find the optimal value of k.

First, we use the silhouette score:

```python
from sklearn.metrics import silhouette_score

k = [2, 3, 4, 5, 6, 7, 8]
score = []

for n_cluster in k:
    kmeans = KMeans(n_clusters=n_cluster).fit(X)
    silhouette_avg = silhouette_score(X, kmeans.labels_)
    score.append(silhouette_avg)
    print('Silhouette Score for %i Clusters: %0.4f' % (n_cluster, silhouette_avg))
```

```
OUTPUT:
Silhouette Score for 2 Clusters: 0.6506
Silhouette Score for 3 Clusters: 0.7261
Silhouette Score for 4 Clusters: 0.7796
Silhouette Score for 5 Clusters: 0.6631
Silhouette Score for 6 Clusters: 0.5599
Silhouette Score for 7 Clusters: 0.4386
Silhouette Score for 8 Clusters: 0.3359
```

As you can see from the results, a k value of 4 has the highest score.

Let’s plot the score against k:

```python
plt.plot(k, score, 'o-')
plt.xlabel("Value for k")
plt.ylabel("Silhouette score")
plt.show()
```

Now, let’s use the elbow method to find the value of k:

```python
inertias = []

for i in k:
    km = KMeans(n_clusters=i, max_iter=1000, random_state=47)
    km.fit(X)
    inertias.append(km.inertia_)

plt.plot(k, inertias)
plt.xlabel("Value for k")
plt.ylabel("Inertia")
plt.show()
```

You can see an elbow forming at k=4. That is the optimal k value.

We used both the elbow method and the silhouette score, and both indicate that the optimal value of k is 4.

Let’s implement the K-means algorithm with k=4:

```python
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=4, random_state=42)
kmeans.fit(X)
```

Now let’s visualize the clusters produced by the K-means algorithm:

```python
# Color each point by its assigned cluster label
for i in range(len(X)):
    plt.plot(X[i][0], X[i][1], ['ro', 'go', 'yo', 'mo'][kmeans.labels_[i]], alpha=0.3)

# Mark the cluster centroids with black stars
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            marker='*', c='black', s=200)
plt.show()
```

The below diagram shows the resulting clusters.

As you can see, the algorithm recognized the four distinct clusters.

The black stars are the centroids of the clusters.
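A fitted KMeans model can also assign new, unseen points to the nearest centroid via predict. A short sketch, repeating the fit from above so the snippet is self-contained (the two new points are made-up values):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Same dataset and fit as above
X, y = make_blobs(n_samples=200, centers=4, cluster_std=1.0, random_state=10)
kmeans = KMeans(n_clusters=4, random_state=42).fit(X)

# Each new point gets the label of its nearest centroid
new_points = [[0.0, 0.0], [5.0, 5.0]]
labels = kmeans.predict(new_points)
print(labels)  # two cluster indices in the range 0..3
```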

**SUMMARY:**

Cluster analysis is used in nearly every sector where a wide variety of transactions occurs.

It can help identify natural groupings of customers, products, etc.

One such example is market segmentation where customers are categorized based on their similarities.

In this post, we discussed one such clustering technique, K-means.

We discussed Lloyd’s algorithm, which is used to implement k-means.

We also discussed why it is crucial to pick a proper value of k, as it can impact the model’s performance, and covered two popular methods for picking it: the silhouette score and the elbow method.

Finally, we implemented the K-means clustering algorithm using scikit-learn.