Author: Shubham Gupta
From dorm rooms to boardrooms, Shubham has built a career connecting young talent to opportunity. Their writing brings fresh, student-centric views on tech hiring and early careers.

Insights & Stories by Shubham Gupta

Shubham Gupta explores what today’s grads want from work—and how recruiters can meet them halfway. Expect a mix of optimism, strategy, and sharp tips.

Object detection for self-driving cars

Object Detection on Sample Test Image

We will use the trained model to predict the classes and the corresponding bounding boxes for a sample of images. The function 'draw' runs a TensorFlow session and calculates the confidence scores, bounding box coordinates, and output class probabilities for a given sample image. Finally, it computes the xmin, ymin, xmax, and ymax coordinates used to draw the bounding boxes on the image.

Introduction to Object Detection

Humans can easily detect and identify objects present in an image. The human visual system is fast and accurate and can perform complex tasks, such as identifying multiple objects and detecting obstacles, with little conscious thought. With the availability of large amounts of data, faster GPUs, and better algorithms, we can now train computers to detect and classify multiple objects within an image with high accuracy. In this blog, we will explore object detection, object localization, the loss function used for detection and localization, and finally an object detection algorithm known as “You Only Look Once” (YOLO).

Object Localization

An image classification or image recognition model simply detects the probability of an object being present in an image. In contrast, object localization refers to identifying the location of an object in the image. An object localization algorithm outputs the coordinates of the object with respect to the image. In computer vision, the most popular way to localize an object in an image is to represent its location with a bounding box. Fig. 1 shows an example of a bounding box.

Fig 1. Bounding box representation used for object localization

A bounding box can be described using the following parameters:

  • bx, by : coordinates of the center of the bounding box
  • bw : width of the bounding box w.r.t the image width
  • bh : height of the bounding box w.r.t the image height

Defining the target variable

The target variable for a multi-class image classification problem is defined as:
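The equation image from the original post is missing here. As a hedged reconstruction: for classification alone, y is simply the vector of class probabilities [c1, c2, …, cn]; once localization is added, the target is commonly extended to

y = [pc, bx, by, bh, bw, c1, c2, …, cn]ᵀ

where pc indicates whether an object is present in the image, (bx, by, bh, bw) describe its bounding box, and c1, …, cn are the class probabilities.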

Loss Function
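The loss equation image from the original post is also missing. A hedged reconstruction, using a simple squared-error formulation that only penalizes the box and class terms when an object is actually present, is:

L(ŷ, y) = (ŷ1 − y1)² + (ŷ2 − y2)² + … + (ŷn+5 − yn+5)²,  if pc = 1
L(ŷ, y) = (ŷ1 − y1)²,  if pc = 0

where y1 = pc, so when no object is present only the confidence term contributes to the loss.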

Since we have defined both the target variable and the loss function, we can now use neural networks to both classify and localize objects.


Object Detection

An approach to building an object detection system is to first build a classifier that can classify closely cropped images of an object. Fig 2. shows an example of such a model, where the model is trained on a dataset of closely cropped images of cars and predicts the probability of an image being a car.

Fig 2. Image classification of cars

Now, we can use this model to detect cars using a sliding window mechanism. In a sliding window mechanism, we slide a window (similar to the one used in convolutional networks) across the image and crop a part of the image at each position. The size of the crop is the same as the size of the sliding window. Each cropped image is then passed to a ConvNet model (similar to the one shown in Fig 2.), which in turn predicts the probability that the cropped image contains a car.

Fig 3. Sliding windows mechanism

After running the sliding window through the whole image, we resize the sliding window and run it over the image again, repeating this process multiple times. Since we crop a large number of windows and pass each of them through the ConvNet, this approach is both computationally expensive and time-consuming, making the whole process really slow. The convolutional implementation of the sliding window helps resolve this problem.
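To make the cost of the naive approach concrete, here is a rough Python sketch; the 14 × 14 crop size, the stride, and the classifier function are assumptions for illustration, not code from the original post.

 import numpy as np

 def sliding_window_detect(image, classifier, window=(14, 14), stride=2):
     """Crop every window position and classify each crop separately."""
     detections = []
     h, w = image.shape[:2]
     for top in range(0, h - window[0] + 1, stride):
         for left in range(0, w - window[1] + 1, stride):
             crop = image[top:top + window[0], left:left + window[1]]
             # one full forward pass per crop, which is what makes this slow
             prob_car = classifier(crop)
             if prob_car > 0.5:
                 detections.append((top, left, prob_car))
     return detections

Here classifier is assumed to be any function that returns the probability that a crop contains a car.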

Convolutional implementation of sliding windows

Before we discuss the implementation of the sliding window using ConvNets, let’s analyze how we can convert the fully connected layers of a network into convolutional layers. Fig. 4 shows a simple convolutional network with two fully connected layers, each of shape (400, ).

Fig 4. A simple convolutional network with two fully connected layers

A fully connected layer can be replaced by a convolutional layer whose filters span the entire input volume, so that the output has a width and height equal to one and a depth equal to the number of units in the fully connected layer. An example of this is shown in Fig 5.

Fig 5. Converting a fully connected layer into a convolutional layer

We can apply this idea to the model by replacing each fully connected layer with such a convolutional layer, where the number of filters equals the number of units in the fully connected layer. This representation is shown in Fig 6. The output softmax layer also becomes a convolutional layer, with shape (1, 1, 4), where 4 is the number of classes to predict.

Fig 6. Convolutional representation of fully connected layers.
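As a quick sanity check on this equivalence, the following sketch (the 5 × 5 × 16 input volume is an assumed example, not taken from the figure) shows that a Dense layer with 400 units and a Conv2D layer with 400 filters spanning the full volume have exactly the same number of parameters:

 from tensorflow.keras import layers, models

 # fully connected version: flatten the 5x5x16 volume, then Dense(400)
 fc = models.Sequential([
     layers.Flatten(input_shape=(5, 5, 16)),
     layers.Dense(400),
 ])

 # convolutional version: 400 filters covering the whole 5x5x16 volume -> output 1x1x400
 conv = models.Sequential([
     layers.Conv2D(400, (5, 5), input_shape=(5, 5, 16)),
 ])

 print(fc.count_params(), conv.count_params())   # both print 160400 (= 5*5*16*400 + 400)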

Now, let’s extend the above approach to implement a convolutional version of sliding window. First, let’s consider the ConvNet that we have trained to be in the following representation (no fully connected layers).

(Figure: the trained ConvNet in its fully convolutional representation)

Let’s assume the size of the input image is 16 × 16 × 3. If we were to use the sliding window approach, we would pass this image through the above ConvNet four times, each time cropping a 14 × 14 × 3 region and feeding it through the network. Instead, we feed the full image (of shape 16 × 16 × 3) directly into the trained ConvNet (see Fig. 7). This results in an output matrix of shape 2 × 2 × 4. Each cell in the output matrix represents the result of a possible crop and the classified value of that cropped image. For example, the left cell of the output (the green one) in Fig. 7 represents the result of the first sliding window position. The other cells represent the results of the remaining sliding window operations.

Fig 7. Convolutional implementation of the sliding window

Note that the stride of the sliding window is determined by the pooling used in the network. In the example above, the Max Pool layer uses a 2 × 2 window, so the sliding window effectively moves with a stride of two, resulting in four possible outputs. The main advantage of this technique is that all the sliding window positions are computed in a single forward pass, which makes it very fast. A weakness, however, is that the positions of the resulting bounding boxes are not very accurate.
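The following sketch, with layer sizes chosen to match the 14 × 14 / 16 × 16 example above rather than taken from the figure, shows with tensorflow.keras how a fully convolutional network evaluated on a larger image yields a grid of predictions in a single forward pass:

 import numpy as np
 from tensorflow.keras import layers, Model

 inp = layers.Input(shape=(None, None, 3))               # accept any image size
 x = layers.Conv2D(16, (5, 5), activation='relu')(inp)
 x = layers.MaxPooling2D((2, 2))(x)
 x = layers.Conv2D(400, (5, 5), activation='relu')(x)    # fully connected layer expressed as a conv
 x = layers.Conv2D(400, (1, 1), activation='relu')(x)    # second fully connected layer as a 1x1 conv
 out = layers.Conv2D(4, (1, 1), activation='softmax')(x) # 4-class softmax per spatial position
 model = Model(inp, out)

 print(model.predict(np.zeros((1, 14, 14, 3))).shape)    # (1, 1, 1, 4): a single "crop"
 print(model.predict(np.zeros((1, 16, 16, 3))).shape)    # (1, 2, 2, 4): all four crops at once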

The YOLO (You Only Look Once) Algorithm

A better algorithm that tackles the issue of predicting accurate bounding boxes while using the convolutional sliding window technique is the YOLO algorithm. YOLO stands for “You Only Look Once”; it was developed in 2015 by Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. It is popular because it achieves high accuracy while running in real time. The algorithm is called so because it requires only one forward propagation pass through the network to make its predictions.

The algorithm divides the image into a grid and runs the image classification and localization algorithm (discussed under object localization) on each of the grid cells. For example, suppose we have an input image of size 256 × 256. We place a 3 × 3 grid on the image (see Fig. 8).

Fig. 8 Grid (3 x 3) representation of the image

Next, we apply the image classification and localization algorithm on each grid cell. For each grid cell, the target variable is defined in the same way as in the object localization section: a vector containing pc, the bounding box coordinates (bx, by, bh, bw), and the class probabilities.

All of this is done in a single forward pass using the convolutional sliding window implementation. Since the shape of the target variable for each grid cell is 1 × 9 and there are 9 (3 × 3) grid cells, the final output of the model will be of shape 3 × 3 × 9.


The advantages of the YOLO algorithm are that it is very fast and that it predicts much more accurate bounding boxes. In practice, to get more accurate predictions, we use a much finer grid, say 19 × 19, in which case the target output is of shape 19 × 19 × 9.
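As an illustration of what this 3 × 3 × 9 output looks like in code, here is a hedged sketch; the vector layout [pc, bx, by, bh, bw, c1, …, c4] and the 0.6 confidence threshold are assumptions:

 import numpy as np

 pred = np.random.rand(3, 3, 9)            # stand-in for a model prediction on a 3x3 grid
 for row in range(3):
     for col in range(3):
         cell = pred[row, col]
         pc = cell[0]                       # probability that the cell contains an object
         bx, by, bh, bw = cell[1:5]         # bounding box relative to the grid cell
         class_probs = cell[5:]             # probabilities for the 4 classes
         if pc > 0.6:                       # keep only confident detections
             print(f'cell ({row}, {col}): class {np.argmax(class_probs)} with confidence {pc:.2f}')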

Conclusion

With this, we come to the end of the introduction to object detection. We now have a better understanding of how we can localize objects while classifying them in an image. We also learned to combine the concepts of classification and localization with the convolutional implementation of the sliding window to build an object detection system. In the next blog, we will go deeper into the YOLO algorithm, the loss function it uses, and some ideas that make YOLO work better. We will also learn to run the YOLO algorithm in real time.

Have anything to say? Feel free to comment below for any questions, suggestions, and discussions related to this article. Till then, keep hacking with HackerEarth.

Data Visualization for Beginners - Part 3

Bonjour! Welcome to another part of the series on data visualization techniques. In the previous two articles, we discussed different data visualization techniques that can be applied to visualize and gather insights from categorical and continuous variables. You can check out the first two articles here:

In this article, we’ll go through the implementation and use of a bunch of data visualization techniques such as heat maps, surface plots, correlation plots, etc. We will also look at different techniques that can be used to visualize unstructured data such as images, text, etc.

 ### Importing the required libraries   
 import pandas as pd   
 import numpy as np  
 import seaborn as sns   
 import matplotlib.pyplot as plt   
 import plotly.plotly as py  
 import plotly.graph_objs as go  
 %matplotlib inline  

Heatmaps

A heat map (or heatmap) is a two-dimensional graphical representation of data that uses colour to represent data points. It is useful for understanding underlying relationships between data values that would be much harder to see if presented numerically in a table or matrix.

### We can create a heatmap by simply using the seaborn library.   
 sample_data = np.random.rand(8, 12)  
 ax = sns.heatmap(sample_data)  
Fig 1. Heatmap using the seaborn library

Let’s understand this using an example. We’ll be using the metadata from the Deep Learning 3 challenge. Link to the dataset. Deep Learning 3 challenged participants to predict the attributes of animals by looking at their images.

 ### Training metadata contains the name of the image and the corresponding attributes associated with the animal in the image.  
 train = pd.read_csv('meta-data/train.csv')  
 train.head()  

We will be analyzing how often an attribute occurs in relationship with the other attributes. To analyze this relationship, we will compute the co-occurrence matrix.

 ### Extracting the attributes  
 cols = list(train.columns)  
 cols.remove('Image_name')  
 attributes = np.array(train[cols])  
 print('There are {} attributes associated with {} images.'.format(attributes.shape[1],attributes.shape[0]))  
 Out: There are 85 attributes associated with 12,600 images.  
 # Compute the co-occurrence matrix  
 cooccurrence_matrix = np.dot(attributes.transpose(), attributes)  
 print('\n Co-occurrence matrix: \n', cooccurrence_matrix)  
 Out: Co-occurrence matrix:   
  [[5091 728 797 ... 3797 728 2024]  
  [ 728 1614  0 ... 669 1614 1003]  
  [ 797  0 1188 ... 1188  0 359]  
  ...  
  [3797 669 1188 ... 8305 743 3629]  
  [ 728 1614  0 ... 743 1933 1322]  
  [2024 1003 359 ... 3629 1322 6227]]  
 # Normalize the co-occurrence matrix by dividing each row by the diagonal (self-occurrence) counts  
 # Compute the co-occurrence matrix in percentage  
 #Reference:https://stackoverflow.com/questions/20574257/constructing-a-co-occurrence-matrix-in-python-pandas/20574460  
 cooccurrence_matrix_diagonal = np.diagonal(cooccurrence_matrix)  
 with np.errstate(divide = 'ignore', invalid='ignore'):  
   cooccurrence_matrix_percentage = np.nan_to_num(np.true_divide(cooccurrence_matrix, cooccurrence_matrix_diagonal))  
 print('\n Co-occurrence matrix percentage: \n', cooccurrence_matrix_percentage)  

We can see that the values in the co-occurrence matrix represent the occurrence of each attribute with the other attributes. Although the matrix contains all the information, it is visually hard to interpret and infer from the matrix. To counter this problem, we will use heat maps, which can help relate the co-occurrences graphically.

 fig = plt.figure(figsize=(10, 10))  
 sns.set(style='white')  
 # Draw the heatmap with the mask and correct aspect ratio   
 ax = sns.heatmap(cooccurrence_matrix_percentage, cmap='viridis', center=0, square=True, linewidths=0.15, cbar_kws={"shrink": 0.5, "label": "Co-occurrence frequency"}, )  
 ax.set_title('Heatmap of the attributes')  
 ax.set_xlabel('Attributes')  
 ax.set_ylabel('Attributes')  
 plt.show()  
Fig 2. Heatmap of the co-occurrence matrix indicating the frequency of occurrence of one attribute with other

Since the frequency of the co-occurrence is represented by a colour palette, we can now easily interpret which attributes appear together the most. Thus, we can infer that these attributes are common to most of the animals.


Choropleth

Choropleths are a type of map that provides an easy way to show how some quantity varies across a geographical area or show the level of variability within a region. A heat map is similar but doesn’t include geographical boundaries. Choropleth maps are also appropriate for indicating differences in the distribution of the data over an area, like ownership or use of land or type of forest cover, density information, etc. We will be using the geopandas library to implement the choropleth graph.

We will be using a choropleth graph to visualize GDP across the globe. Link to the dataset.

 # Importing the required libraries  
 import geopandas as gpd   
 from shapely.geometry import Point  
 from matplotlib import cm  
 # GDP mapped to the corresponding country and their acronyms  
 df =pd.read_csv('GDP.csv')  
 df.head()  
COUNTRY GDP (BILLIONS) CODE
0 Afghanistan 21.71 AFG
1 Albania 13.40 ALB
2 Algeria 227.80 DZA
3 American Samoa 0.75 ASM
4 Andorra 4.80 AND
### Importing the geometry locations of each country on the world map  
 geo = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))[['iso_a3', 'geometry']]  
 geo.columns = ['CODE', 'Geometry']  
 geo.head()  
# Mapping the country codes to the geometry locations  
 df = pd.merge(df, geo, left_on='CODE', right_on='CODE', how='inner')  
 #converting the dataframe to geo-dataframe  
 geometry = df['Geometry']  
 df.drop(['Geometry'], axis=1, inplace=True)  
 crs = {'init':'epsg:4326'}  
 geo_gdp = gpd.GeoDataFrame(df, crs=crs, geometry=geometry)  
 ## Plotting the choropleth  
 cpleth = geo_gdp.plot(column='GDP (BILLIONS)', cmap=cm.Spectral_r, legend=True, figsize=(8,8))  
 cpleth.set_title('Choropleth Graph - GDP of different countries')  
Fig 3. Choropleth graph indicating the GDP according to geographical locations

Surface plot

Surface plots are used for the three-dimensional representation of the data. Rather than showing individual data points, surface plots show a functional relationship between a dependent variable (Z) and two independent variables (X and Y).

It is useful in analyzing relationships between the dependent and the independent variables and thus helps in establishing desirable responses and operating conditions.

 from mpl_toolkits.mplot3d import Axes3D  
 from matplotlib.ticker import LinearLocator, FormatStrFormatter  
 # Creating a figure  
 # projection = '3d' enables the third dimension during plot  
 fig = plt.figure(figsize=(10,8))  
 ax = fig.gca(projection='3d')  
 # Initialize data   
 X = np.arange(-5,5,0.25)  
 Y = np.arange(-5,5,0.25)  
 # Creating a meshgrid  
 X, Y = np.meshgrid(X, Y)  
 R = np.sqrt(np.abs(X**2 - Y**2))  
 Z = np.exp(R)  
 # plot the surface   
 surf = ax.plot_surface(X, Y, Z, cmap=cm.GnBu, antialiased=False)  
 # Customize the z axis.  
 ax.zaxis.set_major_locator(LinearLocator(10))  
 ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))  
 ax.set_title('Surface Plot')  
 # Add a color bar which maps values to colors.  
 fig.colorbar(surf, shrink=0.5, aspect=5)  
 plt.show()  

One of the main applications of surface plots in machine learning or data science is the analysis of the loss function. From a surface plot, we can analyze how the hyperparameters affect the loss function, which helps in tuning the model and avoiding overfitting.

Fig 4. Surface plot visualizing the dependent variable w.r.t the independent variables in 3-dimensions

Visualizing high-dimensional datasets

Dimensionality refers to the number of attributes present in the dataset. For example, consumer-retail datasets can have a vast number of variables (e.g., sales, promos, products, open, etc.). As a result, visually exploring the dataset to find potential correlations between variables becomes extremely challenging.

Therefore, we use a technique called dimensionality reduction to visualize higher-dimensional datasets. Here, we will focus on two such techniques:

  • Principal Component Analysis (PCA)
  • T-distributed Stochastic Neighbor Embedding (t-SNE)

Principal Component Analysis (PCA)

Before we jump into understanding PCA, let’s review some terms:

  • Variance: Variance is simply a measure of the spread or extent of the data. Mathematically, it is the average squared deviation from the mean.
  • Covariance: Covariance is the measure of the extent to which corresponding elements from two sets of ordered data move in the same direction. It is the measure of how two random variables vary together. It is similar to variance, but where variance tells you the extent of one variable, covariance tells you the extent to which the two variables vary together. Mathematically, it is defined as:
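(The formula image is missing from the original post. For two variables X and Y with n paired observations, the sample covariance is commonly written as cov(X, Y) = (1 / (n − 1)) Σᵢ (xᵢ − x̄)(yᵢ − ȳ), where x̄ and ȳ are the means of X and Y.)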

A positive covariance means X and Y are positively related, i.e., if X increases, Y increases, while a negative covariance means the opposite relation. A covariance of zero means there is no linear relationship between X and Y.

Fig 5. Different types of covariance

PCA is the orthogonal projection of data onto a lower-dimensional linear space that maximizes the variance of the projected data (green line) and minimizes the mean squared distance between the data points and their projections (blue line). The variance describes the direction of maximum information, while the mean squared distance describes the information lost when projecting the data onto the lower dimension.

Thus, given a set of data points in a d-dimensional space, PCA projects these points onto a lower dimensional space while preserving as much information as possible.

Fig 6. Illustration of principal component analysis

In the figure, the component along the direction of maximum variance is defined as the first principal component. Similarly, the component along the direction of the second-highest variance is defined as the second principal component, and so on. These principal components are referred to as the new dimensions carrying the maximum information.

 # We will use the breast cancer dataset as an example  
 # The dataset is a binary classification dataset  
 # Importing the dataset  
 from sklearn.datasets import load_breast_cancer  
 data = load_breast_cancer()  
 X = pd.DataFrame(data=data.data, columns=data.feature_names) # Features   
 y = data.target # Target variable   
 # Importing PCA function  
 from sklearn.decomposition import PCA  
 pca = PCA(n_components=2) # n_components = number of principal components to generate  
 # Generating pca components from the data  
 pca_result = pca.fit_transform(X)  
 print("Explained variance ratio : \n",pca.explained_variance_ratio_)  
 Out: Explained variance ratio :   
  [0.98204467 0.01617649]  

We can see that approximately 98% of the variance in the data lies along the first principal component, while the second component explains only about 1.6%.

 # Creating a figure   
 fig = plt.figure(1, figsize=(10, 10))  
 # Enabling 3-dimensional projection   
 ax = fig.gca(projection='3d')  
 for i, name in enumerate(data.target_names):  
   ax.text3D(np.std(pca_result[:, 0][y==i])-i*500 ,np.std(pca_result[:, 1][y==i]),0,s=name, horizontalalignment='center', bbox=dict(alpha=.5, edgecolor='w', facecolor='w'))  
 # Plotting the PCA components    
 ax.scatter(pca_result[:,0], pca_result[:, 1], c=y, cmap = plt.cm.Spectral,s=20, label=data.target_names)  
 plt.show()  
Fig 7. Visualizing the distribution of cancer across the data

Thus, with the help of PCA, we can get a visual perception of how the labels are distributed across given data (see Figure).

T-distributed Stochastic Neighbour Embedding (t-SNE)

T-distributed Stochastic Neighbour Embedding (t-SNE) is a non-linear dimensionality reduction technique that is well suited for the visualization of high-dimensional data. It was developed by Laurens van der Maaten and Geoffrey Hinton. In contrast to PCA, which is a mathematical technique, t-SNE adopts a probabilistic approach.

PCA captures the global structure of high-dimensional data but fails to describe the local structure within the data. t-SNE, on the other hand, captures the local structure of the high-dimensional data very well while also revealing global structure such as the presence of clusters at several scales. t-SNE converts the similarities between data points into joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. In doing so, it preserves the original structure of the data.
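The quantity t-SNE minimizes, with pᵢⱼ the pairwise similarities in the original space and qᵢⱼ those in the low-dimensional embedding, can be written as KL(P‖Q) = Σᵢ≠ⱼ pᵢⱼ log(pᵢⱼ / qᵢⱼ).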

 # We will be using the scikit learn library to implement t-SNE  
 # Importing the t-SNE library   
 from sklearn.manifold import TSNE  
 # We will be using the iris dataset for this example  
 from sklearn.datasets import load_iris  
 # Loading the iris dataset   
 data = load_iris()  
 # Extracting the features   
 X = data.data  
 # Extracting the labels   
 y = data.target  
 # There are four features in the iris dataset with three different labels.  
 print('Features in iris data:\n', data.feature_names)  
 print('Labels in iris data:\n', data.target_names)  
 Out: Features in iris data:  
  ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']  
 Labels in iris data:  
  ['setosa' 'versicolor' 'virginica']  
 # Loading the TSNE model   
 # n_components = number of resultant components   
 # n_iter = Maximum number of iterations for the optimization.  
 tsne_model = TSNE(n_components=3, n_iter=2500, random_state=47)  
 # Generating new components   
 new_values = tsne_model.fit_transform(X)  
 labels = data.target_names  
 # Plotting the new dimensions/ components  
 fig = plt.figure(figsize=(5, 5))  
 ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)  
 for label, name in enumerate(labels):  
   ax.text3D(new_values[y==label, 0].mean(),  
        new_values[y==label, 1].mean() + 1.5,  
        new_values[y==label, 2].mean(), name,  
        horizontalalignment='center',  
        bbox=dict(alpha=.5, edgecolor='w', facecolor='w'))  
 ax.scatter(new_values[:,0], new_values[:,1], new_values[:,2], c=y)  
 ax.set_title('High-Dimension data visualization using t-SNE', loc='right')  
 plt.show()  
Fig 8. Visualizing the feature space of the iris dataset using t-SNE

Thus, by reducing the dimensions using t-SNE, we can visualize the distribution of the labels over the feature space. We can see in the figure that the labels are clustered into their own groups. So, if we were to use a clustering algorithm to generate clusters from the new features/components, we could accurately assign new points to a label.

Conclusion

Let’s quickly summarize the topics we covered. We started with the generation of heatmaps using random numbers and extended their application to a real-world example. Next, we implemented choropleth graphs to visualize data points with respect to geographical locations. We moved on to surface plots to get an idea of how we can visualize data on a three-dimensional surface. Finally, we used two dimensionality reduction techniques, PCA and t-SNE, to visualize high-dimensional datasets.

I encourage you to implement the examples described in this article to get hands-on experience. Hope you enjoyed the article. Do let me know if you have any feedback, suggestions, or thoughts on this article in the comments below!


Composing Jazz Music with Deep Learning

Deep Learning is on the rise, extending its application in every field, ranging from computer vision to natural language processing, healthcare, speech recognition, generating art, addition of sound to silent movies, machine translation, advertising, self-driving cars, etc. In this blog, we will extend the power of deep learning to the domain of music production. We will talk about how we can use deep learning to generate new musical beats.

Current technological advancements have transformed the way we produce, listen to, and work with music. With the advent of deep learning, it has now become possible to generate music without needing instruments that artists may not have access to or the skills to play. This offers artists more creative freedom and the ability to explore different domains of music.

Recurrent Neural Networks

Since music is a sequence of notes and chords, it doesn’t have a fixed dimensionality. Traditional deep neural network techniques cannot be applied to generate music as they assume the inputs and targets/outputs to have fixed dimensionality and outputs to be independent of each other. It is therefore clear that a domain-independent method that learns to map sequences to sequences would be useful.

Recurrent neural networks (RNNs) are a class of artificial neural networks that make use of sequential information present in the data.

Fig. 1 A basic RNN unit.

A recurrent neural network has looped, or recurrent, connections which allow the network to hold information across inputs. These connections can be thought of as memory cells. In other words, RNNs can make use of information learned in the previous time step. As seen in Fig. 1, the output of the previous hidden/activation layer is fed into the next hidden layer. Such an architecture is efficient in learning sequence-based data.
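In equation form (a standard formulation, using the same ⟨t⟩ notation that appears later in this post), the hidden state and output at time step t are

a⟨t⟩ = g(Waa a⟨t−1⟩ + Wax x⟨t⟩ + ba),  ŷ⟨t⟩ = softmax(Wya a⟨t⟩ + by)

where g is a non-linearity such as tanh, and the W and b terms are weights and biases shared across all time steps.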

In this blog, we will be using the Long Short-Term Memory (LSTM) architecture. LSTM is a type of recurrent neural network (proposed by Hochreiter and Schmidhuber, 1997) that can remember a piece of information and keep it saved for many timesteps.

Dataset

Our dataset includes piano tunes stored in the MIDI format. MIDI (Musical Instrument Digital Interface) is a protocol that allows electronic instruments and other digital musical tools to communicate with each other. Since a MIDI file only stores performance information, i.e., a series of messages like ‘note on’ and ‘note off’, it is compact, easy to modify, and can be adapted to any instrument.

Before we move forward, let us understand some music related terminologies:

  • Note: A note is either a single sound or its representation in notation. Each note consists of a pitch, an octave, and an offset.
  • Pitch: Pitch refers to the frequency of the sound.
  • Octave: An octave is the interval between one musical pitch and another with half or double its frequency.
  • Offset: Refers to the location of the note.
  • Chord: Playing multiple notes at the same time constitutes a chord.

Data Preprocessing

We will use the music21 toolkit (a toolkit for computer-aided musicology developed at MIT) to extract data from these MIDI files.

  1. Notes Extraction

     import glob  
     import pickle  
     from music21 import converter, instrument, note, chord  

     def get_notes():  
         # list of MIDI files to parse (the path is an assumption; adjust it to your dataset)  
         songs = glob.glob('midi_songs/*.mid')  
         notes = []  
         for file in songs:  
             # converting .mid file to stream object  
             midi = converter.parse(file)  
             parts = None  
             try:  
                 # Given a single stream, partition into a part for each unique instrument  
                 parts = instrument.partitionByInstrument(midi)  
             except:  
                 pass  
             if parts:  # if parts has instrument parts  
                 notes_to_parse = parts.parts[0].recurse()  
             else:  
                 notes_to_parse = midi.flat.notes  
             for element in notes_to_parse:  
                 if isinstance(element, note.Note):  
                     # if element is a note, extract pitch  
                     notes.append(str(element.pitch))  
                 elif isinstance(element, chord.Chord):  
                     # if element is a chord, append the normal form of the  
                     # chord (a list of integers) to the list of notes.  
                     notes.append('.'.join(str(n) for n in element.normalOrder))  
         with open('data/notes', 'wb') as filepath:  
             pickle.dump(notes, filepath)  
         return notes  
      

    The function get_notes returns a list of the notes and chords present in the .mid files. We use the converter.parse function to convert each MIDI file into a stream object, which in turn is used to extract the notes and chords present in the file. The list returned by get_notes() looks as follows:

     Out:  
         ['F2', '4.5.7', '9.0', 'C3', '5.7.9', '7.0', 'E4', '4.5.8', '4.8', '4.8', '4', 'G#3',  
         'D4', 'G#3', 'C4', '4', 'B3', 'A2', 'E3', 'A3', '0.4', 'D4', '7.11', 'E3', '0.4.7', 'B4', 'C3', 'G3', 'C4', '4.7', '11.2', 'C3', 'C4', '11.2.4', 'G4', 'F2', 'C3', '0.5', '9.0', '4.7', 'F2', '4.5.7.9.0', '4.8', 'F4', '4', '4.8', '2.4', 'G#3',  
        '8.0', 'E2', 'E3', 'B3', 'A2', '4.9', '0.4', '7.11', 'A2', '9.0.4', ...........]  

    We can see that the list consists of pitches and chords (each chord represented as a list of integers separated by dots). We treat each new chord as a new entry in the vocabulary. Just as letters are combined to form words in a sentence, the unique pitches and chords in the notes list form the vocabulary used to generate music.

  2. Generating Input and Output Sequences

    A neural network accepts only real values as input and since the pitches in the notes list are in string format, we need to map each pitch in the notes list to an integer. We can do so as follows:

     # Extract the unique pitches in the list of notes.   
       pitchnames = sorted(set(item for item in notes))  
       # create a dictionary to map pitches to integers  
       note_to_int = dict((note, number) for number, note in enumerate(pitchnames))  
      

    Next, we will create an array of input and output sequences to train our model. Each input sequence will consist of 100 notes, while the output array stores the 101st note for the corresponding input sequence. So, the objective of the model will be to predict the 101st note of the input sequence of notes.

      sequence_length = 100  
        network_input = []  
        network_output = []  
        # create input sequences and the corresponding outputs  
        for i in range(0, len(notes) - sequence_length, 1):  
          sequence_in = notes[i: i + sequence_length]  
          sequence_out = notes[i + sequence_length]  
          network_input.append([note_to_int[char] for char in sequence_in])  
          network_output.append(note_to_int[sequence_out])  
      

    Next, we reshape and normalize the input vector sequence before feeding it to the model. Finally, we one-hot encode our output vector.

      from keras.utils import np_utils  
        n_patterns = len(network_input)  
        # n_vocab is the number of unique pitches, e.g. n_vocab = len(set(notes))  
        # reshape the input into a format compatible with LSTM layers   
        network_input = np.reshape(network_input, (n_patterns, sequence_length, 1))  
        # normalize input  
        network_input = network_input / float(n_vocab)  
        # One hot encode the output vector  
        network_output = np_utils.to_categorical(network_output)  
      

Model Architecture


We will use Keras to build our model architecture. We use a character-level architecture to train the model: each input note in the music file is used to predict the next note, i.e., each LSTM cell takes the previous cell's activation (a⟨t−1⟩) and the previous time step's actual output (y⟨t−1⟩) as input at the current time step t. This is depicted in the following figure (Fig 2.).

Fig 2. One to Many LSTM architecture

Our model architecture is defined as:

 from keras.models import Sequential  
   from keras.layers import LSTM, Dropout, Flatten, Dense, Activation  

   model = Sequential()  
   model.add(LSTM(128, input_shape=network_in.shape[1:], return_sequences=True))  
   model.add(Dropout(0.2))  
   model.add(LSTM(128, return_sequences=True))  
   model.add(Flatten())  
   model.add(Dense(256))  
   model.add(Dropout(0.3))  
   model.add(Dense(n_vocab))  
   model.add(Activation('softmax'))  
   model.compile(loss='categorical_crossentropy', optimizer='adam')  
  

Our music model consists of two LSTM layers, each with 128 hidden units. We use categorical cross-entropy as the loss function and Adam as the optimizer. Fig. 3 shows the model summary.

Fig 3. Model summary

Model Training

To train the model, we call the model.fit function with the input and output sequences as the input to the function. We also create a model checkpoint which saves the best model weights.

 from keras.callbacks import ModelCheckpoint  

   def train(model, network_input, network_output, epochs):   
     """  
     Train the neural network  
     """  
     filepath = 'weights.best.music3.hdf5'  
     checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=0, save_best_only=True)  
     model.fit(network_input, network_output, epochs=epochs, batch_size=32, callbacks=[checkpoint])  

   def train_network():  
     epochs = 200  
     notes = get_notes()  
     print('Notes processed')  
     n_vocab = len(set(notes))  
     print('Vocab generated')  
     network_in, network_out = prepare_sequences(notes, n_vocab)  
     print('Input and Output processed')  
     model = create_network(network_in, n_vocab)  
     print('Model created')  
     print('Training in progress')  
     train(model, network_in, network_out, epochs)  
     print('Training completed')  
     return model  
  

The train_network method gets the notes, creates the input and output sequences, creates a model, and trains the model for 200 epochs.

Music Sample Generation

Now that we have trained our model, we can use it to generate some new notes. To generate new notes, we need a starting note. So, we randomly pick an integer and pick a random sequence from the input sequence as a starting point.

 def generate_notes(model, network_input, pitchnames, n_vocab):  
     """ Generate notes from the neural network based on a sequence of notes """  
     # Pick a random integer  
     start = np.random.randint(0, len(network_input)-1)  
     int_to_note = dict((number, note) for number, note in enumerate(pitchnames))  
     # pick a random sequence from the input as a starting point for the prediction  
     pattern = network_input[start]  
     prediction_output = []  
     print('Generating notes........')  
     # generate 500 notes  
     for note_index in range(500):  
       prediction_input = np.reshape(pattern, (1, len(pattern), 1))  
       prediction_input = prediction_input / float(n_vocab)  
       prediction = model.predict(prediction_input, verbose=0)  
       # Predicted output is the argmax(P(h|D))  
       index = np.argmax(prediction)  
        # Mapping the predicted integer back to the corresponding note  
       result = int_to_note[index]  
       # Storing the predicted output  
       prediction_output.append(result)  
       pattern.append(index)  
       # Next input to the model  
       pattern = pattern[1:len(pattern)]  
     print('Notes Generated...')  
     return prediction_output  
  

Next, we use the trained model to predict the next 500 notes. At each time step, the output of the previous step (ŷ⟨t−1⟩) is provided as input (x⟨t⟩) to the LSTM layer at the current time step t. This is depicted in the following figure (see Fig. 4).

Fig 4. Sampling from a trained network.

Since the predicted output is an array of probabilities, we choose the output at the index with the maximum probability. Finally, we map this index back to the actual note and add it to the list of predicted outputs. Since the predicted output is a list of strings of notes and chords, we cannot play it directly. Hence, we encode the predicted output into the MIDI format using the create_midi method.
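The create_midi helper itself is not listed in the post; a minimal sketch of what it could look like with music21 is shown below (the 0.5 offset step, the Piano instrument, and the output filename are assumptions):

 from music21 import instrument, note, chord, stream

 def create_midi(prediction_output, filename='test_output.mid'):
     offset = 0
     output_notes = []
     for pattern in prediction_output:
         if ('.' in pattern) or pattern.isdigit():
             # the pattern is a chord: split it into notes and build a Chord object
             chord_notes = [note.Note(int(n)) for n in pattern.split('.')]
             for n in chord_notes:
                 n.storedInstrument = instrument.Piano()
             new_chord = chord.Chord(chord_notes)
             new_chord.offset = offset
             output_notes.append(new_chord)
         else:
             # the pattern is a single note
             new_note = note.Note(pattern)
             new_note.offset = offset
             new_note.storedInstrument = instrument.Piano()
             output_notes.append(new_note)
         offset += 0.5   # assumed constant spacing between consecutive notes
     midi_stream = stream.Stream(output_notes)
     midi_stream.write('midi', fp=filename)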

 ### Converts the predicted output to midi format  
   create_midi(prediction_output)  
  

To create some new jazz music, you can simply call the generate() method, which calls all the related methods and saves the predicted output as a MIDI file.

 #### Generate a new jazz music   
   generate()  
   Out:   
     Initiating music generation process.......  
     Loading Model weights.....  
     Model Loaded  
     Generating notes........  
     Notes Generated...  
     Saving Output file as midi....  
  

To play the generated MIDI in a Jupyter Notebook, you can import the play_midi method from the play.py file, use an external MIDI player, or convert the MIDI file to MP3. Let’s listen to our generated jazz piano music.

 ### Play the Jazz music  
   play.play_midi('test_output3.mid')  
(Audio: “Generated Track 1”, a sample generated by the trained recurrent neural network)

Conclusion

Congratulations! You can now generate your own jazz music. You can find the full code in this GitHub repository. I encourage you to play with the parameters of the model and to train it with input sequences of different lengths. Try implementing the code for another instrument (such as guitar). Furthermore, such a character-based model can also be applied to a text corpus to generate sample text, such as a poem.

Also, you can showcase your own personal composer and any similar idea in the World Music Hackathon by HackerEarth.

Have anything to say? Feel free to comment below for any questions, suggestions, and discussions related to this article. Till then, happy coding.

Data visualization for beginners - Part 2

Welcome to Part II of the series on data visualization. In the last blog post, we explored different ways to visualize continuous variables and infer information from them. If you haven’t visited that article, you can find it here. In this blog, we will expand our exploration to categorical variables and investigate ways to visualize and gain insights from them, both in isolation and in combination with other variables (categorical and continuous).

Before we dive into the different graphs and plots, let’s define a categorical variable. In statistics, a categorical variable is one that has two or more categories with no intrinsic ordering between them, for example, gender, color, cities, or age group. If there is some kind of ordering between the categories, the variable is classified as an ordinal variable; for example, car prices categorized as cheap, moderate, and expensive are categories with a clear ordering. A short pandas illustration of the distinction is shown below.
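
To make the nominal vs. ordinal distinction concrete, here is a small pandas illustration using toy values (the colors are made up; the price bands mirror the car-price example above):

import pandas as pd

# A nominal categorical variable: the categories have no intrinsic ordering.
color = pd.Series(['red', 'blue', 'green', 'blue'], dtype='category')
print(color.cat.categories)

# An ordinal variable: the categories carry an explicit order, so comparisons make sense.
price_band = pd.Categorical(['cheap', 'expensive', 'moderate', 'cheap'],
                            categories=['cheap', 'moderate', 'expensive'],
                            ordered=True)
print(price_band.min(), '<', price_band.max())   # cheap < expensive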

# Importing the necessary libraries.  
import numpy as np  
import pandas as pd  
import seaborn as sns  
import matplotlib.pyplot as plt  
%matplotlib inline  

We will be using the Adult data set, which is an extraction of the 1994 census data. The prediction task is to determine whether a person makes more than 50K a year. Here is the link to the dataset. In this blog, we will use the dataset only for data analysis.

# Since the dataset doesn't contain the column header, we need to specify it manually.   
cols = ['age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'gender', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'annual-income']  

# Importing dataset   
data = pd.read_csv('adult dataset/adult.data', names=cols)  
# The first five rows of the dataset.   
data.head()  

Bar graph

A bar chart, or bar graph, is a chart with rectangular bars used to plot categorical values. Each bar represents one category, and its height is proportional to the value it represents.

Bar graphs are used:

  • To make comparisons between variables
  • To visualize any trend in the data, i.e., they show the dependence of one variable on another
  • To estimate values of a variable
# Let's start by visualizing the distribution of gender in the dataset.  
fig, ax = plt.subplots()  
# Counting 'Males' and 'Females' in the dataset  
counts = data.gender.value_counts()  
# Plotting the bar graph; using the value_counts index keeps labels and heights aligned  
ax.bar(counts.index, counts.values)  
ax.set_xlabel('Gender')  
ax.set_ylabel('Count')  
plt.show()  
Fig 1. Bar plot showing the distribution of gender in the dataset

From the figure, we can infer that there are more males than females in the dataset. Next, we will use a bar graph to compare the average number of hours worked per week across genders, split by annual income.

# For this plot, we will be using the seaborn library as it provides more flexibility with dataframes.   
sns.barplot(x='gender', y='hours-per-week', hue='annual-income', data=data)  
plt.show()

So from the figure above, we can infer that, for both males and females, people with an annual income greater than 50K tend to work more hours per week.

Countplot

This is a seaborn-specific function used to plot the count, or frequency distribution, of each category of a categorical variable. It is similar to a histogram computed over a categorical rather than a quantitative variable.

So, let’s plot the number of males and females in the dataset using the countplot function.

# Using countplot to count the number of males and females in the dataset.  
sns.countplot(x='gender', data=data)  
plt.show()  
Fig 3. Distribution of gender using countplot.

Earlier, we plotted the same thing using a bar graph, which required some manual counting on our part. With countplot, we can do the same thing in a single line of code. Next, we will see how to use countplot for deeper insights.

# 'hue' is used to visualize the effect of an additional variable on the current distribution.  
sns.countplot(x='gender', hue='annual-income', data=data)  
plt.show()  
Fig 4. Distribution of gender based on annual income using countplot.

From the figure above, we can count the number of males and females whose annual income is <=50K and >50K. The approximate counts are:

  • Males with annual income <=50K: 15,000
  • Males with annual income >50K: 7,000
  • Females with annual income <=50K: 9,000
  • Females with annual income >50K: 1,000

So, we can infer that out of approximately 32,500 people, only about 8,000 have an income greater than 50K, and only about 1,000 of them are females.
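
These figures are read off the plot only approximately; the exact counts can be verified with a quick cross-tabulation of the same DataFrame (assuming data is loaded as shown above):

# Exact counts of annual-income by gender, to check the approximate values read off the plot.
print(pd.crosstab(data['gender'], data['annual-income']))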


Box plot

Box plots are widely used in data visualization. Box plots, also known as box-and-whisker plots, are used to visualize variation and to compare different categories in a given set of data. A box plot does not display the distribution in detail, but it is useful for detecting whether a distribution is skewed and for spotting outliers. In a box-and-whisker plot:

  • the box spans the interquartile range
  • a line inside the box represents the median
  • two lines outside the box, the whiskers, extend to the highest and lowest observations that are not considered outliers (conventionally, within 1.5 times the interquartile range of the box); points beyond the whiskers are plotted individually as possible outliers
Fig 5. Box and whisker plot.
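
As a quick numeric companion to Fig 5, the box, whiskers, and outlier cut-offs can be computed directly from the data. The sketch below uses the conventional 1.5 × IQR rule on the 'hours-per-week' column of the Adult DataFrame loaded above; note that plotting libraries may apply a slightly different whisker rule.

# Quartiles and the conventional 1.5 * IQR whisker limits for 'hours-per-week'.
hours = data['hours-per-week']
q1, q3 = hours.quantile([0.25, 0.75])
iqr = q3 - q1                                    # the box spans q1 to q3
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr    # whisker limits under the 1.5 * IQR rule
outliers = hours[(hours < lower) | (hours > upper)]
print('IQR:', iqr, '| whisker limits:', (lower, upper), '| outliers:', len(outliers))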

Let’s use a box-and-whisker plot to explore how ‘hours-per-week’ varies with ‘relationship’ status, split by annual income.

# Creating a box plot  
fig, ax = plt.subplots(figsize=(15, 8))  
sns.boxplot(x='relationship', y='hours-per-week', hue='annual-income', data=data, ax=ax)  
ax.set_title('Annual Income of people based on relationship and hours-per-week')  
plt.show()  
Fig 6. Using box plot to visualize how people in different relationships earn based on the number of hours they work per week.

We can interpret some interesting results from the box plot. Within each relationship status, people with an annual income of more than 50K tend to work more hours per week. Similarly, people in the ‘Own-child’ relationship category who earn less than 50K tend to have more variable working hours.
Apart from this, we can also detect outliers in the data. For example, people with the relationship status ‘Not-in-family’ (see Fig 6) and an income of less than 50K show a large number of outliers at both the high and low ends. This seems plausible, since a person who earns less than 50K annually may work more or fewer hours depending on the type of job and employment status.

Strip plot

A strip plot plots the values of a variable as individual points along one axis. It is used to represent the distribution of a continuous variable with respect to the different levels of a categorical variable; for example, a strip plot can show how the number of hours worked per week is distributed for males and females. A strip plot is also a good complement to a box plot or a violin plot when you want to show all of the observations along with some representation of the underlying distribution.

# Using a strip plot to visualize the data.  
fig, ax = plt.subplots(figsize=(10, 8))  
sns.stripplot(x='annual-income', y='hours-per-week', data=data, jitter=True, ax=ax)  
ax.set_title('Strip plot')  
plt.show()  
Fig 7. Strip plot showing the distribution of hours worked per week for each income group.

In the figure, by looking at the distribution of the data points, we can deduce that most people with an annual income greater than 50K work between 40 and 60 hours per week, while those with an income of less than 50K can work anywhere between 0 and 60 hours per week.
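
As mentioned earlier, a strip plot also works well as an overlay: drawing it on top of a box plot keeps every observation visible next to the summary statistics. A minimal sketch of that combination, reusing the same columns, might look like this:

# Overlay a strip plot on a box plot so individual observations are visible alongside the summary.
fig, ax = plt.subplots(figsize=(10, 8))
sns.boxplot(x='annual-income', y='hours-per-week', data=data, ax=ax)
sns.stripplot(x='annual-income', y='hours-per-week', data=data, jitter=True, size=2, color='black', ax=ax)
ax.set_title('Strip plot overlaid on a box plot')
plt.show()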

Violin plot

Sometimes the mean and median are not enough to understand the distribution of a variable in a dataset; the data may be clustered around the maximum or minimum with nothing in the middle. Box plots are a great way to summarize a distribution's key statistics (the median and the interquartile range), but they cannot show the shape of the distribution itself.

A violin plot is a combination of a box plot and a kernel density estimate (KDE, described in Part I of this blog series), and it can be used to visualize the probability distribution of the data. Violin plots can be interpreted as follows:

  • The outer layer shows a kernel density estimate of the data: the thicker the layer, the more data points fall in that range of values, and vice versa.
  • The second layer shows a box plot indicating the interquartile range.
  • The third layer, or the dot, indicates the median of the data.

Fig 8. Representation of a violin plot.

Let’s now build a violin plot. To start with, we will analyze how the number of hours worked per week is distributed for each annual-income group.

fig, ax = plt.subplots(figsize=(10, 8))  
sns.violinplot(x='annual-income', y='hours-per-week', data=data, ax=ax)  
ax.set_title('Violin plot')  
plt.show()  
Fig 9. Violin plot showing the distribution of hours worked per week for each annual-income group.

In Fig 9, the median number of working hours per week is roughly the same (about 40) for people earning less than 50K and for those earning more than 50K. While people earning less than 50K show a much wider spread in the hours they work per week, most of the people who earn more than 50K work in the range of 40 to 80 hours per week.

Next, we can visualize the same distribution, but this time grouping the data by gender.

# Violin plot  
fig, ax = plt.subplots(figsize=(10, 8))  
sns.violinplot(x='annual-income', y='hours-per-week', hue='gender', data=data, ax=ax)  
ax.set_title('Violin plot grouped according to gender')  
plt.show()  
Fig 10. Distribution of annual income based on the number of hours worked per week and gender.

Adding the variable ‘gender’ gives us insight into how many hours each gender spends working per week for each income group. From the figure, we can infer that males with an annual income of less than 50K tend to work more hours per week than females, while for people earning more than 50K, males and females work roughly the same number of hours per week.

Violin plots, although more informative, are used less frequently in data visualization, perhaps because they are harder to grasp at first glance. However, their ability to represent the shape of a distribution is making them increasingly popular among machine learning and data enthusiasts.

PairGrid

PairGrid is used to plot the pairwise relationships between the variables in a dataset. This may seem similar to the pairplot we discussed in Part I of this series. The difference is that instead of drawing all the plots automatically, as pairplot does, PairGrid returns a class instance that lets us map specific plotting functions onto different sections of the grid.

Let’s start by defining the class.

# Creating an instance of the pair grid plot.  
g = sns.PairGrid(data=data, hue='annual-income')  

The variable ‘g’ here is a class instance. If we were to display ‘g’, we would get a grid of empty plots. There are four grid sections we can fill in a PairGrid: the upper triangle, the lower triangle, the diagonal, and the off-diagonal. To fill all the sections with the same plot, we can simply call ‘g.map’ with the type of plot and its parameters.

# Creating scatter plots for all pairs of variables.  
g = sns.PairGrid(data=data, hue='capital-gain')  
g.map(plt.scatter)  
Fig 11. Scatter plot between each variable pair in the dataset.

The ‘g.map_lower’ method fills only the lower triangle of the grid, while ‘g.map_upper’ fills only the upper triangle. Similarly, ‘g.map_diag’ and ‘g.map_offdiag’ fill the diagonal and the off-diagonal of the grid, respectively.

# Here we plot a scatter plot, a histogram, and a violin plot using PairGrid.  
# The vars parameter selects the variables between which the plots are constructed.  
g = sns.PairGrid(data=data, vars=['age', 'education-num', 'hours-per-week'])  

g.map_lower(plt.scatter, color='red')  
g.map_diag(plt.hist, bins=15)  
g.map_upper(sns.violinplot)  
Fig 12. PairGrid showing a different plot for each pair of variables.

Thus, with the help of PairGrid, we can visualize the relationships between the three variables (‘hours-per-week’, ‘education-num’, and ‘age’) using three different plot types, all in the same figure. PairGrid comes in handy whenever you want to combine multiple plots in a single figure.

Conclusion

Let’s summarize what we learned. We started by visualizing the distribution of categorical variables in isolation, then moved on to visualizing the relationship between a categorical and a continuous variable, and finally explored relationships involving more than two variables. Next week, we will explore how to visualize unstructured data. In the meantime, I encourage you to download the census data used in this blog, or any other dataset of your choice, and play with all the variations of the plots covered here. Till then, Adiós!