Deep Learning – Simplified

The buzzword has been around for a few years in the analytics world, with companies investing heavily to fund the research. From understanding human perception to building self-driving cars, deep learning comes with a package of great promises. I was wondering how I could put these concepts in simple terms, which led to this blog post. So let's get started.

What is deep learning?

The foundations of deep learning draw inspiration from the human ability to perceive things as they appear. Human perception is a miracle of nature. Well, let's try answering this: which of the following pictures has a bicycle in it?


Well, I quite liked the banana! But I am sure you would still go for pictures 2 and 3. It would have taken you a second to look at the images and make the classification unconsciously. Underlying this decision, however, is the entire series of visual cortices (V1–V5) in the brain, each of which contains around 140 million neurons with billions of connections between them, forming a network. Yes, we are living supercomputers walking on this planet, yet we fail to recognize the complexity of the problem involved. Computer scientists researching computer vision and natural language processing marveled at this ability of the human brain and sought to help computers mimic it. This brought in the idea of Artificial Neural Networks (ANNs) in the 1980s.

Why not write an algorithm?

Before deep learning, computer scientists wrote machine learning algorithms that take in handcrafted features from images or text. The major drawback of such approaches is that programmers have to constantly tell computers which features to use from the raw data. This puts the burden of feature engineering on the programmer, and it is no wonder such algorithms often perform poorly. It also becomes increasingly difficult for traditional machine learning algorithms to learn complex patterns.

Unlike traditional machine learning algorithms, deep learning (based on ANNs) has moved past this barrier: it uses training examples to let the system learn the features on its own.

What!! But how?

To understand how neural networks work, it is good to first understand how an artificial neuron called a perceptron functions. A perceptron (schematic representation below) takes in inputs (Xi) with specific weights (Wi) that assign importance to each input. The neuron computes a function of the weighted inputs.


For the sake of simplicity, let us assume that the computed function is linear in nature. If the output of the computed function exceeds a specific threshold or bias, the neuron outputs a 1; otherwise a 0. Let's look at the example below.

Let us assume we are on a weekly exercise plan to burn 1,400 calories (the threshold) per week. The inputs (walking, running, swimming) x1, x2, x3 burn 100, 150, and 200 calories per session respectively. It is up to us to plan the weekly schedule of activities to maintain healthy living: the number of days (the weights) we choose for each activity is ours to pick. Let us model this problem with a linear perceptron. By choosing the weights (the number of days we do an activity) and the threshold (calories to burn this week), we decide the output (a healthy week).
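The calorie-plan perceptron can be sketched in a few lines of Python; the number of days chosen for each activity below is an arbitrary illustration:

```python
# Toy linear perceptron for the weekly exercise plan.
# Inputs: calories burned per session of walking, running, swimming.
calories_per_session = [100, 150, 200]   # x1, x2, x3
days_per_week = [4, 3, 2]                # w1, w2, w3 (our choice of weights)
threshold = 1400                         # weekly calorie goal

# Weighted sum of inputs, then compare against the threshold.
weighted_sum = sum(x * w for x, w in zip(calories_per_session, days_per_week))
healthy_week = 1 if weighted_sum >= threshold else 0

print(weighted_sum)   # 100*4 + 150*3 + 200*2 = 1250
print(healthy_week)   # 0 -> this schedule falls short of the goal
```

Changing the weights (say, swimming three days instead of two) pushes the sum past the threshold and flips the output to 1.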


While the idea of a perceptron is only an inspiration from the human brain (we are still far from understanding and replicating how the brain actually works), a neural network is formed by hooking these neurons together, as shown below, to help make complex decisions. The structure of a neural network can be divided into three kinds of layers. The leftmost layer, called the input layer, contains the input neurons; the rightmost layer, called the output layer, contains the output neurons. Unlike the figure below, each of these layers can have multiple nodes. The inner layers of neurons are called hidden layers. The first vanilla version of artificial neural networks is called the Multi-Layer Perceptron (MLP).


A Multi-Layer Perceptron (MLP) is referred to as a supervised network because it requires a label, or desired output, in order to learn. As research in machine learning matured, each of these nodes was replaced by algorithms more sophisticated than the perceptron. A neural network has to be trained initially to learn the abstract model. In a neural network, the outputs of one layer are fed forward as inputs to the next layer; this approach is called feed-forward propagation.
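A minimal sketch of feed-forward propagation, assuming sigmoid activations; the network shape and all weights below are made up purely for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward_layer(inputs, weights, biases):
    """One feed-forward step: each neuron outputs
    sigmoid(weighted sum of its inputs plus a bias)."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

# A tiny 2-3-1 network: the hidden layer's outputs become
# the inputs of the output layer.
x = [0.5, -0.2]
hidden = forward_layer(x,
                       weights=[[0.1, 0.4], [-0.3, 0.8], [0.5, 0.5]],
                       biases=[0.0, 0.1, -0.1])
output = forward_layer(hidden, weights=[[0.7, -0.2, 0.9]], biases=[0.05])
print(output)  # a single activation between 0 and 1
```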


Each of these nodes learns a function and feeds its output forward to the next layer in the network. A node does not always pick the function to be learned correctly. The difference between the expected output and the current output of a node is called the error (shown below). These errors prevent the neural network from learning effectively. Research on and usage of neural networks seemed to hit a trough until the advent of the back-propagation algorithm.


The back-propagation algorithm acts as an error-correcting mechanism at each neuron, thereby helping the network learn effectively. The derivation of how back-propagation solves the problem is beyond the scope of this post.
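To make the idea concrete, here is a toy sketch of the error-correction step for a single sigmoid neuron trained by gradient descent; the inputs, target, and learning rate are arbitrary choices, and a real network repeats this per layer, propagating the error backwards:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x = [1.0, 0.5]      # inputs (arbitrary)
target = 1.0        # desired output
w = [0.0, 0.0]      # weights, initialised to zero
b = 0.0             # bias
lr = 1.0            # learning rate (arbitrary)

for _ in range(200):
    out = sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b)
    # Gradient of the error E = 0.5 * (out - target)^2 with respect
    # to the pre-activation; this is the "correction" pushed back.
    delta = (out - target) * out * (1.0 - out)
    w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
    b -= lr * delta

final = sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b)
print(abs(final - target))  # the error shrinks toward zero
```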

Deep Neural Network

A deep neural network (DNN) is an artificial neural network (ANN) with multiple hidden layers of units between the input and output layers. Each node learns an effective function and passes its knowledge forward. Let us consider the working of one of the deep network architectures used for image classification. Pixels from the input image are converted into feature maps (smaller, more abstract representations). These abstract representations are combined by the next layer into higher-level forms, and finally into a more generic representation.


Let us look at an example of image classification of human faces, as shown below. The input layers learn to recognize lines or edges of the face (the initial feature map). This information is fed forward to nodes in the hidden layers, which learn more abstract concepts such as the eyes and nose. The final layer then assimilates this learning to form the object model of the face.


Research [4] on deep learning is constantly growing, and there are continual updates to the architectures and algorithms used in deep neural networks. Deep learning has been extensively applied in image recognition, natural language processing, audio recognition, and information retrieval. Details of all the architectures and algorithms are beyond the scope of this post and will be handled in a different post.

Hype or Real?

The concept of neural networks that learn has actually existed for decades, but there were major problems building larger networks. Deep learning is partly hyped, and could be seen as a re-branded version of neural networks. But the recent excitement around deep learning is due to its ability to outperform kernel algorithms on standard data sets. Thanks to high-performance computation units such as GPUs, and to better parallel algorithms, neural networks have grown bigger and deeper.


Resources to get Started

  1. Neural Networks and Deep Learning
  2. Deep Learning – Wikipedia, the free encyclopedia
  3. Deng, Li et al.; A Tutorial Survey of Architectures, Algorithms, and Applications for Deep Learning
  4. Deep Learning, Self-Taught Learning and Unsupervised Feature Learning

Crack Your Next Data Science Interview

Preparing for a data science interview might seem like a huge mountain to climb, with a wide variety of topics piled in front of you. But it isn't as hard as it seems.


The time is now!!

Having a wide range of topics to cover calls for setting aside time and preparing meticulously. Interview questions can range from explaining logistic regression to a five-year-old to tuning the parameters of a model. Set aside time every day and religiously sit down to prepare, one topic at a time. With consistent effort it is easier to get to the top of the mountain. From experience, below are the topics to cover to ace your next data science interview.

With such a wide variety of topics, it is entirely possible to get sucked into one of these rabbit holes. This makes it necessary to set SMART goals and prepare towards them.


Below are the steps which I personally followed to prepare for my interviews.

  1. Review your background and prepare a list of topics you want to cover. Data scientists come from different backgrounds such as political science, statistics, and software engineering, so it is important to understand your weak links and prepare to strengthen them.
  2. Write down your goals and prepare a schedule to work on the small weak links. By writing down your goals you create a subconscious wiring to work towards them.
  3. Make a commitment by setting time aside every day to religiously study the topics on your weak-links list.
  4. Attend interviews: attending interviews is another way to get feedback, understand your weak links, and iterate on them.
  5. Review your goals: set weekly review meetings with yourself to assess your current preparation.

While these steps are important, below are the topics that are essential for a data scientist to know.

Basic Mathematics

To become a good data scientist, one must be able to deliver insights from data, and that requires a decent understanding of mathematical concepts. Go through refreshers on linear algebra, probability, and statistics.
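A few of these basics can be rehearsed directly with Python's standard library; the numbers below are a made-up sample:

```python
import statistics

# Descriptive statistics: mean and population variance of a sample.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(statistics.mean(data))       # 5.0
print(statistics.pvariance(data))  # 4.0

# A linear algebra basic: the dot product of two vectors.
u = [1, 2, 3]
v = [4, 5, 6]
dot = sum(a * b for a, b in zip(u, v))
print(dot)  # 1*4 + 2*5 + 3*6 = 32
```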

Asking the right questions

This is learned more by practice than by teaching. Many employers look for curiosity and the candidate's ability to ask questions that extract insights from the data. Take up a totally unfamiliar data set, practice asking questions, and look for the answers. With this approach you will improve your questions and strengthen your ability to find answers.

Applied machine learning

It is important to understand the basic algorithms in machine learning. Interviewers focus on how the candidate formulates the problem and on the ability to translate a business problem into an analytical one. If you are new to machine learning, a good place to start is to enroll in a course or learn from the web. Do check out the data science specialization at Coursera and the nanodegrees at Udacity; these are a great place to start.
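As one example of a basic algorithm worth being able to explain from scratch, here is a minimal 1-nearest-neighbour classifier in plain Python; the data points and labels are made up:

```python
import math

def predict_1nn(train_points, train_labels, query):
    """Label the query with the label of its closest training point."""
    dists = [math.dist(p, query) for p in train_points]
    return train_labels[dists.index(min(dists))]

# Two clearly separated clusters of 2-D points.
points = [(1.0, 1.0), (1.2, 0.8), (8.0, 9.0), (9.0, 8.5)]
labels = ["small", "small", "large", "large"]

print(predict_1nn(points, labels, (1.1, 1.0)))  # "small"
print(predict_1nn(points, labels, (8.5, 9.0)))  # "large"
```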

Learn white board coding

This is similar to a software engineering position, where interviewers test the candidate's ability to define, analyze, solve, and test the problem at hand. It is important to brush up on algorithms and data structures. Whiteboard coding is part of many product-oriented data science interviews, where data scientists are expected to be good programmers. There are tons of websites and books to get you started here.

Get the right tools

Though there is a wide range of tools for analytics, the top choices of many data scientists are Python and R. Both languages have great machine learning libraries, and both are good to know and have in your toolbox.

 Be a data hacker

Learn data wrangling and munging techniques in your language of choice. This helps you get up to speed with any given data set.
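Even without a dedicated library, the day-to-day munging moves (parse, drop missing values, aggregate) can be rehearsed with the standard library; the toy CSV below is made up:

```python
import csv
import io

# A small CSV with one missing temperature reading.
raw = """city,temp
London,15
Paris,
Berlin,12
London,17
"""

rows = list(csv.DictReader(io.StringIO(raw)))
cleaned = [r for r in rows if r["temp"]]  # drop rows with a missing temp

# Group by city and average the readings.
by_city = {}
for r in cleaned:
    by_city.setdefault(r["city"], []).append(float(r["temp"]))
averages = {city: sum(v) / len(v) for city, v in by_city.items()}

print(averages)  # {'London': 16.0, 'Berlin': 12.0}
```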

 Understand databases

Relational databases are part of every industry, and it is important to learn the basics of databases and how to write efficient queries.
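A quick way to practise query basics is SQLite's in-memory mode, available from Python's standard library; the `orders` table below is a made-up example:

```python
import sqlite3

# Build a throwaway in-memory database to practise against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 30.0), ("bob", 20.0), ("alice", 50.0)])

# A typical interview staple: total per customer, largest first.
rows = conn.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY SUM(amount) DESC"
).fetchall()

print(rows)  # [('alice', 80.0), ('bob', 20.0)]
```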

 Learn Data Visualization

The best way to start understanding data is to visualize it. Choose and learn visualization techniques in a tool of your choice. Though it may not come up in an interview, it is a must-have skill for a good data scientist.


Practicing the theoretical concepts you learn will help you develop a better understanding of them and identify your weaknesses quickly.

Research about the role

Along with preparing for the interview, it is essential to align your skills to the type of data science role you are looking for. Think about what kind of data scientist you want to be and what type of team you would like to be part of. Ask appropriate questions to understand the requirements of the role and tailor your preparation accordingly. Look up the profiles of the people who will be interviewing you to understand their backgrounds and the roles they perform at the company; this will help you anticipate the type of questions you can expect. Identify the type of role the employer is looking to fill, take time to understand the job description, and focus your preparation in that direction, working on your weaknesses for that kind of role. Below are the simplified types of data scientists employers commonly look for.

Business Savvy Data Scientist

The business-savvy data scientist focuses on building analytic solutions to help business users and final decision makers. They help a company understand the underlying problems of a marketing campaign, understand churn, or discover what interests its customers. Communication and storytelling play a major role in these positions, as they involve conveying value to non-technical people. These data scientists do not have to build complex models, but they must unearth the value in the data to answer the questions of why and how.

Product Savvy Data Scientist

The other type of data scientist focuses on building products to help businesses. They build highly complex models using sophisticated statistical and machine learning algorithms, and they concentrate on improving model performance where it has a direct impact on the company's product. They need to possess good statistical skills and solid computer science skills.

Hope the above steps help you crack your next data science interview. Don't wait to make your next leap.


