Supervised Learning made easy

In the previous articles, we had a short introduction to artificial intelligence and machine learning. In this article, we introduce the first technique of machine learning: Supervised Learning.

What is Supervised Learning?

Wikipedia gives this definition of Supervised Learning:

Supervised learning (SL) is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from labeled training data consisting of a set of training examples. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal).

However, for a better understanding of how a supervised algorithm works, let’s define some basic terms that are useful for building our algorithms.

As we know, the fuel of every machine learning algorithm is data: a supervised learning algorithm takes as input a dataset, which is a collection of labelled data.

Figure 1 – An example of labelled data

More formally, we can describe a dataset of labelled data in this way:

    \[ \{(x_i, y_i)\}_{i=1}^{N} \]

Each element x_i, with i = 1, ..., N, is called a feature vector. A feature vector is a vector in which each dimension j = 1, ..., D (where D is the number of features) contains a value that describes the example under analysis. This value is called a feature and is denoted x^(j).

The goal of supervised learning is to predict the label of new input data based on the features present in the data, and in this way classify the data itself. Figure 1 shows how a feature works: in the figure we see the labelled data, Dog, together with different weights, for example 18 lbs or 14 lbs. A feature is essentially a characteristic connected with the data, and features are what matter when we want to classify.
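
To make the notation concrete, here is a minimal Python sketch of a labelled dataset with N = 4 examples, where each feature vector has a single feature (the weight in lbs). The specific numbers and the second label are invented for illustration.

    # A toy labelled dataset {(x_i, y_i)} for i = 1..N, with N = 4.
    # Each x_i is a feature vector with one feature, x^(1) = weight in lbs,
    # and each y_i is the corresponding label.
    X = [[18.0], [14.0], [9.0], [7.5]]   # feature vectors x_i
    y = ["Dog", "Dog", "Cat", "Cat"]     # labels y_i (invented for illustration)

    for x_i, y_i in zip(X, y):
        print(f"features={x_i} -> label={y_i}")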

Classification vs Regression

When we talk of Supervised Learning, the idea is generally to solve two major types of problems:

  • Classification
  • Regression

A classification problem is a type of problem where machine learning tries to classify the data based on previous observations.

A typical example is a spam filter: the filter learns common rules to classify a mail as spam or not. This type of classification is called binary classification.

Another type of classification is called multiclass classification. This type of classification can be used, for example, to recognize different types of animals: we feed the algorithm with different pictures, and the algorithm classifies the animal in unseen data.

The essence of classification is to assign a categorical label to unordered and unseen data.

There is another important task connected with Supervised Learning, called regression.

Regression analysis is used to predict a continuous outcome variable based on one or more predictor variables. If we want to be more formal, regression analysis finds a correlation between some independent variables and one dependent variable. A classical example of regression analysis is predicting the price of a house based, for example, on the square feet or the location.

Algorithms for Supervised Learning

We now have an idea of supervised learning and of the types of problems we can solve with this technique. For a better understanding of how to use it, let’s see the major algorithms used in supervised learning.

K-Nearest Neighbor

The KNN, or K-Nearest Neighbor, is a simple algorithm used to classify data based on the similarity of the data. In the algorithm, ‘k’ identifies the number of neighbours considered around a data point: the algorithm looks at the values of those neighbours, and a new data point is classified by the majority vote of its k nearest neighbours.
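
As a small sketch of the idea, the snippet below uses scikit-learn’s KNeighborsClassifier (assuming scikit-learn is installed) on the toy weight dataset from Figure 1; the data points and labels are invented for illustration.

    from sklearn.neighbors import KNeighborsClassifier

    # Toy training data: one feature (weight in lbs) and a label per example.
    X_train = [[18.0], [14.0], [9.0], [7.5]]
    y_train = ["Dog", "Dog", "Cat", "Cat"]

    # k = 3: a new point is labelled by the majority vote of its 3 nearest neighbours.
    knn = KNeighborsClassifier(n_neighbors=3)
    knn.fit(X_train, y_train)

    print(knn.predict([[16.0]]))  # likely ['Dog'], since the heavier neighbours dominate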

Naive Bayes

The Naive Bayes classifier is a probabilistic algorithm based on Bayes’ Theorem and is used to solve classification problems. Being a probabilistic classifier means the classification is done using the probability of an event, computed with Bayes’ Theorem:

    \[ P(A|B) = \frac{P(B|A) P(A)}{P(B)} \]

Where:

  • P(A|B) is the posterior probability, the probability of hypothesis A given the observed evidence B
  • P(B|A) is the likelihood, the probability of observing the evidence B given that hypothesis A is true
  • P(A) is the prior probability, the probability of the hypothesis before observing the evidence
  • P(B) is the marginal probability, the probability of the evidence

Naive Bayes is commonly used for spam filtering and text classification.
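
As an illustration, here is a minimal sketch of a Naive Bayes spam classifier using scikit-learn’s CountVectorizer and MultinomialNB (assuming scikit-learn is installed); the example messages are invented.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Tiny invented training corpus of labelled messages.
    messages = [
        "win a free prize now",
        "cheap meds limited offer",
        "meeting rescheduled to monday",
        "please review the attached report",
    ]
    labels = ["spam", "spam", "ham", "ham"]

    # Turn each message into a vector of word counts (the feature vector).
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(messages)

    # Fit the probabilistic classifier based on Bayes' theorem.
    clf = MultinomialNB()
    clf.fit(X, labels)

    new_message = ["free prize offer"]
    print(clf.predict(vectorizer.transform(new_message)))  # likely ['spam']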

Decision Trees

Decision Trees are used for both problems of supervised learning, Classification and Regression; for this reason, they are sometimes called Classification And Regression Trees (CART).

In Decision Trees, the prediction of the response is made by learning simple decision rules derived from the features.

In decision analysis, a decision tree can be used to visualize the data and to explicitly represent decisions and decision making.
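
The following sketch (assuming scikit-learn is installed) trains a small DecisionTreeClassifier on the toy weight data and prints the learned decision rules, which is one simple way to make the decisions explicit; the data are invented for illustration.

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy data: one feature (weight in lbs), labels invented for illustration.
    X = [[18.0], [14.0], [9.0], [7.5]]
    y = ["Dog", "Dog", "Cat", "Cat"]

    tree = DecisionTreeClassifier(max_depth=2, random_state=0)
    tree.fit(X, y)

    # Print the if/else rules the tree has learned from the feature.
    print(export_text(tree, feature_names=["weight_lbs"]))
    print(tree.predict([[12.0]]))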

Linear Regression

Linear Regression is one of the most basic algorithms in Machine Learning: with the linear regression algorithm, the model tries to find the best linear relationship between the dependent and independent variables.

Linear regression can be split into two main types:

  • Simple Linear Regression, where we have only one independent variable
  • Multiple Linear Regression, where we have more than one independent variable

In both cases, the model tries to find the correlation between the independent variable and the dependent one.
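
Below is a minimal sketch of simple linear regression with scikit-learn (assuming it is installed), predicting a house price from the square feet; the numbers are invented for illustration.

    from sklearn.linear_model import LinearRegression

    # Invented data: square feet (independent variable) vs price (dependent variable).
    X = [[800], [1000], [1500], [2000], [2500]]
    y = [160_000, 200_000, 300_000, 400_000, 500_000]

    model = LinearRegression()
    model.fit(X, y)

    # The model has learned a line: price = coef * sqft + intercept.
    print(model.coef_, model.intercept_)
    print(model.predict([[1800]]))  # estimated price for 1800 sq ft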

Support Vector Machine (SVM)

Support Vector Machine is another algorithm that can be used in both cases, classification and regression.

The SVM algorithm plots each data item as a point in an n-dimensional space, where n is the number of features in the model, and each point represents a coordinate in that space.

With the coordinates defined, classification is performed by finding the hyperplane that best separates the data.
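
As a sketch (assuming scikit-learn is installed), the snippet below fits a linear SVC on two-dimensional toy data, so the separating hyperplane is simply a line in the plane; the points are invented for illustration.

    from sklearn.svm import SVC

    # Invented 2-D data: each point is [feature_1, feature_2].
    X = [[1.0, 2.0], [1.5, 1.8], [2.0, 2.2],   # class 0
         [6.0, 7.0], [6.5, 6.8], [7.0, 7.5]]   # class 1
    y = [0, 0, 0, 1, 1, 1]

    # A linear kernel looks for the hyperplane (here a line) separating the classes.
    svm = SVC(kernel="linear")
    svm.fit(X, y)

    print(svm.predict([[2.5, 2.5], [6.2, 7.1]]))  # likely [0 1]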

Conclusion

In this article, we introduced supervised learning, the first family of algorithms we cover for Artificial Intelligence and Machine Learning.

In the next articles, we will start to see how to implement some of the algorithms presented and how they can be used to build our Machine Learning models.

If you are interested in going deeper into these algorithms, there are some amazing books on the subject.

If you like the article, feel free to add a comment and ask any questions you may have.

Machine Learning: an introduction

What is Machine Learning?

Machine Learning is actually everywhere in our daily life. When Netflix suggests the next film to watch, or when Spotify suggests a song that matches our taste, these are examples of applied machine learning. But how does all this software work, and what are the ingredients of a machine learning system?

How does a machine learn?

Machine Learning is a subfield of artificial intelligence. As we saw in the previous article, the two terms are often used interchangeably, but they are different: machine learning is a set of algorithms that allows a machine to learn how to perform a specific task based on data. But how can a machine learn?

First of all, it is important to understand the basis of machine learning and how it helps a machine to “learn”, and it all starts from the data.

In our modern world, we are overwhelmed by data: every single minute of our life we produce or consume data. Just think of every time you read a post on Facebook and leave a like, or post a tweet on Twitter; this incredible amount of data is the fuel for machine learning algorithms.

In machine learning, we can adopt three major techniques for learning:

  • Supervised learning
  • Unsupervised learning
  • Reinforcement learning

Supervised learning is a technique where the data are labelled; this helps the computer to correctly classify the data, which means the data are grouped by type.

Figure 1 – The MNIST dataset, an example of Supervised Learning

On the other side, we can find the unsupervised learning technique, where the data are not labelled: the algorithm classifies the data based on the data itself, for example by finding common information.

The last technique for learning is called reinforcement learning, and it is probably the most complex. In reinforcement learning, we create a system, aka an agent, that learns based on the feedback it receives from the external environment; for example, in a game, a win or a loss can indicate a correct or wrong path to follow.

Both techniques, supervised and unsupervised, have a common base: they learn from the data, and data are the fuel for machine learning algorithms.

The last technique, reinforcement learning, is instead based on a different principle: it uses the feedback received to improve and learn.

Supervised learning

Supervised learning is a family of algorithms used in machine learning to make predictions based on evidence in the presence of uncertainty.

This technique is the simplest in machine learning, and it’s similar to what we do when we teach our kids: we provide examples, the labelled data, and teach them how to use and recognize the data.

Supervised learning uses two specific techniques for developing machine learning models:

  • Classification
  • Regression

Classification is used for classification problems such as recognizing whether a mail is genuine or spam, or classifying a tumour as benign or malignant. A binary classification splits the data into two specific categories, while a multiclass classification handles more than two.

Common classification algorithms are:

  • Logistic Regression
  • Naive Bayes
  • K-Nearest Neighbors
  • Support Vector Machine
  • Decision Tree

We will see all of these algorithms in detail in future articles; for now, let’s just say these are all algorithms for classification.

Regression is used to predict a continuous response. It is used, for example, to forecast values in a financial market, or to understand the variation of a temperature or of a house price.

Common regression algorithms are:

  • Multiple Linear Regression
  • Support Vector Machine
  • Decision Tree

We will go deeper into classification and regression problems in future articles, where we can clearly identify each problem and gain a better understanding of the algorithms.

Unsupervised Learning

In supervised learning, we have labelled data and we apply an algorithm to predict and assign a label to new data. With unsupervised learning, the learning is different.

Unsupervised learning is used to learn directly from the data, without any prior knowledge about the data. The principal algorithms for unsupervised learning are:

  • Clustering
  • Neural networks

We will see more about unsupervised learning later in the series, and we will see how to use it and how powerful this technique is. Let’s now introduce the last of the three types of machine learning: reinforcement learning.
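
As a brief sketch of clustering (assuming scikit-learn is installed), the snippet below groups unlabelled 2-D points with KMeans: no labels are given, and the algorithm finds the groups from the data itself. The points are invented for illustration.

    from sklearn.cluster import KMeans

    # Unlabelled, invented 2-D points: no y labels are provided.
    X = [[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
         [8.0, 8.2], [8.1, 7.9], [7.9, 8.1]]

    # Ask KMeans to find 2 groups based only on the data itself.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
    kmeans.fit(X)

    print(kmeans.labels_)           # which cluster each point was assigned to
    print(kmeans.cluster_centers_)  # the centre of each discovered group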

Reinforcement learning

The last technique of machine learning we present is reinforcement learning. This technique is probably the most complex, but it is also probably the closest to the human way of learning.

The goal of reinforcement learning is to develop a system, aka an agent, that improves its performance based on the interaction with the environment; this means every result predicted by the agent will be validated by the environment.

In reinforcement learning, the agent receives a reward for every interaction; this reward indicates how well the model scores against the result we want to achieve. One good example of reinforcement learning is an engine for the game of Go: the agent tries to predict the next move based on the current position of the stones on the board; if the prediction is right, the reward will be positive, and if the prediction is wrong, the reward will be negative.
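
A full Go engine is far beyond a snippet, but as a toy sketch of the reward idea, the agent below repeatedly chooses between two actions and updates its value estimates from the rewards it receives (a simple epsilon-greedy bandit; the reward probabilities are invented for illustration).

    import random

    # Invented environment: action 1 pays a reward more often than action 0.
    REWARD_PROB = [0.3, 0.7]

    def environment(action):
        """Return reward 1 with the action's probability, otherwise 0."""
        return 1 if random.random() < REWARD_PROB[action] else 0

    # Agent state: estimated value and pull count for each action.
    values = [0.0, 0.0]
    counts = [0, 0]
    epsilon = 0.1  # exploration rate

    for step in range(1000):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = max(range(2), key=lambda a: values[a])

        reward = environment(action)

        # Update the running average of the reward for the chosen action.
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]

    print(values)  # the agent should estimate roughly [0.3, 0.7]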

Conclusion

In this article, we had a short introduction to the basic types of machine learning. We saw the three main types of machine learning and the main differences between them.

In the next articles, we go deeper into the different types, starting with Supervised Learning. Machine Learning is one of the most fascinating fields in computer science, and in this article and in the rest of the series, I try to help you develop some basic knowledge about machine learning.

An introduction to Artificial Intelligence

Artificial Intelligence is the science of making machines do things that would require intelligence if done by men – Marvin Minsky

Artificial Intelligence is a term first coined at Dartmouth College in 1956 by the computer scientist John McCarthy, with the cognitive scientist Marvin Minsky among the workshop’s organizers.

When we talk about Artificial Intelligence, we talk about a set of different mathematical algorithms used to learn from data.

Types of Artificial Intelligence

By definition, Artificial Intelligence is the science of making a machine do things that would require intelligence.

Based on the definition we can define three types of artificial intelligence:

  • Artificial Narrow Intelligence, aka Weak AI, is the type of AI we see every day. This is a basic type of intelligence used in our current technology, and it is built for one scope only; examples of this type of intelligence are voice assistants like Siri or Alexa, ranking systems, and NLP systems
  • Artificial General Intelligence is when the machine is able to learn and generalize like a human does and to produce generic results; for some experts, this type of artificial intelligence is impossible to achieve
  • Artificial Super Intelligence is a type of artificial intelligence whose intelligence is higher than what a human can achieve

As we can see, there are different types of artificial intelligence, at least at the theoretical level, and they all have in common the goal of making a machine learn.

Domains of artificial intelligence

In this article, we talk about Weak AI, the AI we actually see and work with every day.

Figure 1 – The domain of the artificial intelligence

Figure 1 shows the subdomains of artificial intelligence: Machine Learning and Deep Learning.

Machine Learning is essentially a set of algorithms that allows a machine to learn; Deep Learning, on the other side, is a subset of methods directly derived from Machine Learning.

Machine Learning and Deep Learning

Machine Learning and its subfield Deep Learning are the core of today’s artificial intelligence, the weak AI.

Machine Learning is a subfield of computer science; the aim of machine learning is to learn from data how to execute a task for which the machine is not directly programmed.

The capability to learn from data is an important capability for machine learning, and for artificial intelligence in general, because, similarly to how a man, or an animal, learns from experience, a computer learns from data.

Deep Learning is a subset of machine learning: it uses a subset of the algorithms designed for machine learning and is used to solve practical industrial problems. Some deep learning applications are, for example, computer vision and natural language processing/natural language understanding.

Deep Learning differs from machine learning, and in some way expands it, because in deep learning we use a family of algorithms called artificial neural networks. This family of algorithms is inspired by the human brain.

Conclusion

In this short article, we introduced what artificial intelligence is and what its basic components are. This is the first of a series of articles where we explore the different methods and algorithms used in artificial intelligence.

Each article goes deeper into a specific area of the field and is designed to give you some practical knowledge of the exciting science of artificial intelligence.