Boosting


Boosting refers to an ensemble method in which many predictors are trained sequentially and each predictor learns from the errors of its predecessors.

In boosting, many weak learners (classifiers with low accuracy) are combined to form a strong learner (a classifier with good accuracy). The ensemble of predictors is trained sequentially, and each predictor tries to correct the errors made by its predecessor.

Why is boosting used?

Suppose you are given a data set containing images of dogs and cats, and you are asked to classify the images into these two classes. You might consider some distinguishing features, such as:

  1. The image has pointy ears: Cat
  2. The image has cat-shaped eyes: Cat
  3. The image has bigger limbs: Dog
  4. The image has sharp claws: Cat
  5. The image has a wider mouth structure: Dog
Each of these rules, together with other similar considerations, helps identify whether an image shows a dog or a cat. However, if we rely on any single rule to classify an image, the prediction will often be flawed. Each individual rule is called a weak learner, because it is not strong enough on its own to classify images. By combining all of them, we build a strong learner: when an image is passed through all the rules, their combined result tells us whether it belongs to the cat class or the dog class, as the sketch below illustrates.
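As a toy illustration, the sketch below (not from the original write-up) encodes each of the rules above as a small Python function and combines them with a simple majority vote. The feature names such as pointy_ears and wide_mouth are hypothetical. Real boosting combines its weak learners sequentially and weights their votes, but the core idea of many weak rules producing one stronger decision is the same.

```python
# Toy illustration: each "weak learner" is one of the simple rules listed
# above, applied to hypothetical image features. The feature names are
# made up for this example.

def rule_pointy_ears(img):
    return "cat" if img["pointy_ears"] else "dog"

def rule_eye_shape(img):
    return "cat" if img["cat_shaped_eyes"] else "dog"

def rule_limb_size(img):
    return "dog" if img["big_limbs"] else "cat"

def rule_claws(img):
    return "cat" if img["sharp_claws"] else "dog"

def rule_mouth(img):
    return "dog" if img["wide_mouth"] else "cat"

weak_learners = [rule_pointy_ears, rule_eye_shape, rule_limb_size,
                 rule_claws, rule_mouth]

def strong_learner(img):
    # Combine the weak rules by majority vote to form a stronger prediction.
    votes = [rule(img) for rule in weak_learners]
    return max(set(votes), key=votes.count)

example = {"pointy_ears": True, "cat_shaped_eyes": True,
           "big_limbs": False, "sharp_claws": True, "wide_mouth": False}
print(strong_learner(example))  # -> "cat"
```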

What is Boosting?
Boosting is an ensemble learning technique that combines a set of machine learning models, converting weak learners into a strong learner in order to increase the accuracy of the model.
Working of Boosting:
The basic principle behind the working of the boosting algorithm is to generate multiple weak learners and combine their predictions to form one strong rule.

  • Step 1: The base algorithm reads the data and assigns an equal weight to each sample observation.
  • Step 2: The samples that the base learner predicted incorrectly are identified. In the next iteration, the next base learner is trained on the data with a higher weight assigned to these misclassified samples.
  • Step 3: Repeat Step 2 until the algorithm can correctly classify the output (or until a fixed number of weak learners has been trained).
Therefore, the main aim of boosting is to focus more and more on the misclassified samples. A minimal sketch of this re-weighting loop is shown below.
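The sketch below is a minimal illustration of this loop in the spirit of AdaBoost, assuming scikit-learn and NumPy are available. The synthetic dataset, the number of rounds, and the decision-stump base learner are illustrative choices, not part of the description above.

```python
# Minimal AdaBoost-style sketch of the re-weighting loop described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)
y = np.where(y == 0, -1, 1)              # work with labels in {-1, +1}

n_rounds = 10
weights = np.full(len(X), 1 / len(X))    # Step 1: equal weight per sample
learners, alphas = [], []

for _ in range(n_rounds):
    # Fit a weak learner (a decision stump) on the weighted data.
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y, sample_weight=weights)
    pred = stump.predict(X)

    # Step 2: identify misclassified samples and compute the weighted error.
    miss = pred != y
    err = np.sum(weights[miss]) / np.sum(weights)
    alpha = 0.5 * np.log((1 - err) / (err + 1e-10))  # this learner's vote strength

    # Increase the weight of misclassified samples for the next learner.
    weights *= np.exp(alpha * miss)
    weights /= weights.sum()

    learners.append(stump)
    alphas.append(alpha)

# Step 3: the strong learner is the weighted vote of all weak learners.
def strong_predict(X_new):
    votes = sum(a * l.predict(X_new) for a, l in zip(alphas, learners))
    return np.sign(votes)

print("training accuracy:", np.mean(strong_predict(X) == y))
```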

Types of Boosting:
  1. Adaptive Boosting (AdaBoost)
  2. Gradient Boosting
  3. XGBoost (Extreme Gradient Boosting)
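For reference, the snippet below sketches how these three variants are commonly invoked, assuming scikit-learn for AdaBoost and Gradient Boosting and the separate xgboost package for XGBoost. The dataset and hyperparameters are arbitrary illustrative choices.

```python
# Quick comparison sketch of the three boosting variants listed above.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier   # requires: pip install xgboost

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "AdaBoost": AdaBoostClassifier(n_estimators=100, random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(n_estimators=100, random_state=0),
    "XGBoost": XGBClassifier(n_estimators=100, eval_metric="logloss"),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```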
