AdaBoost
AdaBoost is a boosting algorithm that also works on the principle of stagewise addition, where multiple weak learners are combined to obtain a strong learner. Unlike Gradient Boosting and XGBoost, AdaBoost computes an alpha parameter for each weak learner from that learner's weighted error; the value of alpha is inversely proportional to the error of the weak learner.
Once the alpha parameter is calculated, it determines how much say each weak learner gets in the final prediction: learners with lower error receive a higher alpha and therefore more influence, while poorly performing learners receive less. At the same time, the training samples that the current weak learner misclassifies are given larger weights, so the next weak learner focuses on the examples where the ensemble is still making mistakes.
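The weighting step above can be sketched numerically. This is an illustrative sketch (not scikit-learn internals), using the classic AdaBoost formula alpha = 0.5 * ln((1 - err) / err) and the exponential re-weighting of samples; the function name and toy weights are made up for the example.

```python
import numpy as np

# Alpha for a weak learner with weighted error err:
# smaller error -> larger alpha (more say in the final prediction).
def alpha_for_error(err):
    return 0.5 * np.log((1.0 - err) / err)

# A learner barely better than chance gets a small alpha,
# a more accurate learner gets a much larger one.
print(alpha_for_error(0.45))   # small say
print(alpha_for_error(0.10))   # large say

# Sample re-weighting: misclassified points are up-weighted
# so the next weak learner concentrates on them.
weights = np.array([0.25, 0.25, 0.25, 0.25])
misclassified = np.array([True, False, False, False])
alpha = alpha_for_error(0.25)  # weighted error of this round = 0.25
weights = weights * np.exp(np.where(misclassified, alpha, -alpha))
weights /= weights.sum()       # renormalize to sum to 1
print(weights)                 # the misclassified sample now dominates
```

Note how the single misclassified sample ends up carrying half of the total weight after one round, which is exactly how AdaBoost "fills the gap" left by the previous learner.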
Python3
from sklearn.ensemble import AdaBoostRegressor
from sklearn.metrics import r2_score

adr = AdaBoostRegressor()
adr.fit(X_train, y_train)
y_pred3 = adr.predict(X_test)
print("AdaBoost - R2:", r2_score(y_test, y_pred3))
Output:
AdaBoost - R2: 0.796880734337689
GradientBoosting vs AdaBoost vs XGBoost vs CatBoost vs LightGBM
Boosting algorithms are among the best-performing Machine Learning algorithms, often achieving the highest accuracies. All boosting algorithms work by learning from the errors of the previously trained model and trying to avoid repeating the mistakes made by the earlier weak learners.
This is also a common question in data science interviews. In this article, we will discuss the main differences between the GradientBoosting, AdaBoost, XGBoost, CatBoost, and LightGBM algorithms, along with their working mechanisms and the mathematics behind them.