This article introduces Stacking, an ensemble learning method, and hopes to serve as a practical reference for developers.
The Stacking Algorithm
Core Idea
Stacking is a stacked, multi-stage model: first-stage base models produce predictions, which are then fed into a second-stage model that fuses them. By reducing model variance, this fusion can achieve higher predictive accuracy.
Algorithm Steps
The algorithm steps are illustrated in the figure below (adapted from a reference blog).
We first split the dataset into a training set and a test set, say 10000 and 2000 samples respectively. We then apply cross-validation: the training set is divided into 5 folds, four of which serve as the actual training data in each round while the remaining fold acts as the validation set. This yields 5 validation sets of 2000 samples each; stacking their predictions vertically gives a 10000-row result (one row per training sample) used as training data for the second-stage model. At the same time, the model trained in each round predicts the test set, generating test data for the second stage. In the end, each first-stage base model produces one 10000-row out-of-fold result and a 2000-row test result: the out-of-fold results of all base models are concatenated side by side into a 10000 × (number of base models) matrix that becomes the final training data for the second-stage model, while the per-round test-set predictions are averaged into a single 2000-row result that becomes the final test data.
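To make these shapes concrete, here is a minimal hand-rolled sketch of the first stage for a single base model (the sizes, the KFold setup, and the helper name first_stage_features are illustrative assumptions, not part of any library):

import numpy as np
from sklearn.model_selection import KFold

def first_stage_features(model, X_train, y_train, X_test, n_splits=5):
    # Out-of-fold predictions: one prediction per training row (10000 in the example above)
    oof = np.zeros(len(X_train))
    # One full set of test predictions per fold (5 x 2000 in the example above)
    test_preds = np.zeros((n_splits, len(X_test)))
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for i, (tr_idx, val_idx) in enumerate(kf.split(X_train)):
        model.fit(X_train[tr_idx], y_train[tr_idx])
        oof[val_idx] = model.predict(X_train[val_idx])  # fill this fold's validation slice
        test_preds[i] = model.predict(X_test)           # predict the whole test set each fold
    # Return one 10000-row training column and one 2000-row averaged test column
    return oof, test_preds.mean(axis=0)

Calling this for every base model and concatenating the returned columns side by side (e.g. with np.column_stack) produces the 10000 × (number of base models) second-stage training matrix and the matching 2000-row test matrix.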
Difference from Blending
Compared with Stacking, Blending (see the post on ensemble learning with Blending) trains its second-stage model only on predictions for a held-out validation split, so a large portion of the data never reaches the meta-model and is effectively wasted. Stacking instead uses cross-validation, so the data obtained for the second stage still covers the entire training set.
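The difference is easy to see in code. A rough sketch, assuming a training set X_train, y_train and some base estimator base (both hypothetical names here):

from sklearn.model_selection import train_test_split, cross_val_predict

# Blending: the meta-model only sees predictions on a single held-out slice
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.3)
meta_features_blend = base.fit(X_tr, y_tr).predict(X_val)   # covers only ~30% of the rows

# Stacking: out-of-fold predictions exist for every training row
meta_features_stack = cross_val_predict(base, X_train, y_train, cv=5)  # covers 100% of the rows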
Pros and Cons
Stacking's advantages are similar to Blending's: it leans toward engineering practice, requires little mathematical theory, is easy to understand, offers good model extensibility, and makes better use of the training data.
The drawback is that the model is often expensive to train. It is also worth noting that when the first stage produces its cross-validated predictions, those predictions have in effect been exposed to the labels of the other folds, which introduces a degree of data leakage; as a result the model can look better than it really is and exhibit slight overfitting.
Code Implementation (on the Iris Dataset)
The code below demonstrates both StackingCVClassifier from the mlxtend toolkit and the Stacking implementation built into sklearn.
# 1. Stacking with the mlxtend toolkit
from sklearn import datasets
iris = datasets.load_iris()
X, y = iris.data[:, 2:], iris.target
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from mlxtend.classifier import StackingCVClassifier

RANDOM_SEED = 2

clf1 = KNeighborsClassifier(n_neighbors=2)
clf2 = RandomForestClassifier(random_state = RANDOM_SEED)
clf3 = GaussianNB()
clf4 = make_pipeline(StandardScaler(),SVC())
lr = LogisticRegression()

# estimators = [('knn', clf1), ('rf', clf2), ('nb', clf3), ('svm', clf4)]
estimators = [clf1, clf2, clf3, clf4]
sclf = StackingCVClassifier(classifiers=estimators, meta_classifier=lr, random_state=RANDOM_SEED)

print('3-fold cross validation:\n')
for clf, label in zip([clf1, clf2, clf3, clf4, sclf],
                      ['KNN', 'Random Forest', 'Naive Bayes', 'SVM', 'Stacking']):
    scores = cross_val_score(clf, X, y, cv=3, scoring='accuracy')
    print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label))
'''
output
3-fold cross validation:

Accuracy: 0.96 (+/- 0.02) [KNN]
Accuracy: 0.97 (+/- 0.02) [Random Forest]
Accuracy: 0.96 (+/- 0.02) [Naive Bayes]
Accuracy: 0.96 (+/- 0.02) [SVM]
Accuracy: 0.97 (+/- 0.02) [Stacking]
'''
# Plot the decision boundaries of the base models and the stacked ensembles
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import itertools
from mlxtend.plotting import plot_decision_regions
gs = gridspec.GridSpec(3, 2)
fig = plt.figure(figsize=(16, 18))

estimators = [clf1, clf2]
sclf2 = StackingCVClassifier(classifiers=estimators, meta_classifier=lr, random_state=RANDOM_SEED)

labels = ['KNN', 'Random Forest', 'Naive Bayes', 'SVM', 'Stacking', 'Stacking2']

for clf, lab, grd in zip([clf1, clf2, clf3, clf4, sclf, sclf2], labels,
                         itertools.product([0, 1, 2], [0, 1])):
    clf.fit(X, y)
    ax = plt.subplot(gs[grd[0], grd[1]])
    fig = plot_decision_regions(X=X, y=y, clf=clf, legend=2)
    plt.title(lab)
    # print(grd)
plt.show()
The resulting decision boundaries are shown in the figure below:
# Use class probabilities as meta-features
sclf = StackingCVClassifier(classifiers=[clf1, clf2, clf3],
                            use_probas=True,  # feed predicted probabilities to the meta-classifier
                            meta_classifier=lr,
                            random_state=42)

print('3-fold cross validation:\n')
for clf, label in zip([clf1, clf2, clf3, sclf],
                      ['KNN', 'Random Forest', 'Naive Bayes', 'StackingClassifier']):
    scores = cross_val_score(clf, X, y, cv=3, scoring='accuracy')
    print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label))
'''
output
3-fold cross validation:

Accuracy: 0.96 (+/- 0.02) [KNN]
Accuracy: 0.97 (+/- 0.02) [Random Forest]
Accuracy: 0.96 (+/- 0.02) [Naive Bayes]
Accuracy: 0.96 (+/- 0.02) [StackingClassifier]
'''
# Stacked 5-fold CV classification combined with grid search (hyperparameter tuning via GridSearchCV)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from mlxtend.classifier import StackingCVClassifier

# Initializing models
clf1 = KNeighborsClassifier(n_neighbors=1)
clf2 = RandomForestClassifier(random_state=RANDOM_SEED)
clf3 = GaussianNB()
lr = LogisticRegression()

sclf = StackingCVClassifier(classifiers=[clf1, clf2, clf3],
                            meta_classifier=lr,
                            random_state=42)

params = {'kneighborsclassifier__n_neighbors': [1, 5],
          'randomforestclassifier__n_estimators': [10, 50],
          'meta_classifier__C': [0.1, 10.0]}

grid = GridSearchCV(estimator=sclf, param_grid=params, cv=5, refit=True)
grid.fit(X, y)

cv_keys = ('mean_test_score', 'std_test_score', 'params')
for r, _ in enumerate(grid.cv_results_['mean_test_score']):
    print("%0.3f +/- %0.2f %r"
          % (grid.cv_results_[cv_keys[0]][r],
             grid.cv_results_[cv_keys[1]][r] / 2.0,
             grid.cv_results_[cv_keys[2]][r]))

print('Best parameters: %s' % grid.best_params_)
print('Accuracy: %.2f' % grid.best_score_)
'''
0.967 +/- 0.01 {'kneighborsclassifier__n_neighbors': 1, 'meta_classifier__C': 0.1, 'randomforestclassifier__n_estimators': 10}
0.967 +/- 0.01 {'kneighborsclassifier__n_neighbors': 1, 'meta_classifier__C': 0.1, 'randomforestclassifier__n_estimators': 50}
0.967 +/- 0.01 {'kneighborsclassifier__n_neighbors': 1, 'meta_classifier__C': 10.0, 'randomforestclassifier__n_estimators': 10}
0.967 +/- 0.01 {'kneighborsclassifier__n_neighbors': 1, 'meta_classifier__C': 10.0, 'randomforestclassifier__n_estimators': 50}
0.960 +/- 0.01 {'kneighborsclassifier__n_neighbors': 5, 'meta_classifier__C': 0.1, 'randomforestclassifier__n_estimators': 10}
0.960 +/- 0.01 {'kneighborsclassifier__n_neighbors': 5, 'meta_classifier__C': 0.1, 'randomforestclassifier__n_estimators': 50}
0.960 +/- 0.01 {'kneighborsclassifier__n_neighbors': 5, 'meta_classifier__C': 10.0, 'randomforestclassifier__n_estimators': 10}
0.960 +/- 0.01 {'kneighborsclassifier__n_neighbors': 5, 'meta_classifier__C': 10.0, 'randomforestclassifier__n_estimators': 50}
Best parameters: {'kneighborsclassifier__n_neighbors': 1, 'meta_classifier__C': 0.1, 'randomforestclassifier__n_estimators': 10}
Accuracy: 0.97
'''
# To use an algorithm more than once, simply add its variable to the classifier list again
sclf = StackingCVClassifier(classifiers=[clf1, clf1, clf2, clf3],
                            meta_classifier=lr, random_state=RANDOM_SEED)

# Different base models can also be fed different feature subsets:
from sklearn.datasets import load_iris
from mlxtend.classifier import StackingCVClassifier
from mlxtend.feature_selection import ColumnSelector
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

iris = load_iris()
X = iris.data
y = iris.target

pipe1 = make_pipeline(ColumnSelector(cols=(0, 2)),     # select columns 0 and 2
                      LogisticRegression())
pipe2 = make_pipeline(ColumnSelector(cols=(1, 2, 3)),  # select columns 1, 2 and 3
                      LogisticRegression())

sclf = StackingCVClassifier(classifiers=[pipe1, pipe2],
                            meta_classifier=LogisticRegression(),
                            random_state=42)
sclf.fit(X, y)
'''
StackingCVClassifier(classifiers=[Pipeline(steps=[('columnselector',
                                                   ColumnSelector(cols=(0, 2))),
                                                  ('logisticregression',
                                                   LogisticRegression())]),
                                  Pipeline(steps=[('columnselector',
                                                   ColumnSelector(cols=(1, 2, 3))),
                                                  ('logisticregression',
                                                   LogisticRegression())])],
                     meta_classifier=LogisticRegression(), random_state=42)
'''
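Once fitted, the column-selecting stacker behaves like any other sklearn-style classifier, e.g.:

print(sclf.predict(X[:5]))  # predicted classes for the first five samples
print(sclf.score(X, y))     # accuracy on the training data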
# Plot ROC curves
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from mlxtend.classifier import StackingCVClassifier
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn import datasets
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier

iris = datasets.load_iris()
X, y = iris.data[:, [0, 1]], iris.target

# Binarize the output
y = label_binarize(y, classes=[0, 1, 2])
n_classes = y.shape[1]

RANDOM_SEED = 42

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33,
                                                    random_state=RANDOM_SEED)

clf1 = LogisticRegression()
clf2 = RandomForestClassifier(random_state=RANDOM_SEED)
clf3 = SVC(random_state=RANDOM_SEED)
lr = LogisticRegression()

sclf = StackingCVClassifier(classifiers=[clf1, clf2, clf3],
                            meta_classifier=lr)

# Learn to predict each class against the other
classifier = OneVsRestClassifier(sclf)
y_score = classifier.fit(X_train, y_train).decision_function(X_test)

# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])

# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])plt.figure()
lw = 2
plt.plot(fpr[2], tpr[2], color='darkorange',lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[2])
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
The resulting ROC curve is shown in the figure below:
# 2. Stacking with sklearn's built-in StackingClassifier
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

clf1 = KNeighborsClassifier(n_neighbors=2)
clf2 = RandomForestClassifier()
clf3 = GaussianNB()
clf4 = make_pipeline(StandardScaler(),SVC())
lr = LogisticRegression()

estimators = [('knn', clf1), ('rf', clf2), ('nb', clf3), ('svm', clf4)]
sclf = StackingClassifier(estimators=estimators, final_estimator=lr)

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
sclf.fit(X_train, y_train).score(X_test, y_test)
cross_val_score(sclf,X,y,cv=3,scoring='accuracy')
'''
output:
array([0.98, 0.92, 0.96])
'''
References
mlxtend documentation: http://rasbt.github.io/mlxtend/user_guide/classifier/StackingCVClassifier/#stackingcvclassifier
sklearn StackingClassifier: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.StackingClassifier.html
Most of the content in this article comes from the Datawhale open-source course.