[Data Competitions] A Feature Selection Strategy That Works 99% of the Time -- Null Importance
机器学习初学者
2021-08-05 09:50
Null Importance Feature Selection
Datasets keep growing in size and feature dimensionality keeps rising. This not only puts heavy pressure on computation and storage, it can also noticeably hurt model performance.
How to save memory and compute while still squeezing extra performance out of the model is therefore a question we care about a lot.
In this article we introduce a feature selection strategy -- Null Importance -- which delivers decent gains in roughly 95% of data competitions.
1. Core Idea
The core idea of Null Importance is:
Compute the "unreliable" (null) feature importances: randomly shuffle the labels, train, and record the feature importances; these importances are "wrong";
Compute the "reliable" (actual) feature importances: train on the original data and record the feature importances; these importances are "correct";
Compute the gap / divergence between the reliable and the unreliable importances (design a score function of your own);
Select features in batches according to the computed score and compute the offline validation score for each batch;
Take the features with the best offline scores as the final feature set.
2. Implementation Steps
The steps of the Null Importance algorithm are (a toy sketch follows this list, then the reference implementation):
1. Run the model on the original dataset and record each feature's importance; this is the baseline;
2. Build the null importances distribution: randomly shuffle the target and compute the feature importances after the shuffle;
3. Repeat step 2 many times to obtain feature importances under many different shuffles;
4. Design a score function measuring how far the unshuffled importances deviate from the shuffled ones, and build the feature selection strategy on top of it;
5. Compute and record the model's score under different selection thresholds;
6. Return the features corresponding to the best few scores.
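Before the reference implementation, here is a compact, self-contained toy sketch of the whole procedure (illustrative only: the synthetic data, column names, and parameters below are made up for this example and are not part of the reference notebook):

import numpy as np
import pandas as pd
import lightgbm as lgb

rng = np.random.RandomState(0)
X = pd.DataFrame(rng.randn(500, 5), columns=['f%d' % i for i in range(5)])
y = (X['f0'] + 0.1 * rng.randn(500) > 0).astype(int)  # only f0 carries real signal

def gain_importance(target):
    # Train a small LightGBM model and return per-feature gain importances
    booster = lgb.train({'objective': 'binary', 'verbose': -1},
                        lgb.Dataset(X, label=target), num_boost_round=50)
    return booster.feature_importance(importance_type='gain')

actual = gain_importance(y)                                   # "correct" importances
nulls = np.array([gain_importance(y.sample(frac=1.0).values)  # shuffled target
                  for _ in range(20)])                        # "wrong" importances
# Score each feature: actual importance vs. the 75th percentile of its null distribution
score = actual / (1 + np.percentile(nulls, 75, axis=0))
print(pd.Series(score, index=X.columns).sort_values(ascending=False))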
The reference code below is adapted from: https://www.kaggle.com/ogrellier/feature-selection-with-null-importances (note that it targets an older LightGBM API, e.g. the `silent` argument and `early_stopping_rounds` inside `lgb.cv`).
1. Feature importance function
import lightgbm as lgb
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

# `data` (the training DataFrame) and `categorical_feats` (the list of categorical
# column names) are prepared earlier in the original notebook.

def get_feature_importances(data, shuffle, seed=None):
    # Gather real features
    train_features = [f for f in data if f not in ['TARGET', 'SK_ID_CURR']]
    # Shuffle target if required
    y = data['TARGET'].copy()
    if shuffle:
        # Here you could as well use a binomial distribution
        y = data['TARGET'].copy().sample(frac=1.0)
    # Fit LightGBM in RF mode, yes it's quicker than sklearn RandomForest
    dtrain = lgb.Dataset(data[train_features], y, free_raw_data=False, silent=True)
    lgb_params = {
        'objective': 'binary',
        'boosting_type': 'rf',
        'subsample': 0.623,
        'colsample_bytree': 0.7,
        'num_leaves': 127,
        'max_depth': 8,
        'seed': seed,
        'bagging_freq': 1,
        'n_jobs': 4
    }
    # Fit the model
    clf = lgb.train(params=lgb_params, train_set=dtrain, num_boost_round=200,
                    categorical_feature=categorical_feats)
    # Get feature importances
    imp_df = pd.DataFrame()
    imp_df["feature"] = list(train_features)
    imp_df["importance_gain"] = clf.feature_importance(importance_type='gain')
    imp_df["importance_split"] = clf.feature_importance(importance_type='split')
    # Training AUC on the (possibly shuffled) target
    imp_df['trn_score'] = roc_auc_score(y, clf.predict(data[train_features]))
    return imp_df
2. Get the actual (unshuffled) feature importances
# Seed the unexpected randomness of this world
np.random.seed(123)
# Get the actual importance, i.e. without shuffling
actual_imp_df = get_feature_importances(data=data, shuffle=False)
3. Get feature importances under multiple target shuffles
null_imp_df = pd.DataFrame()
nb_runs = 80
import time
start = time.time()
dsp = ''
for i in range(nb_runs):
    # Get current run importances
    imp_df = get_feature_importances(data=data, shuffle=True)
    imp_df['run'] = i + 1
    # Concat the latest importances with the old ones
    null_imp_df = pd.concat([null_imp_df, imp_df], axis=0)
    # Erase previous message
    for l in range(len(dsp)):
        print('\b', end='', flush=True)
    # Display current run and time used
    spent = (time.time() - start) / 60
    dsp = 'Done with %4d of %4d (Spent %5.1f min)' % (i + 1, nb_runs, spent)
    print(dsp, end='', flush=True)
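With `null_imp_df` in hand, a quick visual check is to compare one feature's null importance distribution against its actual importance (a minimal matplotlib sketch, assuming matplotlib is installed; the original notebook contains a more elaborate version of this kind of plot):

import matplotlib.pyplot as plt

def plot_null_distribution(feature):
    # Histogram of the null (shuffled-target) gain importances vs. the actual importance
    null_vals = null_imp_df.loc[null_imp_df['feature'] == feature, 'importance_gain'].values
    act_val = actual_imp_df.loc[actual_imp_df['feature'] == feature, 'importance_gain'].mean()
    plt.hist(null_vals, bins=20, label='null importances (shuffled target)')
    plt.axvline(act_val, color='r', label='actual importance')
    plt.title('Gain importance distribution for %s' % feature)
    plt.legend()
    plt.show()

# Example: plot_null_distribution(actual_imp_df['feature'].iloc[0])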
4. Compute the Score
4.1 Score method 1
Take the log of the actual (unshuffled-target) importance divided by (1 + the 75th percentile of the shuffled-target importances) as the score;
feature_scores = []
for _f in actual_imp_df['feature'].unique():
    f_null_imps_gain = null_imp_df.loc[null_imp_df['feature'] == _f, 'importance_gain'].values
    f_act_imps_gain = actual_imp_df.loc[actual_imp_df['feature'] == _f, 'importance_gain'].mean()
    gain_score = np.log(1e-10 + f_act_imps_gain / (1 + np.percentile(f_null_imps_gain, 75)))  # Avoid divide by zero
    f_null_imps_split = null_imp_df.loc[null_imp_df['feature'] == _f, 'importance_split'].values
    f_act_imps_split = actual_imp_df.loc[actual_imp_df['feature'] == _f, 'importance_split'].mean()
    split_score = np.log(1e-10 + f_act_imps_split / (1 + np.percentile(f_null_imps_split, 75)))  # Avoid divide by zero
    feature_scores.append((_f, split_score, gain_score))

scores_df = pd.DataFrame(feature_scores, columns=['feature', 'split_score', 'gain_score'])
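One illustrative way to read this score (an optional check, not from the original kernel): since it is the log of a ratio, a value above roughly 0 means the actual gain importance exceeds 1 plus the 75th percentile of its null distribution, i.e. the feature looks better than chance:

# Features scoring above ~0 beat the bulk of their own null distribution
likely_useful = scores_df.loc[scores_df['gain_score'] > 0, 'feature'].tolist()
print('%d of %d features score above 0 on gain' % (len(likely_useful), len(scores_df)))
print(scores_df.sort_values('gain_score', ascending=False).head(10))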
4.2 Score method 2
The percentage of shuffled runs in which the null importance falls below the 25th percentile of the feature's importance under the actual target.
correlation_scores = []
for _f in actual_imp_df['feature'].unique():
    f_null_imps = null_imp_df.loc[null_imp_df['feature'] == _f, 'importance_gain'].values
    f_act_imps = actual_imp_df.loc[actual_imp_df['feature'] == _f, 'importance_gain'].values
    gain_score = 100 * (f_null_imps < np.percentile(f_act_imps, 25)).sum() / f_null_imps.size
    f_null_imps = null_imp_df.loc[null_imp_df['feature'] == _f, 'importance_split'].values
    f_act_imps = actual_imp_df.loc[actual_imp_df['feature'] == _f, 'importance_split'].values
    split_score = 100 * (f_null_imps < np.percentile(f_act_imps, 25)).sum() / f_null_imps.size
    correlation_scores.append((_f, split_score, gain_score))

corr_scores_df = pd.DataFrame(correlation_scores, columns=['feature', 'split_score', 'gain_score'])
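These scores read directly as percentages (an optional check, not part of the original kernel): a split_score of 95 means that in 95% of the 80 shuffled runs the null importance stayed below the actual importance, so the feature very likely carries real signal, while a score near 0 means it is indistinguishable from noise:

# Bucket features by how often they beat their own null importances
print((corr_scores_df['split_score'] >= 95).sum(), 'features beat their null importances in at least 95% of runs')
print((corr_scores_df['split_score'] == 0).sum(), 'features never beat their null importances')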
5. Compute the best score after feature selection and record the corresponding features
Simply keep the feature subset with the best cross-validated score as the final feature set.
def score_feature_selection(df=None, train_features=None, cat_feats=None, target=None):
    # Build the LightGBM dataset
    dtrain = lgb.Dataset(df[train_features], target, free_raw_data=False, silent=True)
    lgb_params = {
        'objective': 'binary',
        'boosting_type': 'gbdt',
        'learning_rate': .1,
        'subsample': 0.8,
        'colsample_bytree': 0.8,
        'num_leaves': 31,
        'max_depth': -1,
        'seed': 13,
        'n_jobs': 4,
        'min_split_gain': .00001,
        'reg_alpha': .00001,
        'reg_lambda': .00001,
        'metric': 'auc'
    }
    # Run 5-fold cross-validation
    hist = lgb.cv(
        params=lgb_params,
        train_set=dtrain,
        num_boost_round=2000,
        categorical_feature=cat_feats,
        nfold=5,
        stratified=True,
        shuffle=True,
        early_stopping_rounds=50,
        verbose_eval=0,
        seed=17
    )
    # Return the last mean / std values
    return hist['auc-mean'][-1], hist['auc-stdv'][-1]
# features = [f for f in data.columns if f not in ['SK_ID_CURR', 'TARGET']]
# score_feature_selection(df=data[features], train_features=features, target=data['TARGET'])
for threshold in [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99]:
    split_feats = [_f for _f, _score, _ in correlation_scores if _score >= threshold]
    split_cat_feats = [_f for _f, _score, _ in correlation_scores if (_score >= threshold) & (_f in categorical_feats)]
    gain_feats = [_f for _f, _, _score in correlation_scores if _score >= threshold]
    gain_cat_feats = [_f for _f, _, _score in correlation_scores if (_score >= threshold) & (_f in categorical_feats)]
    print('Results for threshold %3d' % threshold)
    split_results = score_feature_selection(df=data, train_features=split_feats, cat_feats=split_cat_feats, target=data['TARGET'])
    print('\t SPLIT : %.6f +/- %.6f' % (split_results[0], split_results[1]))
    gain_results = score_feature_selection(df=data, train_features=gain_feats, cat_feats=gain_cat_feats, target=data['TARGET'])
    print('\t GAIN : %.6f +/- %.6f' % (gain_results[0], gain_results[1]))
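The loop above only prints cross-validation scores; to materialize the final feature set you still need to pick the winning threshold and rebuild the list. A minimal sketch, assuming (hypothetically) that the GAIN variant at threshold 50 came out best in the printout:

# Hypothetical choice: suppose the GAIN results peaked at threshold 50 above
best_threshold = 50
final_features = [_f for _f, _, _gain_score in correlation_scores if _gain_score >= best_threshold]
final_cat_feats = [_f for _f in final_features if _f in categorical_feats]
print('Keeping %d features (%d categorical)' % (len(final_features), len(final_cat_feats)))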
Null Importance feature selection is an essential technique in practical data work; it can deliver measurable gains in well over 95% of data competitions and real-world projects.
References:
https://www.kaggle.com/ogrellier/feature-selection-with-null-importances
https://academic.oup.com/bioinformatics/article/26/10/1340/193348