Regression: Offshore Wind Power Output Prediction

2024-04-18 08:52

This article walks through a regression task, offshore wind power output prediction, in the hope that it offers a useful reference for solving similar problems; interested developers can follow along.

https://www.dcic-china.com/competitions/10098

Let's analyze how to do the feature engineering.


  1. Time features: hour, minute, day of week, day of month. These act as markers of periodicity. For example, foot traffic swings sharply on weekends; unless you tell the model this explicitly, it has a hard time learning it on its own.
  2. Domain features: these require reading up on the relevant subject. The basic operations are transforming a single feature with some f(x) and combining two features with arithmetic: add or subtract features from the same domain, multiply or divide features from different domains. Ideally the resulting feature has a real physical meaning.
  3. Historical sequence features: lags, sliding windows, moving averages, and so on. In a past competition I saw someone whose feature engineering was explosive in scale, which surprised me, but their results really were good. This part is a bit of a dark art; just keep experimenting.
  4. Label processing. In regression, reduce the scale of the label whenever you can. Dividing by (or subtracting) a strongly correlated feature shrinks the variation and keeps the model's predictions from drifting out of a controllable range.
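Point 4 is exactly the trick used later in this post: dividing each station's power by its installed capacity turns the label into a capacity factor in roughly [0, 1], so stations of very different sizes share one scale. A minimal sketch (the capacities and power values here are made up):

```python
import numpy as np

# Hypothetical installed capacities (MW) and raw power labels (MW) for two stations
capacity = np.array([100.0, 200.0])
power = np.array([55.0, 160.0])

# Train on the capacity factor instead of raw MW
target = power / capacity      # [0.55, 0.8]

# After predicting, scale back to MW
pred_mw = target * capacity
```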

import numpy as np
import pandas as pd
import lightgbm as lgb
import xgboost as xgb
from catboost import CatBoostClassifier, CatBoostRegressor
from sklearn.model_selection import StratifiedKFold, KFold, GroupKFold
from sklearn.metrics import mean_squared_error, mean_absolute_error
import matplotlib.pyplot as plt
import tqdm
import sys
import os
import gc
import argparse
import warnings
warnings.filterwarnings('ignore')

# Read the data
train_info = pd.read_csv('../data/first_data/A榜-训练集_海上风电预测_基本信息.csv', encoding='gbk')
train_df = pd.read_csv('../data/first_data/A榜-训练集_海上风电预测_气象变量及实际功率数据.csv', encoding='gbk')
test_info = pd.read_csv('../data/first_data/B榜-测试集_海上风电预测_基本信息.csv', encoding='gbk')
test_df = pd.read_csv('../data/first_data/B榜-测试集_海上风电预测_气象变量数据.csv', encoding='gbk')
submit_example = pd.read_csv('../data/first_data/submit_example.csv')

# Attach each station's installed capacity (MW)
train_df = train_df.merge(train_info[['站点编号','装机容量(MW)']], on=['站点编号'], how='left')
test_df = test_df.merge(test_info[['站点编号','装机容量(MW)']], on=['站点编号'], how='left')

# Keep only the numeric part of the station id
train_df['站点编号'] = train_df['站点编号'].apply(lambda x: int(x[1]))
test_df['站点编号'] = test_df['站点编号'].apply(lambda x: int(x[1]))

# Rename columns to English
train_df.columns = ['stationId','time','airPressure','relativeHumidity','cloudiness','10mWindSpeed','10mWindDirection','temperature','irradiation','precipitation','100mWindSpeed','100mWindDirection','power','capacity']
test_df.columns = ['stationId','time','airPressure','relativeHumidity','cloudiness','10mWindSpeed','10mWindDirection','temperature','irradiation','precipitation','100mWindSpeed','100mWindDirection','capacity']
# Feature crosses
train_df['100mWindSpeed/10mWindSpeed'] = train_df['100mWindSpeed'] / (train_df['10mWindSpeed'] + 1e-7)
test_df['100mWindSpeed/10mWindSpeed'] = test_df['100mWindSpeed'] / (test_df['10mWindSpeed'] + 1e-7)
train_df['100mWindDirection/10mWindDirection'] = train_df['100mWindDirection'] / (train_df['10mWindDirection'] + 1e-7)
test_df['100mWindDirection/10mWindDirection'] = test_df['100mWindDirection'] / (test_df['10mWindDirection'] + 1e-7)
train_df['10mWindDirection_new'] = train_df['10mWindDirection'] - 180
test_df['10mWindDirection_new'] = test_df['10mWindDirection'] - 180

# Differences between the two measurement heights
train_df['100mWindSpeed_10mWindSpeed'] = train_df['100mWindSpeed'] - train_df['10mWindSpeed']
test_df['100mWindSpeed_10mWindSpeed'] = test_df['100mWindSpeed'] - test_df['10mWindSpeed']
train_df['100mWindDirection_10mWindDirection'] = train_df['100mWindDirection'] - train_df['10mWindDirection']
test_df['100mWindDirection_10mWindDirection'] = test_df['100mWindDirection'] - test_df['10mWindDirection']

# Wind-shear-style features
train_df['WindSpeed/WindDirection'] = train_df['100mWindSpeed/10mWindSpeed'] / train_df['100mWindDirection/10mWindDirection']
test_df['WindSpeed/WindDirection'] = test_df['100mWindSpeed/10mWindSpeed'] / test_df['100mWindDirection/10mWindDirection']
train_df['100mWindSpeed/10mWindSpeed_2'] = np.log10(train_df['100mWindSpeed/10mWindSpeed']) / 10
test_df['100mWindSpeed/10mWindSpeed_2'] = np.log10(test_df['100mWindSpeed/10mWindSpeed']) / 10

# Humidity / temperature
train_df['relativeHumidity/temperature'] = train_df['relativeHumidity'] / (train_df['temperature'] + 1e-7)
test_df['relativeHumidity/temperature'] = test_df['relativeHumidity'] / (test_df['temperature'] + 1e-7)

# Irradiation / temperature
train_df['irradiation/temperature'] = train_df['irradiation'] / (train_df['temperature'] + 1e-7)
test_df['irradiation/temperature'] = test_df['irradiation'] / (test_df['temperature'] + 1e-7)

# Irradiation / cloudiness
train_df['irradiation/cloudiness'] = train_df['irradiation'] / (train_df['cloudiness'] + 1e-7)
test_df['irradiation/cloudiness'] = test_df['irradiation'] / (test_df['cloudiness'] + 1e-7)

# Precipitation flag
train_df['is_precipitation'] = train_df['precipitation'].apply(lambda x: 1 if x > 0 else 0)
test_df['is_precipitation'] = test_df['precipitation'].apply(lambda x: 1 if x > 0 else 0)

# Calendar features extracted from the timestamp
def get_time_feature(df, col):
    df_copy = df.copy()
    prefix = col + "_"
    df_copy[col] = pd.to_datetime(df_copy[col].astype(str), format='%Y-%m-%d %H:%M')
    df_copy[prefix + 'month'] = df_copy[col].dt.month
    df_copy[prefix + 'day'] = df_copy[col].dt.day
    df_copy[prefix + 'hour'] = df_copy[col].dt.hour
    df_copy[prefix + 'minute'] = df_copy[col].dt.minute
    df_copy[prefix + 'weekofyear'] = df_copy[col].dt.isocalendar().week.astype(int)
    df_copy[prefix + 'dayofyear'] = df_copy[col].dt.dayofyear
    return df_copy

train_df = get_time_feature(train_df, 'time')
test_df = get_time_feature(test_df, 'time')

# Merge train and test data
train_df['is_test'] = 0
test_df['is_test'] = 1
df = pd.concat([train_df, test_df], axis=0).reset_index(drop=True)
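The lag features built next rest on groupby + shift: shifting within each stationId group so that one station's history never leaks into another station's rows. A toy illustration (the data is invented, not the competition schema):

```python
import pandas as pd

toy = pd.DataFrame({
    'stationId': [1, 1, 1, 2, 2, 2],
    'wind':      [5.0, 6.0, 7.0, 9.0, 8.0, 7.0],
})

# Lag 1 within each station; the first row of every group becomes NaN
toy['wind_shift1'] = toy.groupby('stationId')['wind'].shift(1)
# Lead 1 (future value), mirroring the forward-looking shift features
toy['wind_lead1'] = toy.groupby('stationId')['wind'].shift(-1)
```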
# Build lag / rolling-window features
num_cols = ['airPressure','relativeHumidity','cloudiness','10mWindSpeed','10mWindDirection','temperature','irradiation','precipitation','100mWindSpeed','100mWindDirection']

for col in tqdm.tqdm(num_cols):
    # Lag / lead (shift) and difference features, per station
    for i in [1,2,3,4,5,6,7,15,30,50] + [1*96,2*96,3*96,4*96,5*96]:
        df[f'{col}_shift{i}'] = df.groupby('stationId')[col].shift(i)
        df[f'{col}_future_shift{i}'] = df.groupby('stationId')[col].shift(-i)
        df[f'{col}_diff{i}'] = df[f'{col}_shift{i}'] - df[col]
        df[f'{col}_future_diff{i}'] = df[f'{col}_future_shift{i}'] - df[col]
        df[f'{col}_2diff{i}'] = df.groupby('stationId')[f'{col}_diff{i}'].diff(1)
        df[f'{col}_future_2diff{i}'] = df.groupby('stationId')[f'{col}_future_diff{i}'].diff(1)

    # Centered means of growing width, built incrementally
    df[f'{col}_3mean'] = (df[f'{col}'] + df[f'{col}_future_shift1'] + df[f'{col}_shift1']) / 3
    df[f'{col}_5mean'] = (df[f'{col}_3mean']*3 + df[f'{col}_future_shift2'] + df[f'{col}_shift2']) / 5
    df[f'{col}_7mean'] = (df[f'{col}_5mean']*5 + df[f'{col}_future_shift3'] + df[f'{col}_shift3']) / 7
    df[f'{col}_9mean'] = (df[f'{col}_7mean']*7 + df[f'{col}_future_shift4'] + df[f'{col}_shift4']) / 9
    df[f'{col}_11mean'] = (df[f'{col}_9mean']*9 + df[f'{col}_future_shift5'] + df[f'{col}_shift5']) / 11
    df[f'{col}_shift_3_96_mean'] = (df[f'{col}_shift{1*96}'] + df[f'{col}_shift{2*96}'] + df[f'{col}_shift{3*96}']) / 3
    df[f'{col}_shift_5_96_mean'] = (df[f'{col}_shift_3_96_mean']*3 + df[f'{col}_shift{4*96}'] + df[f'{col}_shift{5*96}']) / 5
    df[f'{col}_future_shift_3_96_mean'] = (df[f'{col}_future_shift{1*96}'] + df[f'{col}_future_shift{2*96}'] + df[f'{col}_future_shift{3*96}']) / 3
    df[f'{col}_future_shift_5_96_mean'] = (df[f'{col}_future_shift_3_96_mean']*3 + df[f'{col}_future_shift{4*96}'] + df[f'{col}_future_shift{5*96}']) / 5

    # Rolling-window statistics; closed='left' keeps the current row out of its own window
    for win in [3,5,7,14,28]:
        df[f'{col}_win{win}_mean'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').mean().values
        df[f'{col}_win{win}_max'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').max().values
        df[f'{col}_win{win}_min'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').min().values
        df[f'{col}_win{win}_std'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').std().values
        df[f'{col}_win{win}_skew'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').skew().values
        df[f'{col}_win{win}_kurt'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').kurt().values
        df[f'{col}_win{win}_median'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').median().values

        # Reverse time within each station to compute the same statistics over future rows.
        # (Keep stationId ascending so the groupby output stays aligned with df's row order.)
        df = df.sort_values(['stationId','time'], ascending=[True, False])
        df[f'{col}_future_win{win}_mean'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').mean().values
        df[f'{col}_future_win{win}_max'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').max().values
        df[f'{col}_future_win{win}_min'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').min().values
        df[f'{col}_future_win{win}_std'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').std().values
        df[f'{col}_future_win{win}_skew'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').skew().values
        df[f'{col}_future_win{win}_kurt'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').kurt().values
        df[f'{col}_future_win{win}_median'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').median().values
        df = df.sort_values(['stationId','time'], ascending=[True, True])

        # Second-order features: distance of the current value from the window statistics
        df[f'{col}_win{win}_mean_loc_diff'] = df[col] - df[f'{col}_win{win}_mean']
        df[f'{col}_win{win}_max_loc_diff'] = df[col] - df[f'{col}_win{win}_max']
        df[f'{col}_win{win}_min_loc_diff'] = df[col] - df[f'{col}_win{win}_min']
        df[f'{col}_win{win}_median_loc_diff'] = df[col] - df[f'{col}_win{win}_median']
        df[f'{col}_future_win{win}_mean_loc_diff'] = df[col] - df[f'{col}_future_win{win}_mean']
        df[f'{col}_future_win{win}_max_loc_diff'] = df[col] - df[f'{col}_future_win{win}_max']
        df[f'{col}_future_win{win}_min_loc_diff'] = df[col] - df[f'{col}_future_win{win}_min']
        df[f'{col}_future_win{win}_median_loc_diff'] = df[col] - df[f'{col}_future_win{win}_median']

# Rolling precipitation frequency and counts
for col in ['is_precipitation']:
    for win in [4,8,12,20,48,96]:
        df[f'{col}_win{win}_mean'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').mean().values
        df[f'{col}_win{win}_sum'] = df.groupby('stationId')[col].rolling(window=win, min_periods=3, closed='left').sum().values

# Split back into train / test
train_df = df[df.is_test==0].reset_index(drop=True)
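All of the rolling statistics above pass closed='left', which excludes the current row from its own window, so each row's window features only ever see strictly earlier rows. A small check on toy data:

```python
import pandas as pd

s = pd.DataFrame({'stationId': [1, 1, 1, 1, 1],
                  'v': [1.0, 2.0, 3.0, 4.0, 5.0]})

# window=3, closed='left': the window for row i covers rows i-3 .. i-1
m = s.groupby('stationId')['v'].rolling(window=3, min_periods=3, closed='left').mean().values
# m[3] averages rows 0..2 -> 2.0; rows with fewer than 3 prior values -> NaN
```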
test_df = df[df.is_test==1].reset_index(drop=True)
del df
gc.collect()

# Drop rows whose power label is the literal string '<NULL>'
train_df = train_df[train_df['power'] != '<NULL>'].reset_index(drop=True)
train_df['power'] = train_df['power'].astype(float)
cols = [f for f in test_df.columns if f not in ['time','power','is_test']] # capacity
def cv_model(clf, train_x, train_y, test_x, capacity, seed=2024):
    folds = 5
    kf = KFold(n_splits=folds, shuffle=True, random_state=seed)
    oof = np.zeros(train_x.shape[0])
    test_predict = np.zeros(test_x.shape[0])
    cv_scores = []

    for i, (train_index, valid_index) in enumerate(kf.split(train_x, train_y)):
        print('************************************ {} ************************************'.format(str(i+1)))
        trn_x, trn_y, val_x, val_y = train_x.iloc[train_index], train_y[train_index], train_x.iloc[valid_index], train_y[valid_index]

        # Transform the target: normalize by each station's installed capacity
        trn_y = trn_y / capacity[train_index]
        val_y = val_y / capacity[valid_index]

        train_matrix = clf.Dataset(trn_x, label=trn_y)
        valid_matrix = clf.Dataset(val_x, label=val_y)
        params = {
            'boosting_type': 'gbdt',
            'objective': 'regression',
            'metric': 'rmse',
            'min_child_weight': 5,
            'num_leaves': 2 ** 8,
            'lambda_l2': 10,
            'feature_fraction': 0.8,
            'bagging_fraction': 0.8,
            'bagging_freq': 4,
            'learning_rate': 0.1,
            'seed': 2023,
            'nthread': 16,
            'verbose': -1,
        }
        model = clf.train(params, train_matrix, 3000, valid_sets=[train_matrix, valid_matrix],
                          categorical_feature=[], verbose_eval=500, early_stopping_rounds=200)
        val_pred = model.predict(val_x, num_iteration=model.best_iteration)
        test_pred = model.predict(test_x, num_iteration=model.best_iteration)

        oof[valid_index] = val_pred
        test_predict += test_pred / kf.n_splits

        # Competition-style score, computed back on the original MW scale
        score = 1 / (1 + np.sqrt(mean_squared_error(val_pred * capacity[valid_index], val_y * capacity[valid_index])))
        cv_scores.append(score)
        print(cv_scores)

        if i == 0:
            imp_df = pd.DataFrame()
            imp_df["feature"] = cols
            imp_df["importance_gain"] = model.feature_importance(importance_type='gain')
            imp_df["importance_split"] = model.feature_importance(importance_type='split')
            imp_df["mul"] = imp_df["importance_gain"] * imp_df["importance_split"]
            imp_df = imp_df.sort_values(by='mul', ascending=False)
            imp_df.to_csv('feature_importance.csv', index=False)
            print(imp_df[:30])

    return oof, test_predict

lgb_oof, lgb_test = cv_model(lgb, train_df[cols], train_df['power'], test_df[cols], train_df['capacity'])
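Because cv_model fits on power / capacity, lgb_oof and lgb_test come back on the normalized scale and must be multiplied by capacity before evaluating or submitting in MW. The post stops before that step, so the sketch below uses made-up numbers, and the clip to the physical range [0, capacity] is my addition rather than part of the original pipeline:

```python
import numpy as np

# Stand-ins for lgb_oof / lgb_test and the capacity columns (invented values)
lgb_oof = np.array([0.40, 0.90])
train_capacity = np.array([100.0, 200.0])
lgb_test = np.array([0.50, 0.25])
test_capacity = np.array([150.0, 200.0])

# Undo the capacity normalization
oof_mw = lgb_oof * train_capacity                               # for local RMSE in MW
pred_mw = np.clip(lgb_test * test_capacity, 0, test_capacity)   # for the submission
```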

That concludes this article on regression for offshore wind power output prediction; we hope it serves as a helpful reference.



http://www.chinasem.cn/article/914318
