Bearing Fault Diagnosis with a Deep Discriminative Transfer Learning Network in Python

2024-02-15 20:20

This article introduces bearing fault diagnosis based on a deep discriminative transfer learning network in a Python environment, and is intended as a practical reference for developers working on similar problems.

Many machine learning and data mining algorithms rest on the assumption that the training data and the test data lie in the same feature space and follow the same distribution. In real-world applications, this assumption often fails to hold. On the one hand, if a model trained on data from one domain is applied directly to a new target domain, the genuine differences between the two domains' data can cause a sharp drop in performance. On the other hand, if a model is trained directly in the new target domain, the scarcity of data and incompleteness of labels there can lead to severe overfitting in supervised learning, making satisfactory results hard to achieve. How to reasonably transfer unstructured data across different domains and sources, and thereby adapt models between domains, has therefore become a pressing problem.

Transfer learning addresses these problems by transferring knowledge from existing data in an auxiliary domain to support the learning task in the target domain, that is, by completing the knowledge transfer from the source domain to the target domain. In particular, as the era of big data arrives, transfer learning can carry knowledge from "big data" over to "small data", alleviating the knowledge scarcity of small-data fields.

This code performs bearing fault diagnosis with a deep discriminative transfer learning network in Python. The source-domain data are the Case Western Reserve University (CWRU) bearing data 48kcwru_data.npy, and the target-domain data are the Jiangnan University (JNU) bearing data jnudata600_data.npy. The module versions used are:

numpy==1.21.5
sklearn==1.0.2
pytorch_lightning==1.7.7
torch==1.10.1+cpu
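One possible way to reproduce this environment with pip (the CPU-wheel index URL is an assumption, and note that the "sklearn" listed above is installed under the package name scikit-learn):

```shell
# Install pinned versions; the torch CPU wheel comes from the PyTorch index
pip install numpy==1.21.5 scikit-learn==1.0.2 pytorch-lightning==1.7.7 torchmetrics
pip install torch==1.10.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
```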

The required imports are:

import numpy as np
from sklearn.preprocessing import StandardScaler
from pytorch_lightning.utilities.seed import seed_everything
import torch
import torch.nn as nn
import torch.nn.functional as F
import argparse
from sklearn.utils import shuffle
from torch.utils import data as da
from torchmetrics import MeanMetric, Accuracy

Part of the code is shown below:

# Define the data-loading function
def load_data():
    source_data = np.load(args.cwru_data)
    source_label = np.load(args.cwru_label).argmax(axis=-1)
    target_data = np.load(args.jnu_data)
    target_label = np.load(args.jnu_label).argmax(axis=-1)
    # Standardize each sample (transposed so StandardScaler normalizes per signal)
    source_data = StandardScaler().fit_transform(source_data.T).T
    target_data = StandardScaler().fit_transform(target_data.T).T
    # Add a channel dimension: (N, L) -> (N, 1, L)
    source_data = np.expand_dims(source_data, axis=1)
    target_data = np.expand_dims(target_data, axis=1)
    source_data, source_label = shuffle(source_data, source_label, random_state=2)
    target_data, target_label = shuffle(target_data, target_label, random_state=2)
    Train_source = Dataset(source_data, source_label)
    Train_target = Dataset(target_data, target_label)
    return Train_source, Train_target
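load_data wraps the arrays in a Dataset class that is not shown in this excerpt. Below is a minimal sketch of what such a wrapper might look like, assuming a standard map-style PyTorch dataset; the class name and constructor signature are inferred from how load_data calls it:

```python
import numpy as np
import torch
from torch.utils import data as da

# Minimal map-style dataset wrapping (data, label) numpy arrays,
# matching how load_data constructs Dataset(source_data, source_label).
class Dataset(da.Dataset):
    def __init__(self, data, label):
        self.data = torch.from_numpy(data).float()
        self.label = torch.from_numpy(label).long()

    def __getitem__(self, index):
        return self.data[index], self.label[index]

    def __len__(self):
        return len(self.data)
```

Each Dataset instance can then be handed to torch.utils.data.DataLoader for mini-batch training.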
# Maximum mean discrepancy (MMD) class
class MMD(nn.Module):
    def __init__(self, m, n):
        super(MMD, self).__init__()
        self.m = m
        self.n = n

    def _mix_rbf_mmd2(self, X, Y, sigmas=(10,), wts=None, biased=True):
        K_XX, K_XY, K_YY, d = self._mix_rbf_kernel(X, Y, sigmas, wts)
        return self._mmd2(K_XX, K_XY, K_YY, const_diagonal=d, biased=biased)

    def _mix_rbf_kernel(self, X, Y, sigmas, wts=None):
        if wts is None:
            wts = [1] * len(sigmas)
        XX = torch.matmul(X, X.t())
        XY = torch.matmul(X, Y.t())
        YY = torch.matmul(Y, Y.t())
        X_sqnorms = torch.diagonal(XX)
        Y_sqnorms = torch.diagonal(YY)
        r = lambda x: torch.unsqueeze(x, 0)  # row vector
        c = lambda x: torch.unsqueeze(x, 1)  # column vector
        K_XX, K_XY, K_YY = 0., 0., 0.
        # Mixture of RBF kernels; squared distances via ||x - y||^2 = x.x - 2 x.y + y.y
        for sigma, wt in zip(sigmas, wts):
            gamma = 1 / (2 * sigma ** 2)
            K_XX += wt * torch.exp(-gamma * (-2 * XX + c(X_sqnorms) + r(X_sqnorms)))
            K_XY += wt * torch.exp(-gamma * (-2 * XY + c(X_sqnorms) + r(Y_sqnorms)))
            K_YY += wt * torch.exp(-gamma * (-2 * YY + c(Y_sqnorms) + r(Y_sqnorms)))
        return K_XX, K_XY, K_YY, torch.sum(torch.tensor(wts))

The training output is as follows:

Epoch1, train_loss is 2.03397,test_loss is 4.93007, train_accuracy is 0.44475,test_accuracy is 0.18675,train_all_loss is 41.71445,target_cla_loss is 1.61769,source_cla_loss is 3.70468,cda_loss is 6.74935,mda_loss is 37.17306

Epoch2, train_loss is 0.54279,test_loss is 6.41293, train_accuracy is 0.83325,test_accuracy is 0.20800,train_all_loss is 6.87837,target_cla_loss is 1.71905,source_cla_loss is 1.94874,cda_loss is 1.58677,mda_loss is 4.59905

Epoch3, train_loss is 0.18851,test_loss is 5.60176, train_accuracy is 0.93775,test_accuracy is 0.29850,train_all_loss is 5.26101,target_cla_loss is 0.66165,source_cla_loss is 0.54253,cda_loss is 1.02360,mda_loss is 4.54996

Epoch4, train_loss is 0.14104,test_loss is 4.58690, train_accuracy is 0.94850,test_accuracy is 0.30800,train_all_loss is 4.09870,target_cla_loss is 0.54025,source_cla_loss is 0.38254,cda_loss is 0.88701,mda_loss is 3.57343

Epoch5, train_loss is 0.11775,test_loss is 5.07279, train_accuracy is 0.95900,test_accuracy is 0.28300,train_all_loss is 3.27498,target_cla_loss is 0.52239,source_cla_loss is 0.29470,cda_loss is 0.87684,mda_loss is 2.84035

Epoch6, train_loss is 0.08998,test_loss is 5.02790, train_accuracy is 0.97300,test_accuracy is 0.29825,train_all_loss is 3.21299,target_cla_loss is 0.39788,source_cla_loss is 0.21586,cda_loss is 0.76452,mda_loss is 2.88089

Epoch7, train_loss is 0.07695,test_loss is 4.51329, train_accuracy is 0.97975,test_accuracy is 0.31800,train_all_loss is 2.92297,target_cla_loss is 0.40623,source_cla_loss is 0.16808,cda_loss is 0.76679,mda_loss is 2.63759

Epoch8, train_loss is 0.07421,test_loss is 4.26603, train_accuracy is 0.97900,test_accuracy is 0.32400,train_all_loss is 2.65909,target_cla_loss is 0.39052,source_cla_loss is 0.17180,cda_loss is 0.72834,mda_loss is 2.37541

Epoch9, train_loss is 0.05102,test_loss is 3.63495, train_accuracy is 0.98950,test_accuracy is 0.35800,train_all_loss is 2.62192,target_cla_loss is 0.41614,source_cla_loss is 0.11351,cda_loss is 0.77280,mda_loss is 2.38951

Epoch10, train_loss is 0.06574,test_loss is 3.52261, train_accuracy is 0.98850,test_accuracy is 0.33100,train_all_loss is 2.59112,target_cla_loss is 0.48041,source_cla_loss is 0.14138,cda_loss is 0.73876,mda_loss is 2.32783

Epoch11, train_loss is 0.05876,test_loss is 3.86388, train_accuracy is 0.99050,test_accuracy is 0.33475,train_all_loss is 2.45829,target_cla_loss is 0.45235,source_cla_loss is 0.11720,cda_loss is 0.73355,mda_loss is 2.22250

Epoch12, train_loss is 0.05688,test_loss is 3.82415, train_accuracy is 0.99350,test_accuracy is 0.32075,train_all_loss is 2.43463,target_cla_loss is 0.41805,source_cla_loss is 0.10393,cda_loss is 0.68400,mda_loss is 2.22049

Epoch13, train_loss is 0.05224,test_loss is 3.78473, train_accuracy is 0.99300,test_accuracy is 0.32225,train_all_loss is 2.26402,target_cla_loss is 0.42708,source_cla_loss is 0.09958,cda_loss is 0.68476,mda_loss is 2.05325

Epoch14, train_loss is 0.06636,test_loss is 3.89151, train_accuracy is 0.98675,test_accuracy is 0.32200,train_all_loss is 2.42129,target_cla_loss is 0.42966,source_cla_loss is 0.12657,cda_loss is 0.66312,mda_loss is 2.18544

Epoch15, train_loss is 0.05342,test_loss is 3.78424, train_accuracy is 0.99575,test_accuracy is 0.32200,train_all_loss is 2.33275,target_cla_loss is 0.42920,source_cla_loss is 0.10599,cda_loss is 0.64312,mda_loss is 2.11953

Epoch16, train_loss is 0.04968,test_loss is 3.67101, train_accuracy is 0.99750,test_accuracy is 0.32100,train_all_loss is 2.23092,target_cla_loss is 0.43684,source_cla_loss is 0.09945,cda_loss is 0.63847,mda_loss is 2.02393

Epoch17, train_loss is 0.05957,test_loss is 3.67722, train_accuracy is 0.99525,test_accuracy is 0.33000,train_all_loss is 2.27638,target_cla_loss is 0.45589,source_cla_loss is 0.12059,cda_loss is 0.63428,mda_loss is 2.04677

Epoch18, train_loss is 0.05812,test_loss is 3.59771, train_accuracy is 0.99600,test_accuracy is 0.32350,train_all_loss is 2.23080,target_cla_loss is 0.47418,source_cla_loss is 0.11620,cda_loss is 0.61432,mda_loss is 2.00575

Epoch19, train_loss is 0.06287,test_loss is 3.43253, train_accuracy is 0.99350,test_accuracy is 0.32950,train_all_loss is 2.40593,target_cla_loss is 0.48567,source_cla_loss is 0.12299,cda_loss is 0.64320,mda_loss is 2.17005

Epoch20, train_loss is 0.06316,test_loss is 3.51056, train_accuracy is 0.99575,test_accuracy is 0.32475,train_all_loss is 2.31846,target_cla_loss is 0.46683,source_cla_loss is 0.13138,cda_loss is 0.63335,mda_loss is 2.07705

Epoch21, train_loss is 0.05934,test_loss is 3.84920, train_accuracy is 0.99425,test_accuracy is 0.32475,train_all_loss is 2.24664,target_cla_loss is 0.43869,source_cla_loss is 0.11981,cda_loss is 0.62297,mda_loss is 2.02066

Epoch22, train_loss is 0.05423,test_loss is 3.69176, train_accuracy is 0.99500,test_accuracy is 0.34025,train_all_loss is 2.28318,target_cla_loss is 0.46334,source_cla_loss is 0.11229,cda_loss is 0.64396,mda_loss is 2.06016

Epoch23, train_loss is 0.04934,test_loss is 3.35009, train_accuracy is 0.99775,test_accuracy is 0.33525,train_all_loss is 2.29074,target_cla_loss is 0.46118,source_cla_loss is 0.10556,cda_loss is 0.63410,mda_loss is 2.07565

Epoch24, train_loss is 0.05903,test_loss is 3.62366, train_accuracy is 0.99275,test_accuracy is 0.33475,train_all_loss is 2.19408,target_cla_loss is 0.42396,source_cla_loss is 0.12135,cda_loss is 0.61378,mda_loss is 1.96895

Epoch25, train_loss is 0.06076,test_loss is 3.72236, train_accuracy is 0.99475,test_accuracy is 0.33100,train_all_loss is 2.17257,target_cla_loss is 0.41436,source_cla_loss is 0.12382,cda_loss is 0.60274,mda_loss is 1.94704

Epoch26, train_loss is 0.05901,test_loss is 3.59237, train_accuracy is 0.99600,test_accuracy is 0.33225,train_all_loss is 2.34868,target_cla_loss is 0.47314,source_cla_loss is 0.12100,cda_loss is 0.60417,mda_loss is 2.11995

Epoch27, train_loss is 0.05911,test_loss is 3.82265, train_accuracy is 0.99500,test_accuracy is 0.34000,train_all_loss is 2.25363,target_cla_loss is 0.43506,source_cla_loss is 0.12174,cda_loss is 0.58978,mda_loss is 2.02940

Epoch28, train_loss is 0.05933,test_loss is 3.65749, train_accuracy is 0.99750,test_accuracy is 0.34050,train_all_loss is 2.22602,target_cla_loss is 0.47101,source_cla_loss is 0.12723,cda_loss is 0.61243,mda_loss is 1.99045

Epoch29, train_loss is 0.03960,test_loss is 3.72375, train_accuracy is 0.99900,test_accuracy is 0.34075,train_all_loss is 2.20123,target_cla_loss is 0.44257,source_cla_loss is 0.10232,cda_loss is 0.60967,mda_loss is 1.99369

Epoch30, train_loss is 0.05384,test_loss is 3.61430, train_accuracy is 0.99350,test_accuracy is 0.33100,train_all_loss is 2.12823,target_cla_loss is 0.43999,source_cla_loss is 0.11275,cda_loss is 0.60491,mda_loss is 1.91099

Epoch31, train_loss is 0.04751,test_loss is 3.67043, train_accuracy is 0.99650,test_accuracy is 0.35875,train_all_loss is 2.09769,target_cla_loss is 0.39988,source_cla_loss is 0.11007,cda_loss is 0.61966,mda_loss is 1.88566

Epoch32, train_loss is 0.05494,test_loss is 3.66357, train_accuracy is 0.99325,test_accuracy is 0.35325,train_all_loss is 2.16350,target_cla_loss is 0.42613,source_cla_loss is 0.12841,cda_loss is 0.61385,mda_loss is 1.93109

Epoch33, train_loss is 0.04867,test_loss is 3.86881, train_accuracy is 0.99600,test_accuracy is 0.34925,train_all_loss is 2.09730,target_cla_loss is 0.43453,source_cla_loss is 0.11316,cda_loss is 0.60486,mda_loss is 1.88020

Epoch34, train_loss is 0.04144,test_loss is 3.81459, train_accuracy is 0.99775,test_accuracy is 0.34900,train_all_loss is 2.03613,target_cla_loss is 0.43036,source_cla_loss is 0.09037,cda_loss is 0.58916,mda_loss is 1.84382

Epoch35, train_loss is 0.04441,test_loss is 3.66703, train_accuracy is 0.99775,test_accuracy is 0.35975,train_all_loss is 2.19538,target_cla_loss is 0.42232,source_cla_loss is 0.09971,cda_loss is 0.58817,mda_loss is 1.99462

Epoch36, train_loss is 0.05367,test_loss is 3.85576, train_accuracy is 0.99750,test_accuracy is 0.34825,train_all_loss is 2.16997,target_cla_loss is 0.42345,source_cla_loss is 0.12474,cda_loss is 0.59072,mda_loss is 1.94381

Epoch37, train_loss is 0.04486,test_loss is 3.92458, train_accuracy is 0.99500,test_accuracy is 0.34375,train_all_loss is 2.10291,target_cla_loss is 0.42692,source_cla_loss is 0.10405,cda_loss is 0.61382,mda_loss is 1.89478

Epoch38, train_loss is 0.04253,test_loss is 3.61390, train_accuracy is 0.99775,test_accuracy is 0.35300,train_all_loss is 1.99870,target_cla_loss is 0.41496,source_cla_loss is 0.10335,cda_loss is 0.61797,mda_loss is 1.79206

Epoch39, train_loss is 0.04100,test_loss is 3.58168, train_accuracy is 0.99825,test_accuracy is 0.35125,train_all_loss is 2.21055,target_cla_loss is 0.43278,source_cla_loss is 0.09059,cda_loss is 0.58880,mda_loss is 2.01780

Epoch40, train_loss is 0.04859,test_loss is 3.62033, train_accuracy is 0.99700,test_accuracy is 0.35875,train_all_loss is 2.10599,target_cla_loss is 0.39430,source_cla_loss is 0.10756,cda_loss is 0.59355,mda_loss is 1.89964

Epoch41, train_loss is 0.05128,test_loss is 3.67334, train_accuracy is 0.99550,test_accuracy is 0.35000,train_all_loss is 2.04580,target_cla_loss is 0.41525,source_cla_loss is 0.11839,cda_loss is 0.60271,mda_loss is 1.82561

Epoch42, train_loss is 0.05321,test_loss is 3.69000, train_accuracy is 0.99450,test_accuracy is 0.34000,train_all_loss is 2.08382,target_cla_loss is 0.37200,source_cla_loss is 0.10866,cda_loss is 0.57336,mda_loss is 1.88063

Epoch43, train_loss is 0.04858,test_loss is 3.74816, train_accuracy is 0.99575,test_accuracy is 0.34375,train_all_loss is 2.07542,target_cla_loss is 0.42002,source_cla_loss is 0.10167,cda_loss is 0.57270,mda_loss is 1.87447

Epoch44, train_loss is 0.04634,test_loss is 3.87925, train_accuracy is 0.99550,test_accuracy is 0.34175,train_all_loss is 2.08382,target_cla_loss is 0.37761,source_cla_loss is 0.10194,cda_loss is 0.58080,mda_loss is 1.88604

Epoch45, train_loss is 0.05111,test_loss is 3.70675, train_accuracy is 0.99375,test_accuracy is 0.35175,train_all_loss is 2.13403,target_cla_loss is 0.39479,source_cla_loss is 0.11404,cda_loss is 0.58542,mda_loss is 1.92198

Epoch46, train_loss is 0.05028,test_loss is 3.58449, train_accuracy is 0.99525,test_accuracy is 0.35550,train_all_loss is 2.08147,target_cla_loss is 0.41587,source_cla_loss is 0.10975,cda_loss is 0.57340,mda_loss is 1.87279

Epoch47, train_loss is 0.04632,test_loss is 3.54860, train_accuracy is 0.99750,test_accuracy is 0.35175,train_all_loss is 2.09232,target_cla_loss is 0.41676,source_cla_loss is 0.10310,cda_loss is 0.56944,mda_loss is 1.89061

Epoch48, train_loss is 0.05131,test_loss is 3.71509, train_accuracy is 0.99600,test_accuracy is 0.34425,train_all_loss is 2.03649,target_cla_loss is 0.37779,source_cla_loss is 0.11077,cda_loss is 0.57039,mda_loss is 1.83090

Epoch49, train_loss is 0.04696,test_loss is 3.79712, train_accuracy is 0.99675,test_accuracy is 0.36775,train_all_loss is 1.96412,target_cla_loss is 0.38354,source_cla_loss is 0.11067,cda_loss is 0.59040,mda_loss is 1.75606

Epoch50, train_loss is 0.04041,test_loss is 3.61169, train_accuracy is 0.99750,test_accuracy is 0.36175,train_all_loss is 2.00069,target_cla_loss is 0.37321,source_cla_loss is 0.09218,cda_loss is 0.58409,mda_loss is 1.81278

Epoch51, train_loss is 0.04318,test_loss is 3.74727, train_accuracy is 0.99750,test_accuracy is 0.35675,train_all_loss is 2.00346,target_cla_loss is 0.35602,source_cla_loss is 0.09815,cda_loss is 0.57770,mda_loss is 1.81194

Epoch52, train_loss is 0.04124,test_loss is 3.54835, train_accuracy is 0.99700,test_accuracy is 0.35150,train_all_loss is 2.03243,target_cla_loss is 0.36050,source_cla_loss is 0.09370,cda_loss is 0.57510,mda_loss is 1.84517

Epoch53, train_loss is 0.04390,test_loss is 3.82567, train_accuracy is 0.99725,test_accuracy is 0.35275,train_all_loss is 1.99703,target_cla_loss is 0.34785,source_cla_loss is 0.10884,cda_loss is 0.56020,mda_loss is 1.79739

Epoch54, train_loss is 0.04348,test_loss is 3.86695, train_accuracy is 0.99750,test_accuracy is 0.35850,train_all_loss is 1.94795,target_cla_loss is 0.38242,source_cla_loss is 0.10408,cda_loss is 0.56894,mda_loss is 1.74873

Epoch55, train_loss is 0.03718,test_loss is 3.56571, train_accuracy is 0.99775,test_accuracy is 0.36575,train_all_loss is 1.97170,target_cla_loss is 0.37277,source_cla_loss is 0.08584,cda_loss is 0.56435,mda_loss is 1.79215

Epoch56, train_loss is 0.04152,test_loss is 3.39552, train_accuracy is 0.99850,test_accuracy is 0.36575,train_all_loss is 2.02024,target_cla_loss is 0.36317,source_cla_loss is 0.10414,cda_loss is 0.58045,mda_loss is 1.82174

Epoch57, train_loss is 0.03893,test_loss is 3.68062, train_accuracy is 0.99875,test_accuracy is 0.35975,train_all_loss is 1.97693,target_cla_loss is 0.34846,source_cla_loss is 0.09432,cda_loss is 0.57112,mda_loss is 1.79064

Epoch58, train_loss is 0.04239,test_loss is 3.59319, train_accuracy is 0.99750,test_accuracy is 0.36800,train_all_loss is 1.99725,target_cla_loss is 0.37591,source_cla_loss is 0.10367,cda_loss is 0.56670,mda_loss is 1.79933

Epoch59, train_loss is 0.04245,test_loss is 3.66916, train_accuracy is 0.99800,test_accuracy is 0.35800,train_all_loss is 1.99187,target_cla_loss is 0.34185,source_cla_loss is 0.10355,cda_loss is 0.57694,mda_loss is 1.79644

Epoch60, train_loss is 0.04063,test_loss is 3.69465, train_accuracy is 0.99700,test_accuracy is 0.35850,train_all_loss is 1.92894,target_cla_loss is 0.34705,source_cla_loss is 0.09351,cda_loss is 0.56955,mda_loss is 1.74377

Epoch61, train_loss is 0.04399,test_loss is 3.56587, train_accuracy is 0.99650,test_accuracy is 0.35900,train_all_loss is 2.02587,target_cla_loss is 0.37329,source_cla_loss is 0.09890,cda_loss is 0.57672,mda_loss is 1.83196

Epoch62, train_loss is 0.03630,test_loss is 3.75333, train_accuracy is 0.99850,test_accuracy is 0.36125,train_all_loss is 2.06062,target_cla_loss is 0.33665,source_cla_loss is 0.09386,cda_loss is 0.58990,mda_loss is 1.87410

Epoch63, train_loss is 0.04550,test_loss is 3.58529, train_accuracy is 0.99850,test_accuracy is 0.35700,train_all_loss is 2.03326,target_cla_loss is 0.37309,source_cla_loss is 0.11314,cda_loss is 0.55576,mda_loss is 1.82723

Epoch64, train_loss is 0.03636,test_loss is 3.52662, train_accuracy is 0.99900,test_accuracy is 0.36075,train_all_loss is 1.94026,target_cla_loss is 0.39102,source_cla_loss is 0.09922,cda_loss is 0.58333,mda_loss is 1.74360

Epoch65, train_loss is 0.03628,test_loss is 3.86440, train_accuracy is 0.99850,test_accuracy is 0.35875,train_all_loss is 1.99657,target_cla_loss is 0.32306,source_cla_loss is 0.09738,cda_loss is 0.56515,mda_loss is 1.81036

Epoch66, train_loss is 0.04234,test_loss is 3.57190, train_accuracy is 0.99775,test_accuracy is 0.36675,train_all_loss is 1.92315,target_cla_loss is 0.36443,source_cla_loss is 0.10319,cda_loss is 0.55765,mda_loss is 1.72775

Epoch67, train_loss is 0.03892,test_loss is 3.82084, train_accuracy is 0.99650,test_accuracy is 0.35025,train_all_loss is 1.97754,target_cla_loss is 0.32409,source_cla_loss is 0.10061,cda_loss is 0.57595,mda_loss is 1.78693

Epoch68, train_loss is 0.04147,test_loss is 3.64863, train_accuracy is 0.99775,test_accuracy is 0.35175,train_all_loss is 2.04404,target_cla_loss is 0.33164,source_cla_loss is 0.09546,cda_loss is 0.55667,mda_loss is 1.85975

Epoch69, train_loss is 0.03997,test_loss is 3.96786, train_accuracy is 0.99550,test_accuracy is 0.35550,train_all_loss is 1.87499,target_cla_loss is 0.33787,source_cla_loss is 0.09773,cda_loss is 0.55921,mda_loss is 1.68755

Epoch70, train_loss is 0.03543,test_loss is 3.93736, train_accuracy is 0.99975,test_accuracy is 0.35850,train_all_loss is 1.96610,target_cla_loss is 0.34311,source_cla_loss is 0.08989,cda_loss is 0.56598,mda_loss is 1.78530

Epoch71, train_loss is 0.03870,test_loss is 4.00044, train_accuracy is 0.99825,test_accuracy is 0.34775,train_all_loss is 1.97678,target_cla_loss is 0.33252,source_cla_loss is 0.10044,cda_loss is 0.55977,mda_loss is 1.78711

Epoch72, train_loss is 0.03661,test_loss is 4.20446, train_accuracy is 0.99850,test_accuracy is 0.33975,train_all_loss is 1.91947,target_cla_loss is 0.32661,source_cla_loss is 0.09100,cda_loss is 0.55292,mda_loss is 1.74052

Epoch73, train_loss is 0.03299,test_loss is 4.03290, train_accuracy is 0.99975,test_accuracy is 0.35475,train_all_loss is 1.89665,target_cla_loss is 0.32557,source_cla_loss is 0.09065,cda_loss is 0.57111,mda_loss is 1.71633

Epoch74, train_loss is 0.03557,test_loss is 3.74976, train_accuracy is 0.99875,test_accuracy is 0.35050,train_all_loss is 1.94581,target_cla_loss is 0.33794,source_cla_loss is 0.09797,cda_loss is 0.57739,mda_loss is 1.75631

Epoch75, train_loss is 0.03606,test_loss is 3.90655, train_accuracy is 0.99900,test_accuracy is 0.35175,train_all_loss is 1.91040,target_cla_loss is 0.36908,source_cla_loss is 0.09017,cda_loss is 0.56595,mda_loss is 1.72673

Epoch76, train_loss is 0.02996,test_loss is 4.31643, train_accuracy is 0.99850,test_accuracy is 0.35225,train_all_loss is 1.87109,target_cla_loss is 0.35043,source_cla_loss is 0.08601,cda_loss is 0.58625,mda_loss is 1.69141

Epoch77, train_loss is 0.03729,test_loss is 4.11600, train_accuracy is 0.99675,test_accuracy is 0.36625,train_all_loss is 1.98549,target_cla_loss is 0.30353,source_cla_loss is 0.09079,cda_loss is 0.59306,mda_loss is 1.80504

Epoch78, train_loss is 0.03488,test_loss is 4.00549, train_accuracy is 0.99900,test_accuracy is 0.36025,train_all_loss is 1.94518,target_cla_loss is 0.31716,source_cla_loss is 0.09354,cda_loss is 0.56666,mda_loss is 1.76326

Epoch79, train_loss is 0.03735,test_loss is 3.74691, train_accuracy is 0.99775,test_accuracy is 0.36200,train_all_loss is 1.98189,target_cla_loss is 0.36152,source_cla_loss is 0.10226,cda_loss is 0.55654,mda_loss is 1.78782

Epoch80, train_loss is 0.03132,test_loss is 3.66145, train_accuracy is 0.99925,test_accuracy is 0.37600,train_all_loss is 1.92512,target_cla_loss is 0.32335,source_cla_loss is 0.09147,cda_loss is 0.55130,mda_loss is 1.74618

Epoch81, train_loss is 0.04329,test_loss is 3.67632, train_accuracy is 0.99775,test_accuracy is 0.36875,train_all_loss is 2.01323,target_cla_loss is 0.36775,source_cla_loss is 0.12122,cda_loss is 0.53975,mda_loss is 1.80126

Epoch82, train_loss is 0.03796,test_loss is 3.88163, train_accuracy is 0.99800,test_accuracy is 0.36575,train_all_loss is 1.94328,target_cla_loss is 0.34789,source_cla_loss is 0.09737,cda_loss is 0.53522,mda_loss is 1.75760

Epoch83, train_loss is 0.03361,test_loss is 3.93112, train_accuracy is 0.99850,test_accuracy is 0.35125,train_all_loss is 1.94797,target_cla_loss is 0.34964,source_cla_loss is 0.09108,cda_loss is 0.56788,mda_loss is 1.76514

Epoch84, train_loss is 0.03604,test_loss is 3.92195, train_accuracy is 0.99900,test_accuracy is 0.37450,train_all_loss is 1.89947,target_cla_loss is 0.35758,source_cla_loss is 0.09633,cda_loss is 0.56305,mda_loss is 1.71108

Epoch85, train_loss is 0.03087,test_loss is 3.79357, train_accuracy is 0.99850,test_accuracy is 0.37225,train_all_loss is 1.93883,target_cla_loss is 0.33742,source_cla_loss is 0.08896,cda_loss is 0.57650,mda_loss is 1.75848

Epoch86, train_loss is 0.03754,test_loss is 3.96986, train_accuracy is 0.99875,test_accuracy is 0.36800,train_all_loss is 1.87951,target_cla_loss is 0.33196,source_cla_loss is 0.10165,cda_loss is 0.56499,mda_loss is 1.68817

Epoch87, train_loss is 0.03479,test_loss is 4.27059, train_accuracy is 0.99875,test_accuracy is 0.36100,train_all_loss is 1.86776,target_cla_loss is 0.34986,source_cla_loss is 0.10001,cda_loss is 0.55966,mda_loss is 1.67679

Epoch88, train_loss is 0.03385,test_loss is 4.07302, train_accuracy is 0.99900,test_accuracy is 0.36325,train_all_loss is 1.98173,target_cla_loss is 0.32548,source_cla_loss is 0.08992,cda_loss is 0.56596,mda_loss is 1.80266

Epoch89, train_loss is 0.03606,test_loss is 3.76652, train_accuracy is 0.99825,test_accuracy is 0.36950,train_all_loss is 1.99725,target_cla_loss is 0.36634,source_cla_loss is 0.10286,cda_loss is 0.54637,mda_loss is 1.80312

Epoch90, train_loss is 0.03380,test_loss is 3.84020, train_accuracy is 0.99900,test_accuracy is 0.36500,train_all_loss is 1.91125,target_cla_loss is 0.31314,source_cla_loss is 0.08440,cda_loss is 0.53862,mda_loss is 1.74168

Epoch91, train_loss is 0.03329,test_loss is 3.78597, train_accuracy is 0.99875,test_accuracy is 0.36275,train_all_loss is 1.84015,target_cla_loss is 0.35008,source_cla_loss is 0.08478,cda_loss is 0.53946,mda_loss is 1.66642

Epoch92, train_loss is 0.03170,test_loss is 3.90322, train_accuracy is 0.99925,test_accuracy is 0.36850,train_all_loss is 1.84773,target_cla_loss is 0.31886,source_cla_loss is 0.07877,cda_loss is 0.55678,mda_loss is 1.68140

Epoch93, train_loss is 0.02658,test_loss is 4.19532, train_accuracy is 0.99925,test_accuracy is 0.36575,train_all_loss is 1.83341,target_cla_loss is 0.28239,source_cla_loss is 0.06854,cda_loss is 0.56702,mda_loss is 1.67993

Epoch94, train_loss is 0.02931,test_loss is 4.24633, train_accuracy is 0.99950,test_accuracy is 0.36750,train_all_loss is 1.84162,target_cla_loss is 0.28099,source_cla_loss is 0.08070,cda_loss is 0.54899,mda_loss is 1.67792

Epoch95, train_loss is 0.03792,test_loss is 4.27938, train_accuracy is 0.99750,test_accuracy is 0.37175,train_all_loss is 1.89878,target_cla_loss is 0.30210,source_cla_loss is 0.10248,cda_loss is 0.54445,mda_loss is 1.71165

Epoch96, train_loss is 0.02876,test_loss is 3.81267, train_accuracy is 0.99925,test_accuracy is 0.37500,train_all_loss is 1.88203,target_cla_loss is 0.32059,source_cla_loss is 0.07481,cda_loss is 0.55576,mda_loss is 1.71959

Epoch97, train_loss is 0.03724,test_loss is 3.74088, train_accuracy is 0.99775,test_accuracy is 0.37925,train_all_loss is 1.93946,target_cla_loss is 0.33789,source_cla_loss is 0.10656,cda_loss is 0.56191,mda_loss is 1.74292

Epoch98, train_loss is 0.03961,test_loss is 4.07593, train_accuracy is 0.99600,test_accuracy is 0.36750,train_all_loss is 1.90211,target_cla_loss is 0.31981,source_cla_loss is 0.10458,cda_loss is 0.57807,mda_loss is 1.70774

Epoch99, train_loss is 0.02847,test_loss is 4.25489, train_accuracy is 0.99975,test_accuracy is 0.35350,train_all_loss is 1.84025,target_cla_loss is 0.30272,source_cla_loss is 0.07335,cda_loss is 0.59257,mda_loss is 1.67737

Epoch100, train_loss is 0.02855,test_loss is 4.00182, train_accuracy is 0.99850,test_accuracy is 0.36100,train_all_loss is 1.82983,target_cla_loss is 0.28872,source_cla_loss is 0.07271,cda_loss is 0.56430,mda_loss is 1.67182
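For post-processing logs in the format above (e.g. finding the best target-domain accuracy across epochs), a small helper like the following can pull metrics out of each line; the function name and the two sample lines are illustrative only, not part of the original code:

```python
import re

def parse_test_accuracy(line):
    """Extract the test_accuracy value from one training-log line."""
    m = re.search(r"test_accuracy is ([0-9.]+)", line)
    return float(m.group(1)) if m else None

log = [
    "Epoch97, train_loss is 0.03724,test_loss is 3.74088, train_accuracy is 0.99775,test_accuracy is 0.37925",
    "Epoch100, train_loss is 0.02855,test_loss is 4.00182, train_accuracy is 0.99850,test_accuracy is 0.36100",
]
print(max(parse_test_accuracy(l) for l in log))  # → 0.37925
```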

The author holds a PhD in engineering and serves as a reviewer for Mechanical Systems and Signal Processing, has been recognized as an outstanding reviewer for 《中国电机工程学报》, and reviews for EI-indexed journals including 《控制与决策》, 《系统工程与电子技术》, 《电力系统保护与控制》, and 《宇航学报》.

Areas of expertise: modern signal processing, machine learning, deep learning, digital twins, time series analysis, equipment defect detection, equipment anomaly detection, and intelligent fault diagnosis and prognostics and health management (PHM).

This concludes the article on bearing fault diagnosis with a deep discriminative transfer learning network in Python; I hope it proves helpful.



http://www.chinasem.cn/article/712445
