ELM (Extreme Learning Machine) Source Code

2024-04-27 04:32
Tags: source code, ELM, extreme learning machine

This article presents the source code of the ELM (Extreme Learning Machine). We hope it serves as a useful reference for developers working on related problems; feel free to follow along.

ELM and SVM were once even more popular than CNNs; here is the classic implementation, posted for the record:

function [TrainingTime, TestingTime, TrainingAccuracy, TestingAccuracy] = elm(train_data, test_data, Elm_Type, NumberofHiddenNeurons, ActivationFunction)

% Usage: elm(TrainingData_File, TestingData_File, Elm_Type, NumberofHiddenNeurons, ActivationFunction)
% OR:    [TrainingTime, TestingTime, TrainingAccuracy, TestingAccuracy] = elm(TrainingData_File, TestingData_File, Elm_Type, NumberofHiddenNeurons, ActivationFunction)
%
% Input:
% TrainingData_File     - Filename of training data set
% TestingData_File      - Filename of testing data set
% Elm_Type              - 0 for regression; 1 for (both binary and multi-classes) classification
% NumberofHiddenNeurons - Number of hidden neurons assigned to the ELM
% ActivationFunction    - Type of activation function:
%                           'sig' for Sigmoidal function
%                           'sin' for Sine function
%                           'hardlim' for Hardlim function
%                           'tribas' for Triangular basis function
%                           'radbas' for Radial basis function (for additive type of SLFNs instead of RBF type of SLFNs)
%
% Output: 
% TrainingTime          - Time (seconds) spent on training ELM
% TestingTime           - Time (seconds) spent on predicting ALL testing data
% TrainingAccuracy      - Training accuracy: 
%                           RMSE for regression or correct classification rate for classification
% TestingAccuracy       - Testing accuracy: 
%                           RMSE for regression or correct classification rate for classification
%
% MULTI-CLASS CLASSIFICATION: NUMBER OF OUTPUT NEURONS WILL BE AUTOMATICALLY SET EQUAL TO NUMBER OF CLASSES
% FOR EXAMPLE, if there are 7 classes in all, there will be 7 output
% neurons; if neuron 5 has the highest output, the input belongs to the 5th class
%
% Sample1 regression: [TrainingTime, TestingTime, TrainingAccuracy, TestingAccuracy] = elm('sinc_train', 'sinc_test', 0, 20, 'sig')
% Sample2 classification: elm('diabetes_train', 'diabetes_test', 1, 20, 'sig')
%%%%%    Authors:    MR QIN-YU ZHU AND DR GUANG-BIN HUANG
%%%%    NANYANG TECHNOLOGICAL UNIVERSITY, SINGAPORE
%%%%    EMAIL:      EGBHUANG@NTU.EDU.SG; GBHUANG@IEEE.ORG
%%%%    WEBSITE:    http://www.ntu.edu.sg/eee/icis/cv/egbhuang.htm
%%%%    DATE:       APRIL 2004

%%%%%%%%%%% Macro definition
REGRESSION=0;
CLASSIFIER=1;

%%%%%%%%%%% Load training dataset
%train_data=load(TrainingData_File);
T=train_data(:,1)';
P=train_data(:,2:size(train_data,2))';
clear train_data;                                   %   Release raw training data array

%%%%%%%%%%% Load testing dataset
%test_data=load(TestingData_File);
TV.T=test_data(:,1)';
TV.P=test_data(:,2:size(test_data,2))';
clear test_data;                                    %   Release raw testing data array

NumberofTrainingData=size(P,2);
NumberofTestingData=size(TV.P,2);
NumberofInputNeurons=size(P,1);

if Elm_Type~=REGRESSION
    %%%%%%%%%%%% Preprocessing the data of classification
    sorted_target=sort(cat(2,T,TV.T),2);
    label=zeros(1,1);                               %   Find and save in 'label' class label from training and testing data sets
    label(1,1)=sorted_target(1,1);
    j=1;
    for i = 2:(NumberofTrainingData+NumberofTestingData)
        if sorted_target(1,i) ~= label(1,j)
            j=j+1;
            label(1,j) = sorted_target(1,i);
        end
    end
    number_class=j;
    NumberofOutputNeurons=number_class;

    %%%%%%%%%% Processing the targets of training
    temp_T=zeros(NumberofOutputNeurons, NumberofTrainingData);
    for i = 1:NumberofTrainingData
        for j = 1:number_class
            if label(1,j) == T(1,i)
                break;
            end
        end
        temp_T(j,i)=1;
    end
    T=temp_T*2-1;

    %%%%%%%%%% Processing the targets of testing
    temp_TV_T=zeros(NumberofOutputNeurons, NumberofTestingData);
    for i = 1:NumberofTestingData
        for j = 1:number_class
            if label(1,j) == TV.T(1,i)
                break;
            end
        end
        temp_TV_T(j,i)=1;
    end
    TV.T=temp_TV_T*2-1;

end                                                 %   end if of Elm_Type

%%%%%%%%%%% Calculate weights & biases
start_time_train=cputime;

%%%%%%%%%%% Random generate input weights InputWeight (w_i) and biases BiasofHiddenNeurons (b_i) of hidden neurons
InputWeight=rand(NumberofHiddenNeurons,NumberofInputNeurons)*2-1;
BiasofHiddenNeurons=rand(NumberofHiddenNeurons,1);
tempH=InputWeight*P;
clear P;                                            %   Release input of training data 
ind=ones(1,NumberofTrainingData);
BiasMatrix=BiasofHiddenNeurons(:,ind);              %   Extend the bias matrix BiasofHiddenNeurons to match the dimension of H
tempH=tempH+BiasMatrix;

%%%%%%%%%%% Calculate hidden neuron output matrix H
switch lower(ActivationFunction)
    case {'sig','sigmoid'}
        %%%%%%%% Sigmoid
        H = 1 ./ (1 + exp(-tempH));
    case {'sin','sine'}
        %%%%%%%% Sine
        H = sin(tempH);
    case {'hardlim'}
        %%%%%%%% Hard Limit
        H = double(hardlim(tempH));
    case {'tribas'}
        %%%%%%%% Triangular basis function
        H = tribas(tempH);
    case {'radbas'}
        %%%%%%%% Radial basis function
        H = radbas(tempH);
        %%%%%%%% More activation functions can be added here
end
clear tempH;                                        %   Release the temporary array for calculation of hidden neuron output matrix H

%%%%%%%%%%% Calculate output weights OutputWeight (beta_i)
OutputWeight=pinv(H') * T';                        % slower implementation
%OutputWeight=inv(eye(size(H,1))/C+H * H') * H * T';   % faster method 1 implementation; one can set regularization factor C properly in classification applications
%OutputWeight=(eye(size(H,1))/C+H * H') \ H * T';      % faster method 2 implementation; one can set regularization factor C properly in classification applications

%If you use faster methods or kernel method, PLEASE CITE in your paper properly:
%Guang-Bin Huang, Hongming Zhou, Xiaojian Ding, and Rui Zhang, "Extreme Learning Machine for Regression and Multi-Class Classification," submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence, October 2010.

end_time_train=cputime;
TrainingTime=end_time_train-start_time_train        %   Calculate CPU time (seconds) spent for training ELM

%%%%%%%%%%% Calculate the training accuracy
Y=(H' * OutputWeight)';                             %   Y: the actual output of the training data
if Elm_Type == REGRESSION
    TrainingAccuracy=sqrt(mse(T - Y))               %   Calculate training accuracy (RMSE) for regression case
end
clear H;

%%%%%%%%%%% Calculate the output of testing input
start_time_test=cputime;
tempH_test=InputWeight*TV.P;
TV.P = [];              %   Release input of testing data (clear cannot be applied to a struct field)
ind=ones(1,NumberofTestingData);
BiasMatrix=BiasofHiddenNeurons(:,ind);              %   Extend the bias matrix BiasofHiddenNeurons to match the dimension of H
tempH_test=tempH_test + BiasMatrix;
switch lower(ActivationFunction)
    case {'sig','sigmoid'}
        %%%%%%%% Sigmoid
        H_test = 1 ./ (1 + exp(-tempH_test));
    case {'sin','sine'}
        %%%%%%%% Sine
        H_test = sin(tempH_test);
    case {'hardlim'}
        %%%%%%%% Hard Limit
        H_test = hardlim(tempH_test);
    case {'tribas'}
        %%%%%%%% Triangular basis function
        H_test = tribas(tempH_test);
    case {'radbas'}
        %%%%%%%% Radial basis function
        H_test = radbas(tempH_test);
        %%%%%%%% More activation functions can be added here
end
TY=(H_test' * OutputWeight)';                       %   TY: the actual output of the testing data
end_time_test=cputime;
TestingTime=end_time_test-start_time_test           %   Calculate CPU time (seconds) spent by ELM predicting the whole testing data

if Elm_Type == REGRESSION
    TestingAccuracy=sqrt(mse(TV.T - TY))            %   Calculate testing accuracy (RMSE) for regression case
end

if Elm_Type == CLASSIFIER
    %%%%%%%%%% Calculate training & testing classification accuracy
    MissClassificationRate_Training=0;
    MissClassificationRate_Testing=0;

    for i = 1 : size(T, 2)
        [x, label_index_expected]=max(T(:,i));
        [x, label_index_actual]=max(Y(:,i));
        if label_index_actual~=label_index_expected
            MissClassificationRate_Training=MissClassificationRate_Training+1;
        end
    end
    TrainingAccuracy=1-MissClassificationRate_Training/size(T,2)
    for i = 1 : size(TV.T, 2)
        [x, label_index_expected]=max(TV.T(:,i));
        [x, label_index_actual]=max(TY(:,i));
        if label_index_actual~=label_index_expected
            MissClassificationRate_Testing=MissClassificationRate_Testing+1;
        end
    end
    TestingAccuracy=1-MissClassificationRate_Testing/size(TV.T,2)
end
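Note that this posted version differs slightly from the original ELM.m: it takes data matrices (train_data, test_data) directly instead of filenames, since the load calls inside the function are commented out. A minimal usage sketch under that assumption, reusing the 'sinc_train' / 'sinc_test' files from the sample calls in the header (one sample per row, target in the first column, features in the remaining columns):

% Load the data files first, then pass the matrices to this modified elm().
train_data = load('sinc_train');    % columns: [target, feature_1, ..., feature_n]
test_data  = load('sinc_test');

% Regression (Elm_Type = 0) with 20 sigmoid hidden neurons, as in Sample1 above.
[TrainTime, TestTime, TrainRMSE, TestRMSE] = elm(train_data, test_data, 0, 20, 'sig');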


This concludes the article on the ELM (Extreme Learning Machine) source code. We hope it is helpful to fellow programmers!



