A Scalable Approach to Simultaneous Evolutionary Instance and Feature Selection

2024-05-09 00:58

This post introduces a scalable approach to simultaneous evolutionary instance and feature selection: reading notes on the paper cited below.

#Citation

##BibTeX

@article{GARCIAPEDRAJAS2013150,
  title = {A scalable approach to simultaneous evolutionary instance and feature selection},
  journal = {Information Sciences},
  volume = {228},
  pages = {150--174},
  year = {2013},
  issn = {0020-0255},
  doi = {https://doi.org/10.1016/j.ins.2012.10.006},
  url = {http://www.sciencedirect.com/science/article/pii/S0020025512006718},
  author = {Nicol\'{a}s Garc\'{\i}a-Pedrajas and Aida de Haro-Garc\'{\i}a and Javier P\'{e}rez-Rodr\'{\i}guez},
  keywords = {Simultaneous instance and feature selection, Instance selection, Feature selection, Instance-based learning, Very large problems}
}

##Normal

Nicolás García-Pedrajas, Aida de Haro-García, Javier Pérez-Rodríguez,
A scalable approach to simultaneous evolutionary instance and feature selection,
Information Sciences,
Volume 228,
2013,
Pages 150-174,
ISSN 0020-0255,
https://doi.org/10.1016/j.ins.2012.10.006.
(http://www.sciencedirect.com/science/article/pii/S0020025512006718)
Keywords: Simultaneous instance and feature selection; Instance selection; Feature selection; Instance-based learning; Very large problems


#Abstract

enormous amounts of information
bioinformatics, security and intrusion detection, and text mining

data reduction — removing

  1. missing,
  2. redundant,
  3. information-poor and/or
  4. erroneous data

from the dataset to obtain a tractable problem size. The most common data reduction methods:

  1. feature selection
  2. feature-value discretization
  3. instance selection

the divide-and-conquer principle + bookkeeping

in linear time


#Main content

Instance selection: choosing a subset of the total available data to achieve the original purpose of the data mining application as though all the data were being used

  1. prototype selection (k-nearest neighbors)
  2. obtaining the training set for a learning algorithm (classification trees or neural networks)

"the isolation of the smallest set of instances that enable us to predict the class of a query instance with the same (or higher) accuracy than the original set"

The objectives of feature selection:

  1. To avoid over-fitting and to improve model performance
  2. To provide faster and more cost-effective models
  3. To gain a deeper insight into the underlying processes that generated the data

simultaneous instance and feature selection


scalable simultaneous instance and feature selection method (SSIFSM)


#Related work

three fundamental approaches for scaling up learning methods:

  1. designing fast algorithms
  2. partitioning the data
  3. using a relational representation

The stratification strategy
splits the training data into disjoint strata with equal class distribution
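
A minimal sketch of this strategy, assuming a NumPy label array; the function name `make_strata` and the round-robin dealing are illustrative choices, not taken from the paper:

```python
import numpy as np

def make_strata(y, n_strata, seed=0):
    """Split indices into disjoint strata that preserve the class distribution."""
    rng = np.random.default_rng(seed)
    strata = [[] for _ in range(n_strata)]
    for cls in np.unique(y):
        idx = rng.permutation(np.where(y == cls)[0])
        # deal this class's instances round-robin so every stratum
        # receives (almost) the same number of instances per class
        for i, j in enumerate(idx):
            strata[i % n_strata].append(j)
    return [np.array(s) for s in strata]

# usage: 3 disjoint strata of a toy label vector
y = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 0])
for s in make_strata(y, 3):
    print(s, y[s])
```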



#Algorithm

Scalable simultaneous instance and feature selection method (SSIFSM)

$K$ classes and $N$ training instances with $M$ features
$T = \left\{ (x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N) \right\}$
$Y = \{1, \ldots, K\}$

simultaneous instance and feature selection

Bookkeeping

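The notes do not spell the mechanism out; the following is a minimal sketch of one plausible form of bookkeeping, assuming Euclidean distance: the per-feature squared differences within a subset are cached once, so every fitness evaluation during the evolution is answered by table lookups instead of recomputing distances. All names are illustrative.

```python
import numpy as np

def per_feature_tables(X):
    """Bookkeeping: cache (x_if - x_jf)^2 once per subset, an O(n^2 m) cost
    that is affordable because the subsets are deliberately small."""
    diff = X[:, None, :] - X[None, :, :]          # shape (n, n, m)
    return diff ** 2

def loo_1nn_accuracy(tables, y, inst, feat):
    """Leave-one-out 1-NN accuracy of one candidate selection, using only
    lookups into the cached tables. inst/feat are boolean masks playing
    the role of one chromosome (instances part + features part)."""
    D = tables[:, :, feat].sum(axis=-1)           # distances under this feature mask
    sel = np.flatnonzero(inst)
    correct = 0
    for q in range(len(y)):                       # classify every instance of the subset
        cand = sel[sel != q]                      # 1-NN among the *selected* instances
        if len(cand) == 0:
            return 0.0
        nn = cand[np.argmin(D[q, cand])]
        correct += y[nn] == y[q]
    return correct / len(y)

# toy usage on one small subset
rng = np.random.default_rng(0)
X, y = rng.random((20, 5)), rng.integers(0, 2, 20)
T = per_feature_tables(X)
print(loo_1nn_accuracy(T, y, inst=rng.random(20) < 0.5, feat=rng.random(5) < 0.5))
```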

an evolutionary algorithm


a CHC algorithm —
Cross-generational elitist selection, Heterogeneous recombination and Cataclysmic mutation

  1. To obtain the next generation for a population of size $P$, the parents and the offspring are put together and the $P$ best individuals are selected.
  2. To avoid premature convergence, only different individuals separated by a threshold Hamming distance — in our implementation, the length of the chromosome divided by four — are allowed to mate.
  3. During crossover, two parents exchange exactly half of their nonmatching bits. This operator is referred to as Half Uniform Crossover (HUX).
  4. Mutation is not used during the regular evolution. To avoid premature convergence or stagnation of the search, the population is reinitialized when the individuals are not diverse. In such a case, only the best individual is retained in the new population. (A sketch of the whole loop follows this list.)
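
A compact sketch of that loop under the usual CHC conventions, with a generic bitstring fitness; the population size, generation count, and the 35% restart flip rate are conventional CHC defaults, not settings from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def hux(a, b):
    """Half Uniform Crossover: the children swap exactly half of the
    positions where the parents differ."""
    diff = np.flatnonzero(a != b)
    swap = rng.choice(diff, size=len(diff) // 2, replace=False)
    c, d = a.copy(), b.copy()
    c[swap], d[swap] = b[swap], a[swap]
    return c, d

def chc(fitness, length, pop_size=50, generations=200):
    """Elitist CHC loop with incest prevention and cataclysmic restarts."""
    pop = rng.integers(0, 2, (pop_size, length), dtype=np.int8)
    d = length // 4                                   # incest-prevention threshold
    best = max(pop, key=fitness).copy()
    for _ in range(generations):
        parents = rng.permutation(pop)
        kids = []
        for a, b in zip(parents[0::2], parents[1::2]):
            if np.count_nonzero(a != b) // 2 > d:     # mate only distant parents
                kids.extend(hux(a, b))
        if kids:                                      # elitist (mu + lambda) survival
            merged = sorted(np.vstack([pop, kids]), key=fitness, reverse=True)
            pop = np.array(merged[:pop_size])
        else:
            d -= 1                                    # population is converging
        if d < 0:                                     # cataclysmic restart: keep the best,
            best = max([best, *pop], key=fitness).copy()
            flip = rng.random((pop_size, length)) < 0.35
            pop = np.where(flip, 1 - best, best).astype(np.int8)
            pop[0] = best                             # the rest are heavy mutants of it
            d = length // 4
    return max([best, *pop], key=fitness)

# toy usage: maximize the number of ones in a 40-bit string
print(chc(lambda x: int(x.sum()), length=40, pop_size=20, generations=50).sum())
```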


$acc(x)$ — the accuracy, measured with a 1-NN classifier
$red(x)$ — the reduction — $red(x) = 1 - \frac{N'}{N}$, where $N'$ is the number of retained items

$\alpha$ — a weighting parameter combining both terms, set to $0.5$
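
The $\alpha$ note implies a weighted-sum fitness; a one-line sketch, assuming the standard form (the exact combination is an assumption, not quoted from the paper):

```python
def fitness(acc, red, alpha=0.5):
    """Weighted-sum fitness implied by the notes: alpha balances 1-NN
    accuracy acc(x) against the achieved reduction red(x)."""
    return alpha * acc + (1 - alpha) * red

print(fitness(acc=0.92, red=0.80))   # -> 0.86
```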

the instance and feature selection

record the number of times that each instance and feature has been selected to be kept
— the number of votes (analogous to the combination of classifiers in an ensemble by voting)
— repeated for $r$ rounds
subsets in which all the instances belong to the same class are ignored

the combination of the different rounds

the philosophy of ensembles of classifiers
several weak learners are combined to form a strong classifier — several weak (in the sense that they are applied to subsets of the training data) instance and feature selection procedures are combined to produce a strong and fast selection method

bagging or boosting

number of votes of an instance — in $[0, r \cdot s]$, since
an instance is in $s$ subsets in every round.
Each feature is in $t$ subsets each round, so
the number of votes of a feature is in $[0, r \cdot t]$.
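
A sketch of this round-and-vote bookkeeping; `select` stands in for the per-subset CHC run (any instance-plus-feature selector fits), and the subset sizes and names are illustrative:

```python
import numpy as np

def vote_rounds(X, y, select, r=10, n=1000, m=7, seed=0):
    """Run r rounds; each round partitions instances into N/n groups and
    features into M/m groups, applies `select` to every (group x group)
    subset, and counts a vote for every instance/feature kept."""
    rng = np.random.default_rng(seed)
    N, M = X.shape
    inst_votes = np.zeros(N, dtype=int)
    feat_votes = np.zeros(M, dtype=int)
    for _ in range(r):
        row_groups = np.array_split(rng.permutation(N), max(1, N // n))
        col_groups = np.array_split(rng.permutation(M), max(1, M // m))
        for rows in row_groups:
            for cols in col_groups:
                if len(np.unique(y[rows])) < 2:       # single-class subset: ignored
                    continue
                keep_i, keep_f = select(X[np.ix_(rows, cols)], y[rows])
                inst_votes[rows[keep_i]] += 1         # an instance sees s = M/m subsets
                feat_votes[cols[keep_f]] += 1         # a feature sees t = N/n subsets
    return inst_votes, feat_votes

# toy usage with a dummy selector that keeps everything
iv, fv = vote_rounds(np.random.default_rng(1).random((200, 10)),
                     np.random.default_rng(2).integers(0, 2, 200),
                     select=lambda Xs, ys: (np.ones(len(ys), bool),
                                            np.ones(Xs.shape[1], bool)),
                     r=3, n=50, m=5)
print(iv.max(), fv.max())   # at most r*s = 6 votes per instance, r*t = 12 per feature
```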

threshold on the number of votes
majority voting — keep everything with at least half of the possible votes (but the best threshold depends heavily on the problem)
so the thresholds are determined automatically


$\theta_i$ — threshold for instances
$\theta_f$ — threshold for features
$T(\theta_i, \theta_f)$ — the subset of the training set kept under those thresholds
evaluated with a 1-NN classifier
over all the possible threshold values
$\beta$ — a weighting parameter, close to $1$; set to $0.75$
$red(x) = 1 - \dfrac{s_i + s_f}{N + M}$, where $s_i$ and $s_f$ count the selected instances and features

evaluating all possible pairs of instance and feature thresholds — $[0, r \cdot s] \times [0, r \cdot t]$ candidates, each costing $O(N^2 M)$ with a 1-NN classifier — is too expensive, so
a divide-and-conquer method
with the same partition philosophy is used:
the training set is divided into random disjoint subsets, the accuracy is estimated separately in each subset, and the average evaluation over all the subsets is used as the fitness of each pair of thresholds
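
A sketch of that evaluation, assuming the threshold criterion is the weighted sum beta * acc + (1 - beta) * red suggested by the notes; function names are illustrative:

```python
import numpy as np

def subset_1nn_acc(Xp, yp, keep):
    """1-NN accuracy inside one partition, classifying every instance
    against the kept instances only (leave-one-out for kept ones)."""
    sel = np.flatnonzero(keep)
    ok = 0
    for q in range(len(yp)):
        cand = sel[sel != q]
        if len(cand) == 0:
            return 0.0
        d = ((Xp[cand] - Xp[q]) ** 2).sum(axis=1)
        ok += yp[cand[np.argmin(d)]] == yp[q]
    return ok / len(yp)

def choose_thresholds(X, y, inst_votes, feat_votes, beta=0.75, n_parts=10, seed=0):
    """Scan all (theta_i, theta_f) pairs; score each by beta*acc + (1-beta)*red,
    estimating acc as the average over random disjoint partitions rather than
    on the full O(N^2 M) training set."""
    rng = np.random.default_rng(seed)
    N, M = X.shape
    parts = np.array_split(rng.permutation(N), n_parts)
    best_score, best_pair = -np.inf, None
    for ti in np.unique(inst_votes):
        for tf in np.unique(feat_votes):
            inst, feat = inst_votes >= ti, feat_votes >= tf
            if not inst.any() or not feat.any():
                continue
            red = 1 - (inst.sum() + feat.sum()) / (N + M)
            acc = np.mean([subset_1nn_acc(X[np.ix_(p, np.flatnonzero(feat))],
                                          y[p], inst[p]) for p in parts])
            score = beta * acc + (1 - beta) * red
            if score > best_score:
                best_score, best_pair = score, (ti, tf)
    return best_pair

# usage: theta = choose_thresholds(X, y, iv, fv) with the votes from the rounds above
```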


The scalability of the method is assured by the following features:

  1. Application of the method to small datasets. Due to the small size of the subsets in which the selection process is applied, the selection process will always be fast, regardless of the complexity of the instance and feature selection method used in the subsets.
  2. Only small datasets must be kept in memory. This allows the application of instance and feature selection when datasets do not fit into memory.
  3. Bookkeeping is applied in the evolution for every subset.

##Complexity of our methodology

linear in the number of instances, N, of the dataset

The process of random partition — $O(NM)$
subsets of fixed size — $n$ instances and $m$ features
the complexity of the CHC selection algorithm:
$K$ — the number of operations required by the selection algorithm to perform its task in a dataset of size $n$ (a constant, since the subset size is fixed)
for $N$ instances and $M$ features,
this selection process runs once for each subset, i.e. $\frac{N}{n} \cdot \frac{M}{m}$ times —
$O\left( \frac{N}{n} \frac{M}{m} K \right)$
with $r$ rounds —
$O\left( r \frac{N}{n} \frac{M}{m} K \right)$
linear in $N$ (and in $M$)
easy parallel implementation
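
A quick worked example of why this is linear (the numbers are hypothetical, not from the paper):

```python
# Illustrative cost bookkeeping:
N, M = 1_000_000, 70          # dataset: instances x features
n, m = 1_000, 7               # fixed subset size -> per-subset CHC cost K is constant
r = 10                        # rounds
runs_per_round = (N // n) * (M // m)
print(runs_per_round)         # 10_000 constant-cost selection runs per round
print(r * runs_per_round)     # 100_000 in total: grows linearly with N (and M)
```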


##Parallel implementation

the master/slave architecture
master — partitions the dataset and sends the subsets to each slave
slave — performs the selection algorithm and returns the selected instances and features to the master
master — stores the votes for each kept instance and feature


communication between the different tasks occurs only twice:

  1. Before each slave initiates its selection process, it must receive the subset of data to perform that process. This amount of information is always small because the method is based on each slave taking care of only a small part of the whole dataset. Furthermore, if the slaves can access the disk, they can read the necessary data directly from it.
  2. Once the selection process is finished, the slaves send the selection performed to the master. This selection consists of a list of the selected instances and features, which is a small sequence of integers.

the same pattern holds when searching for the best value for the votes thresholds (a sketch of the master/slave loop follows this list):

  1. Before each slave initiates the evaluation of a certain pair of thresholds, it must receive the subset of data to perform that task. This amount of information is always small because the method is based on each slave taking care of only a small part of the whole dataset.
  2. Once the evaluation process is finished, the slaves send the evaluation performed to the master. The evaluation is the error obtained when the corresponding pair of thresholds is used, which is a real number.
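
A minimal single-round sketch of the master/slave scheme using a process pool; `select` is a stand-in for the per-subset selector, and the traffic per job is exactly what the notes describe (one small subset in, two short index lists out):

```python
import numpy as np
from multiprocessing import Pool

def select(Xs, ys):
    """Stand-in per-subset selector; in SSIFSM this would be the CHC run."""
    return np.ones(len(ys), dtype=bool), np.ones(Xs.shape[1], dtype=bool)

def slave(job):
    """A worker selects on one small subset and sends back only index lists."""
    rows, cols, Xs, ys = job
    keep_i, keep_f = select(Xs, ys)
    return rows[keep_i], cols[keep_f]

def master(X, y, n=1000, m=7, workers=4, seed=0):
    rng = np.random.default_rng(seed)
    N, M = X.shape
    inst_votes = np.zeros(N, dtype=int)
    feat_votes = np.zeros(M, dtype=int)
    jobs = [(rows, cols, X[np.ix_(rows, cols)], y[rows])
            for rows in np.array_split(rng.permutation(N), max(1, N // n))
            for cols in np.array_split(rng.permutation(M), max(1, M // m))]
    with Pool(workers) as pool:                      # master farms subsets out ...
        for rows, cols in pool.imap_unordered(slave, jobs):
            inst_votes[rows] += 1                    # ... and stores the votes
            feat_votes[cols] += 1
    return inst_votes, feat_votes

if __name__ == "__main__":                           # required for multiprocessing
    rng = np.random.default_rng(1)
    X, y = rng.random((5000, 21)), rng.integers(0, 2, 5000)
    iv, fv = master(X, y)
    print(iv[:5], fv)
```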

#Experiments

50 problems


the UCI Machine Learning Repository

the reduction and accuracy
a 10-fold cross-validation method
a 1-nearest neighbor (1-NN) classifier

from small to medium sizes


##Algorithms for the comparison

  1. Nearest neighbor error using all instances and features is chosen as a baseline measure (1-NN). Any method of performing selection of features or instances must at least match the performance of the 1-NN algorithm or improve its accuracy if possible.
  2. As stated, Cano et al. [10] performed a comprehensive comparison of the performances of different evolutionary algorithms for instance selection. They compared a generational genetic algorithm, a steady-state genetic algorithm, a CHC genetic algorithm and a population-based incremental learning algorithm. Among these methods, CHC achieved the best overall performance (ISCHC). Therefore, we have included the CHC algorithm in our comparison. [10] J.R. Cano, F. Herrera, M. Lozano, Using evolutionary algorithms as instance selection for data reduction in KDD: an experimental study, IEEE Transactions on Evolutionary Computation 7 (2003) 561–575.
  3. Feature selection using a genetic algorithm (FSCHC). Following the same idea used for instance selection, we used a CHC algorithm for feature selection with the same characteristics of the algorithm used for instance selection.
  4. Instance and feature selection using a genetic algorithm (IS + FSCHC). Combining the two previous methods, we also used a CHC algorithm that simultaneously evolved the instances and features.
  5. Intelligent Multiobjective Evolutionary Algorithm (IMOEA) [13] method. This method is a multi-objective evolutionary algorithm, which considers both instance and feature selection. The algorithm has two objectives, maximization of training accuracy and minimization of the number of instances and features selected. The multi-objective algorithm used is based on Pareto dominance because the approach is common in multi-objective algorithms [65]. The fitness of each individual is the difference between the individuals it dominates and the individuals that dominate it. The algorithm also includes a new crossover operator called intelligent crossover, which incorporates the systematic reasoning ability of orthogonal experimental design [43] to estimate the contribution of each gene to the fitness of the individuals.
  6. The major aim of our method is scalability. However, if scalability can be achieved with a simple random sampling method, it may be argued that our method is superfluous. Thus, our last method for comparison is a CHC algorithm for instance and feature selection, which uses a random sampling of instances (SAMPLING). The method is applied using a random 10% of all the instances in the dataset.

The source code used for all methods is in C and is licensed under the GNU General Public License.

Experiments ran on a cluster of 32 blades (256 cores in total) connected by a 1 Gb network; each blade is a biprocessor DELL PowerEdge M600 with four cores per processor at 2.5 GHz and 16 GB of memory.


##Statistical tests

Friedman test — compares the average ranks of $k$ algorithms
Iman–Davenport test — based on the Friedman $\chi^2_F$ statistic — more powerful than the Friedman test

$F_F = \dfrac{(N - 1)\,\chi^2_F}{N(k - 1) - \chi^2_F}$
following an $F$ distribution with $k - 1$ and $(k - 1)(N - 1)$ degrees of freedom

pairwise comparisons — the Wilcoxon signed-ranks test — stronger; with many comparisons the family-wise error must be controlled

The test statistic for comparing the $i$th and $j$th classifier, with $T = \min(R^+, R^-)$ over the signed ranks of the per-dataset differences:

$z = \dfrac{T - \frac{1}{4}N(N + 1)}{\sqrt{\frac{1}{24}N(N + 1)(2N + 1)}}$

$z$ — used to find the corresponding probability from the table of the normal distribution

The Bonferroni–Dunn test is replaced by Holm's procedure — more powerful than Bonferroni–Dunn's
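
A sketch of this testing pipeline with SciPy; the score matrix is a random placeholder, and the Iman–Davenport correction is the standard one given above:

```python
import numpy as np
from scipy import stats

def iman_davenport(scores):
    """Omnibus comparison of k algorithms over N datasets:
    Friedman chi^2 corrected into an F statistic (more powerful)."""
    N, k = scores.shape
    chi2 = stats.friedmanchisquare(*scores.T).statistic
    ff = (N - 1) * chi2 / (N * (k - 1) - chi2)
    return ff, stats.f.sf(ff, k - 1, (k - 1) * (N - 1))   # statistic, p-value

def holm(pvals, alpha=0.05):
    """Holm's step-down procedure over a family of pairwise p-values."""
    order = np.argsort(pvals)
    reject = np.zeros(len(pvals), dtype=bool)
    for rank, i in enumerate(order):
        if pvals[i] > alpha / (len(pvals) - rank):
            break                       # once one hypothesis survives, stop
        reject[i] = True
    return reject

# toy usage: 50 datasets x 4 algorithms (random placeholder scores)
scores = np.random.default_rng(0).random((50, 4))
print(iman_davenport(scores))
pair = stats.wilcoxon(scores[:, 0], scores[:, 1])   # one pairwise comparison
print(pair.pvalue, holm(np.array([0.001, 0.04, 0.03])))
```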


##Evaluation measures

accuracy — the percentage of instances classified correctly
reduction — the percentage of total data removed during the evolution

class-imbalanced problems


  1. true positives (TPs)
  2. false positives (FPs)
  3. true negatives (TNs)
  4. false negatives (FNs)

  • the sensitivity: $Sn = \frac{TP}{TP + FN}$
  • the specificity: $Sp = \frac{TN}{TN + FP}$
  • the G-mean measure: $G\text{-}mean = \sqrt{Sp \cdot Sn}$

the reduction — $1 - \frac{n}{N} \cdot \frac{m}{M}$
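
Both measures as small helpers, a direct transcription of the formulas above:

```python
import math

def g_mean(tp, fp, tn, fn):
    """Geometric mean of sensitivity and specificity; unlike raw accuracy
    it stays informative on class-imbalanced problems."""
    sn = tp / (tp + fn)                  # sensitivity
    sp = tn / (tn + fp)                  # specificity
    return math.sqrt(sn * sp)

def reduction(n_kept, N, m_kept, M):
    """Fraction of the data matrix removed: 1 - (n/N)(m/M)."""
    return 1 - (n_kept / N) * (m_kept / M)

print(g_mean(tp=80, fp=10, tn=890, fn=20))            # ~0.889
print(reduction(n_kept=100, N=1000, m_kept=5, M=20))  # 0.975
```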


##Experimental results

regarding the size of the subsets used

  1. 1000 instances and seven features — (1000,7)
  2. 500 instances and nine features — (500,9)
  3. 100 instances and 13 features — (100,13)

cache — 300–400 MB

memory thrashing


no significant differences among the three configurations for both accuracy and reduction

execution time — remarkable differences


The Iman–Davenport test with a p-value of 0.0000 for both accuracy and reduction — significant differences

Holm's procedure — applied for the post-hoc pairwise comparisons

the $\kappa$-error relative movement diagrams — using the reduction difference instead of the $\kappa$ difference value.
These diagrams use an arrow to represent the results of two methods applied to the same dataset —
a convenient way of summarizing the results
arrows pointing up-right
arrows pointing down-left

Regarding time — an approximately linear behavior

Holm test — (15 rounds)



###Running time

the philosophy of divide-and-conquer
bookkeeping


the wall-clock time

significantly faster

the speedup of SSIFSM with respect to the standard IS + FSCHC, for the parallel and sequential implementations respectively

three control experiments:

  1. a random selection method — [ 5 % , 15 % ] [5\%,15\%] [5%,15%] and [ 15 % , 25 % ] [15\%,25\%] [15%,25%] (instance and features)
  2. IB3 [1] algorithm for instance selection and ReliefF [55] method for feature selection
  3. IB3 [1] algorithm for feature selection and ReliefF [55] method for instance selection

the performance in terms of reduction was similar, but with a very significant worsening of the accuracy


###Scalability to very large datasets

problems with many instances, many features and both many instances and features

the largest set containing 50 million instances and 800 features

a large imbalance ratio

G-mean



#Conclusions

simultaneous instance and feature selection

a bookkeeping mechanism

a voting method

a CHC algorithm

class-imbalanced data

scaled up to problems of very large size

scalability:

  1. instance and feature selection is always performed over small datasets
  2. a bookkeeping approach
  3. only small subsets must be kept in memory

decision trees
support vector machines
