Data Mining - 5. Association Analysis

2023-12-10 11:01


6. Association Analysis

Given a set of records, each of which contains some number of items from a given collection, produce dependency rules that predict the occurrence of an item based on the occurrences of other items.
Application areas: marketing and sales promotion, content-based recommendation, customer loyalty programs.

Association analysis was initially used for market basket analysis, to find how items purchased by customers are related. It was later extended to more complex data structures such as sequential patterns and subgraph patterns.

6.1 Simple Approach: Pearson’s correlation coefficient

Pearson's correlation coefficient in Association Analysis

Note: correlation does not imply causation.
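As a quick illustration (not from the lecture; the transaction data below is made up), Pearson's correlation can be computed between two binary item columns of a transaction table:

```python
import numpy as np

# Hypothetical transaction table: one row per transaction,
# one binary indicator per item (1 = item present).
tea    = np.array([1, 0, 1, 0, 0, 1, 0, 0])
coffee = np.array([1, 1, 0, 1, 1, 0, 1, 1])

# Pearson's correlation coefficient between the two item indicators;
# a negative value suggests tea buyers are less likely to buy coffee.
r = np.corrcoef(tea, coffee)[0, 1]
print(r)
```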

6.2 Definitions

6.2.1 Frequent Itemset

A frequent itemset is an itemset whose support is greater than or equal to the minsup threshold.

6.2.2 Association Rule

An association rule is an implication of the form X → Y, where X and Y are disjoint itemsets.

6.2.3 Evaluation Metrics

The standard evaluation metrics for a rule X → Y are its support and its confidence.
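The slide with the formal definitions is not reproduced here; the standard definitions, with σ(·) the support count and N the total number of transactions, are:

support(X → Y) = σ(X ∪ Y) / N (fraction of transactions that contain both X and Y)
confidence(X → Y) = σ(X ∪ Y) / σ(X) (how often Y appears in transactions that contain X)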

6.3 Association Rule Mining Task

Given a set of transactions T, the goal of association rule mining is to find all rules having
– support ≥ minsup threshold
– confidence ≥ minconf threshold
minsup and minconf are provided by the user
Brute-force approach:
Step 1: List all possible association rules
Step 2: Compute the support and confidence for each rule
Step 3: Remove rules that fail the minsup and minconf thresholds

But this is computationally prohibitive due to the large number of candidate rules!
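For reference, the number of candidate rules grows exponentially: with d items there are R = 3^d − 2^(d+1) + 1 possible rules (each item can go to the antecedent, the consequent, or neither, minus the combinations with an empty side). Already for d = 6 this gives 602 rules.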


6.4 Apriori Algorithm

Two-step approach:
Step 1: Frequent Itemset Generation (generate all itemsets whose support ≥ minsup)
Step 2: Rule Generation (generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset)

However, frequent itemset generation is still computationally expensive… Given d items, there are 2^d candidate itemsets!

Anti-monotonicity of support (the Apriori principle): if an itemset is frequent, then all of its subsets are also frequent; equivalently, the support of an itemset never exceeds the support of any of its subsets. This allows large parts of the candidate space to be pruned.

Steps (a minimal sketch in Python follows this list):

  1. Start at k=1
  2. Generate frequent itemsets of length k=1
  3. Repeat until no new frequent itemsets are identified
    1. Generate length (k+1) candidate itemsets from length k frequent itemsets; increase k
    2. Prune candidate itemsets that cannot be frequent because they contain subsets of length k that are infrequent (Apriori Principle)
    3. Count the support of each remaining candidate by scanning the DB
    4. Eliminate candidates that are infrequent, leaving only those that are frequent
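As referenced above, here is a minimal sketch of these steps (not the lecture's implementation; the item names and the absolute minsup count are illustrative):

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Return all frequent itemsets (frozensets) with support count >= minsup.
    transactions: list of sets of items; minsup: absolute support count."""
    # k = 1: count individual items with one pass over the database
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= minsup}
    all_frequent = dict(frequent)

    k = 1
    while frequent:
        # Generate length-(k+1) candidates from length-k frequent itemsets
        items = list(frequent)
        candidates = {items[i] | items[j]
                      for i in range(len(items))
                      for j in range(i + 1, len(items))
                      if len(items[i] | items[j]) == k + 1}
        # Prune candidates with an infrequent k-subset (Apriori principle)
        candidates = {c for c in candidates
                      if all(frozenset(sub) in frequent for sub in combinations(c, k))}
        # Count support of the surviving candidates by scanning the database
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        # Keep only the frequent candidates and continue with the next level
        frequent = {c: n for c, n in counts.items() if n >= minsup}
        all_frequent.update(frequent)
        k += 1
    return all_frequent

# Toy usage with an absolute support threshold of 2 transactions
T = [{"bread", "milk"}, {"bread", "diapers", "beer"},
     {"milk", "diapers", "beer"}, {"bread", "milk", "diapers"}]
print(apriori(T, minsup=2))
```

Candidate generation here simply joins pairs of frequent k-itemsets and keeps unions of size k+1; real implementations join itemsets that share a (k−1)-item prefix, which avoids generating the same candidate repeatedly.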

Illustrating the Apriori Principle

From frequent itemsets to rules: each frequent itemset L with at least two items is split into every non-empty antecedent A and consequent L − A, and only rules with confidence ≥ minconf are kept.

Challenge: Combinatorial Explosion

Rule Generation

Rule Generation for Apriori Algorithm
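What makes rule generation efficient (the slide content is omitted above): for rules derived from the same frequent itemset, confidence is anti-monotone in the size of the consequent, e.g.

conf(ABC → D) ≥ conf(AB → CD) ≥ conf(A → BCD)

because all these rules share the numerator σ(ABCD), while the support of the antecedent in the denominator can only grow as the antecedent shrinks. So if a rule fails minconf, every rule obtained by moving further items from its antecedent into its consequent can be pruned.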

Complexity of the Apriori algorithm: it is affected by the choice of the minsup threshold, the number of items (dimensionality), the number of transactions, and the average transaction width.

6.5 FP-growth Algorithm

FP-growth is usually faster than Apriori and requires at most two passes over the database.
It uses a compressed representation of the database, the FP-tree.
Once the FP-tree has been constructed, a recursive divide-and-conquer approach mines the frequent itemsets from it.
FP-Tree Construction
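A minimal sketch of the construction (my assumptions: support ties are broken alphabetically, and the header table with node-links used during mining is omitted):

```python
class FPNode:
    """A node of the FP-tree: an item, a count, a parent link, and children."""
    def __init__(self, item, parent=None):
        self.item = item
        self.count = 0
        self.parent = parent
        self.children = {}

def build_fp_tree(transactions, minsup):
    """Pass 1: count item supports. Pass 2: insert each transaction with its
    frequent items sorted by decreasing support, sharing common prefixes."""
    counts = {}
    for t in transactions:                        # pass 1 over the database
        for item in t:
            counts[item] = counts.get(item, 0) + 1
    frequent = {i: c for i, c in counts.items() if c >= minsup}

    root = FPNode(None)
    for t in transactions:                        # pass 2 over the database
        items = sorted((i for i in t if i in frequent),
                       key=lambda i: (-frequent[i], i))  # ties broken alphabetically
        node = root
        for item in items:
            if item not in node.children:
                node.children[item] = FPNode(item, parent=node)
            node = node.children[item]
            node.count += 1
    return root
```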

FP-Growth (Summary)

6.6 Interestingness Measures

Interestingness measures can be used to prune or rank the derived rules.
In the original formulation of association rules, support and confidence were the only interestingness measures used; various other measures have since been proposed.

Drawback of confidence: it ignores the support of the consequent. A rule X → Y can have high confidence simply because Y is frequent overall, even when X and Y are negatively associated (as in the tea → coffee scenario below).

6.6.1 Correlation

Correlation takes into account all data at once.
In our scenario: corr(tea,coffee) = -0.25
i.e., the correlation is negative
Interpretation: people who drink tea are less likely to drink coffee
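The 2×2 counts behind corr(tea, coffee) = -0.25 are not given in the text, but for two binary items the correlation is the φ (phi) coefficient of their contingency table. A small helper (any concrete counts plugged in would be hypothetical):

```python
import math

def phi_coefficient(n11, n10, n01, n00):
    """Correlation (phi coefficient) of two binary items from a 2x2 contingency
    table: n11 = both present, n10 = only X, n01 = only Y, n00 = neither."""
    n1_, n0_ = n11 + n10, n01 + n00   # row totals: X present / X absent
    n_1, n_0 = n11 + n01, n10 + n00   # column totals: Y present / Y absent
    return (n11 * n00 - n10 * n01) / math.sqrt(n1_ * n0_ * n_1 * n_0)
```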

6.6.2 Lift

Lift compares the confidence of a rule with the support of its consequent, i.e. with the confidence expected if antecedent and consequent were independent:

lift(X → Y) = confidence(X → Y) / support(Y) = P(X, Y) / (P(X) · P(Y))

A lift of 1 indicates independence, lift > 1 a positive association, and lift < 1 a negative association.

Lift and correlation are symmetric [lift(tea → coffee) = lift(coffee → tea)], whereas confidence is asymmetric.
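A small numeric illustration (the relative supports below are made up, not the lecture's numbers) showing that lift is symmetric while confidence is not:

```python
def lift(s_xy, s_x, s_y):
    """lift(X -> Y) = conf(X -> Y) / s(Y) = s(X,Y) / (s(X) * s(Y))."""
    return s_xy / (s_x * s_y)

def confidence(s_xy, s_x):
    """conf(X -> Y) = s(X,Y) / s(X)."""
    return s_xy / s_x

# Hypothetical relative supports
s_tea, s_coffee, s_both = 0.2, 0.8, 0.15
print(lift(s_both, s_tea, s_coffee))   # 0.9375 -> slight negative association
print(lift(s_both, s_coffee, s_tea))   # 0.9375 -> same value: lift is symmetric
print(confidence(s_both, s_tea))       # 0.75
print(confidence(s_both, s_coffee))    # 0.1875 -> confidence is asymmetric
```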

6.6.3 Others

6.7 Handling Continuous and Categorical Attributes

6.7.1 Handling Categorical Attributes

Transform categorical attributes into asymmetric binary variables: introduce a new "item" for each distinct attribute-value pair (one-hot encoding; see the sketch after the issues below).
Potential Issues
(1) Many attribute values
Many of the attribute values may have very low support.
Potential solution: aggregate the low-support attribute values into an "other" bin.
(2) Highly skewed attribute values
Example: 95% of the visitors have Buy = No, so most of the items will be associated with the (Buy=No) item.
Potential solution: drop the highly frequent items.
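A quick sketch of the transformation with pandas (the attribute names and values are made-up example data):

```python
import pandas as pd

df = pd.DataFrame({
    "Browser": ["Chrome", "Firefox", "Chrome"],
    "Buy":     ["No", "No", "Yes"],
})

# One asymmetric binary "item" per attribute-value pair, e.g. "Browser=Chrome"
items = pd.get_dummies(df, prefix_sep="=")
print(items.columns.tolist())
# ['Browser=Chrome', 'Browser=Firefox', 'Buy=No', 'Buy=Yes']
```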

6.7.2 Handling Continuous Attributes

Transform continuous attributes into binary variables using discretization, e.g. equal-width binning or equal-frequency binning.
Issue: the size of the intervals affects support and confidence. Intervals that are too small may not have enough support, while intervals that are too large may not yield enough confidence.
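A sketch of both binning strategies with pandas (the ages are made-up example data):

```python
import pandas as pd

age = pd.Series([23, 31, 35, 42, 47, 53, 60, 68], name="Age")

# Equal-width binning: intervals of equal length
equal_width = pd.cut(age, bins=3)

# Equal-frequency binning: roughly the same number of records per interval
equal_freq = pd.qcut(age, q=3)

# Each resulting interval then becomes one binary "item" for rule mining
items = pd.get_dummies(equal_width, prefix="Age", prefix_sep="=")
```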

6.8 Effect of Support Distribution

Many real data sets have a skewed support distribution
How to set the appropriate minsup threshold?
If minsup is set too high, we could miss itemsets involving interesting rare items (e.g., expensive products)
If minsup is set too low, it is computationally expensive and the number of itemsets is very large
Using a single minimum support threshold may not be effective
Multiple minimum support: assign each item its own minimum support threshold (e.g., as a function of its frequency); the minimum support of an itemset is then commonly defined as the minimum of the thresholds of the items it contains, so that rules about rare but interesting items are not lost.

6.9 Association Rules with Temporal Components

Association Rules with Temporal Components

6.10 Subgroup Discovery

Association rule mining: find all patterns in the data.
Classification: identify the best patterns that can predict a target variable.
Subgroup discovery: find all patterns that can explain a target variable.
Subgroup discovery identifies subgroups (subsets) of the data with specific properties and characteristics. The goal is to find subgroups that are related to a property or behaviour of interest, in order to understand the data more deeply, make predictions, or take action; in some cases the discovered subgroups can be used to generate new features that are then used for classification.
Subgroup discovery aims to find subgroups within the data, whereas classification assigns data to known classes; subgroup discovery is typically more exploratory, classification more predictive.
We may have strong predictor variables, but we are also interested in the weaker ones.

Algorithms
Early algorithms: learn an unpruned decision tree, extract rules, compute measures for the rules, then rate and rank them.
Newer algorithms: based on association rule mining or on evolutionary algorithms.

Rating Rules
Goals: rules should cover many examples and be accurate.
Rules with both high coverage and high accuracy are interesting.

Subgroup Discovery – Rating Rules

Subgroup Discovery – Metrics

WRAcc (Weighted Relative Accuracy) trades coverage against accuracy: WRAcc(cond → class) = P(cond) · (P(class | cond) − P(class)). The first factor rewards rules that cover many examples; the second rewards rules whose accuracy exceeds the default accuracy of the class.
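A small helper implementing this definition (the counts in the example are hypothetical):

```python
def wracc(n_cond_and_class, n_cond, n_class, n_total):
    """Weighted Relative Accuracy of a rule cond -> class:
    WRAcc = P(cond) * (P(class | cond) - P(class))."""
    p_cond = n_cond / n_total
    p_class = n_class / n_total
    p_class_given_cond = n_cond_and_class / n_cond
    return p_cond * (p_class_given_cond - p_class)

# Rule covers 40 of 200 examples, 30 of them positive; 80 of 200 are positive overall
print(wracc(30, 40, 80, 200))   # 0.2 * (0.75 - 0.4) = 0.07
```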

Subgroup Discovery – Summary

6.11 Summary

Association Analysis: discovering patterns in data; patterns are described by rules.
Apriori & FP-Growth: find rules with minimum support (i.e., number of transactions) and minimum confidence (i.e., strength of the implication).
Subgroup Discovery: learn rules for a particular target variable; create a comprehensive model of a class.
