Implementing the kNN Algorithm with MapReduce

2024-06-20 18:18

This article describes how to implement the k-nearest-neighbours (kNN) algorithm with MapReduce; hopefully it offers a useful reference for developers solving similar problems.

Algorithm workflow

(1) First, the training set is distributed to every map node as a shared file via the DistributedCache.

(2) Each map task runs a Mapper of type <LongWritable, Text, LongWritable, ListWritable<DoubleWritable>>. The LongWritable input key is the byte offset of the line in the predict file, which is guaranteed to be unique; the ListWritable output value holds the labels of the k nearest training instances.

(3) Each reduce task then computes the predicted class of every test instance by majority vote over those labels.
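Taken together, steps (1)–(3) amount to the following per-record logic, sketched here in plain Java with no Hadoop dependency (the class and method names are illustrative, not from the original code): keep the k training points closest to a query, then majority-vote their labels.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class KnnSketch {
    // Return the majority label among the k training points nearest to the query.
    public static double classify(double[][] train, double[] labels, double[] query, int k) {
        Integer[] idx = new Integer[train.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Sort training indices by Euclidean distance to the query (the mapper's job).
        Arrays.sort(idx, (a, b) -> Double.compare(dist(train[a], query), dist(train[b], query)));
        // Majority vote over the k nearest labels (the reducer's job).
        Map<Double, Integer> votes = new HashMap<>();
        for (int i = 0; i < k; i++) votes.merge(labels[idx[i]], 1, Integer::sum);
        return votes.entrySet().stream().max(Map.Entry.comparingByValue()).get().getKey();
    }

    // Plain Euclidean distance, the same metric the Distance class below uses.
    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    public static void main(String[] args) {
        double[][] train = {{1.0, 2.0}, {1.1, 2.1}, {8.0, 9.0}};
        double[] labels = {1, 1, 2};
        System.out.println(classify(train, labels, new double[]{1.05, 2.05}, 2)); // prints 1.0
    }
}
```

The MapReduce version below distributes the same computation: the mapper finds the k nearest labels for each line of the predict file, and the reducer performs the vote.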

package knn;

public class Distance {

    /**
     * Euclidean distance between two vectors of equal length.
     */
    public static double EuclideanDistance(double[] a, double[] b)
            throws Exception {
        if (a.length != b.length)
            throw new Exception("size not compatible!");
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            sum += Math.pow(a[i] - b[i], 2);
        }
        return Math.sqrt(sum);
    }
}

package knn;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Map.Entry;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

/**
 * KNearestNeighbour classifier. Each instance in the training set has the
 * form "a1 a2 ... an l1", where l1 is the label; each instance in the predict
 * set has the form "a1 a2 ... an -1", where -1 marks the label we want to
 * determine. The algorithm assumes the training set is small enough to load
 * into memory, while the predict set may be large. It also assumes all test
 * instances are in a single file, so the byte offset of each line is unique
 * per instance.
 */
public class KNearestNeighbour {
    public static class KNNMap
            extends
            Mapper<LongWritable, Text, LongWritable, ListWritable<DoubleWritable>> {
        private int k;
        private ArrayList<Instance> trainSet;

        @Override
        protected void setup(Context context) throws IOException,
                InterruptedException {
            k = context.getConfiguration().getInt("k", 1);
            trainSet = new ArrayList<Instance>();

            // Load every cached training file into memory.
            Path[] trainFiles = DistributedCache.getLocalCacheFiles(context
                    .getConfiguration());
            for (int i = 0; i < trainFiles.length; i++) {
                BufferedReader br = new BufferedReader(new FileReader(
                        trainFiles[i].toString()));
                String line;
                while ((line = br.readLine()) != null) {
                    trainSet.add(new Instance(line));
                }
                br.close();
            }
        }

        /**
         * Find the labels of the k nearest training instances, put them in a
         * ListWritable, and emit <lineOffset, labelList>.
         */
        @Override
        public void map(LongWritable textIndex, Text textLine, Context context)
                throws IOException, InterruptedException {
            // distance holds the k smallest distances seen so far;
            // trainLabel holds the corresponding labels.
            ArrayList<Double> distance = new ArrayList<Double>(k);
            ArrayList<DoubleWritable> trainLabel = new ArrayList<DoubleWritable>(k);
            for (int i = 0; i < k; i++) {
                distance.add(Double.MAX_VALUE);
                trainLabel.add(new DoubleWritable(-1.0));
            }
            ListWritable<DoubleWritable> labels = new ListWritable<DoubleWritable>(
                    DoubleWritable.class);
            Instance testInstance = new Instance(textLine.toString());
            for (int i = 0; i < trainSet.size(); i++) {
                try {
                    double dis = Distance.EuclideanDistance(trainSet.get(i)
                            .getAtrributeValue(), testInstance.getAtrributeValue());
                    // Replace the current farthest candidate if this one is closer.
                    int index = indexOfMax(distance);
                    if (dis < distance.get(index)) {
                        distance.remove(index);
                        trainLabel.remove(index);
                        distance.add(dis);
                        trainLabel.add(new DoubleWritable(trainSet.get(i).getLable()));
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
            labels.setList(trainLabel);
            context.write(textIndex, labels);
        }

        /**
         * Return the index of the largest value in the list.
         *
         * @param array the list to scan
         * @return index of the maximum element, or -1 if the list is empty
         */
        public int indexOfMax(ArrayList<Double> array) {
            int index = -1;
            double max = -Double.MAX_VALUE;
            for (int i = 0; i < array.size(); i++) {
                if (array.get(i) > max) {
                    max = array.get(i);
                    index = i;
                }
            }
            return index;
        }
    }


    public static class KNNReduce
            extends
            Reducer<LongWritable, ListWritable<DoubleWritable>, NullWritable, DoubleWritable> {

        @Override
        public void reduce(LongWritable index,
                Iterable<ListWritable<DoubleWritable>> kLabels, Context context)
                throws IOException, InterruptedException {
            // Each offset maps to exactly one list, because a line offset is
            // unique to one test instance; take the first (and only) value.
            DoubleWritable predictedLabel = new DoubleWritable();
            for (ListWritable<DoubleWritable> val : kLabels) {
                try {
                    predictedLabel = valueOfMostFrequent(val);
                    break;
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
            context.write(NullWritable.get(), predictedLabel);
        }

        /**
         * Return the most frequent label in the list (majority vote).
         */
        public DoubleWritable valueOfMostFrequent(
                ListWritable<DoubleWritable> list) throws Exception {
            if (list.isEmpty())
                throw new Exception("list is empty!");
            // Count the occurrences of each label.
            HashMap<DoubleWritable, Integer> counts = new HashMap<DoubleWritable, Integer>();
            for (int i = 0; i < list.size(); i++) {
                Integer old = counts.get(list.get(i));
                counts.put(list.get(i), old == null ? 1 : old + 1);
            }
            // Find the label with the maximum count.
            DoubleWritable value = new DoubleWritable();
            int maxCount = Integer.MIN_VALUE;
            for (Map.Entry<DoubleWritable, Integer> entry : counts.entrySet()) {
                if (entry.getValue() > maxCount) {
                    maxCount = entry.getValue();
                    value = entry.getKey();
                }
            }
            return value;
        }
    }


    public static void main(String[] args) throws IOException,
            InterruptedException, ClassNotFoundException {
        Job kNNJob = new Job();
        kNNJob.setJobName("kNNJob");
        kNNJob.setJarByClass(KNearestNeighbour.class);
        // args[2] is the training file; ship it to every mapper via the cache.
        DistributedCache.addCacheFile(URI.create(args[2]),
                kNNJob.getConfiguration());
        // args[3] is k, the number of neighbours to consider.
        kNNJob.getConfiguration().setInt("k", Integer.parseInt(args[3]));

        kNNJob.setMapperClass(KNNMap.class);
        kNNJob.setMapOutputKeyClass(LongWritable.class);
        kNNJob.setMapOutputValueClass(ListWritable.class);

        kNNJob.setReducerClass(KNNReduce.class);
        kNNJob.setOutputKeyClass(NullWritable.class);
        kNNJob.setOutputValueClass(DoubleWritable.class);

        kNNJob.setInputFormatClass(TextInputFormat.class);
        kNNJob.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(kNNJob, new Path(args[0]));
        FileOutputFormat.setOutputPath(kNNJob, new Path(args[1]));

        kNNJob.waitForCompletion(true);
        System.out.println("finished!");
    }
}

package knn;

public class Instance {
    private double[] attributeValue;
    private double lable;

    /**
     * Parse a line of the form "a1 a2 ... an label" (space-separated).
     *
     * @param line one instance per line
     */
    public Instance(String line) {
        String[] value = line.split(" ");
        attributeValue = new double[value.length - 1];
        for (int i = 0; i < attributeValue.length; i++) {
            attributeValue[i] = Double.parseDouble(value[i]);
        }
        lable = Double.parseDouble(value[value.length - 1]);
    }

    public double[] getAtrributeValue() {
        return attributeValue;
    }

    public double getLable() {
        return lable;
    }
}

package knn;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.Writable;

public class ListWritable<T extends Writable> implements Writable {
    private List<T> list;
    private Class<T> clazz;

    /** No-arg constructor required by Hadoop's serialization machinery. */
    public ListWritable() {
        list = null;
        clazz = null;
    }

    public ListWritable(Class<T> clazz) {
        this.clazz = clazz;
        list = new ArrayList<T>();
    }

    public void setList(List<T> list) {
        this.list = list;
    }

    public boolean isEmpty() {
        return list.isEmpty();
    }

    public int size() {
        return list.size();
    }

    public void add(T element) {
        list.add(element);
    }

    public void add(int index, T element) {
        list.add(index, element);
    }

    public T get(int index) {
        return list.get(index);
    }

    public T remove(int index) {
        return list.remove(index);
    }

    public void set(int index, T element) {
        list.set(index, element);
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // Serialize the element class name first so readFields can rebuild it.
        out.writeUTF(clazz.getName());
        out.writeInt(list.size());
        for (T element : list) {
            element.write(out);
        }
    }

    @SuppressWarnings("unchecked")
    @Override
    public void readFields(DataInput in) throws IOException {
        try {
            clazz = (Class<T>) Class.forName(in.readUTF());
        } catch (ClassNotFoundException e) {
            throw new IOException("unknown element class", e);
        }
        int count = in.readInt();
        this.list = new ArrayList<T>();
        for (int i = 0; i < count; i++) {
            try {
                T obj = clazz.newInstance();
                obj.readFields(in);
                list.add(obj);
            } catch (InstantiationException e) {
                throw new IOException(e);
            } catch (IllegalAccessException e) {
                throw new IOException(e);
            }
        }
    }
}


Training set

1.0 2.0 3.0 1
1.0 2.1 3.1 1
0.9 2.2 2.9 1
3.4 6.7 8.9 2
3.0 7.0 8.7 2
3.3 6.9 8.8 2
2.5 3.3 10.0 3
2.4 2.9 8.0 3
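With the classes above compiled and packaged (say into a hypothetical knn.jar, and the training set uploaded to HDFS), the job could be launched roughly as follows; the paths are placeholders, and the four arguments match the order main() expects.

```shell
# args[0]=predict-set input dir, args[1]=output dir,
# args[2]=training-file URI for the DistributedCache, args[3]=k
hadoop jar knn.jar knn.KNearestNeighbour \
    /user/me/knn/input /user/me/knn/output \
    hdfs:///user/me/knn/train.txt 3
```

Each output line is one predicted label; with a single reducer, keys arrive sorted by byte offset, so predictions come out in the same order as the lines of the predict file.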

That concludes this article on implementing the kNN algorithm with MapReduce; hopefully it proves helpful.



