This article covers image matching based on SIFT features with OpenCV 4.4.0 in Java. It took a painful amount of trial and error to get working; hopefully it saves other developers the same trouble. Follow along if you need it!
Preface
First off: finally getting SIFT-based image matching to work in Java with OpenCV 4.4.0 felt great. But allow me a short rant first; the actual content and code are below.

Is Java dying, or does nobody do OpenCV in it anymore? After nearly a week of searching I could not find a single working Java example; everything out there is C++ or Python. The few Java snippets that do exist target old versions, and even those would not run: because of the former patent restrictions, SIFT used to be excluded from the prebuilt binaries, and you had to fetch the opencv_contrib sources from the official repo and compile them yourself. Plain template matching could not handle my images (the target is scaled and rotated), so that was no way out either. Prebuilt contrib jars can be found online, but they are old versions, of unclear provenance, and paid; I was not about to pay for that. I braced myself for the long list of tools needed to compile from source, and then discovered that OpenCV 4.4.0 ships SIFT in the main module, no contrib build required (the SIFT patent expired in March 2020). Still no usable Java sample anywhere, so after most of a week of accumulated tinkering I wrote my own.
Installing and setting up OpenCV itself is not covered here; if you have read this far, you presumably have that part working already.
The results first
Original image
Template image (the template has been scaled and rotated)
Matching process
Matching result
Now the code
// Imports needed: org.opencv.core.*, org.opencv.features2d.*, org.opencv.calib3d.Calib3d,
// org.opencv.imgproc.Imgproc, org.opencv.imgcodecs.Imgcodecs, java.awt.image.*, java.util.*

// Both fields are referenced below but were never declared in the original
// snippet; ~0.7 is a typical value for Lowe's ratio threshold (assumed here).
private float nndrRatio = 0.7f;
private int matchesPointCount = 0;

public void matchImage(BufferedImage templateImageB, BufferedImage originalImageB) {
    Mat resT = new Mat();
    Mat resO = new Mat();
    // Acts as both detector and descriptor extractor
    SIFT sift = SIFT.create();
    Mat templateImage = getMatify(templateImageB);
    Mat originalImage = getMatify(originalImageB);
    MatOfKeyPoint templateKeyPoints = new MatOfKeyPoint();
    MatOfKeyPoint originalKeyPoints = new MatOfKeyPoint();
    // Detect the key points of both images, then compute their descriptors
    sift.detect(templateImage, templateKeyPoints);
    sift.detect(originalImage, originalKeyPoints);
    sift.compute(templateImage, templateKeyPoints, resT);
    sift.compute(originalImage, originalKeyPoints, resO);
    List<MatOfDMatch> matches = new LinkedList<>();
    DescriptorMatcher descriptorMatcher = DescriptorMatcher.create(DescriptorMatcher.FLANNBASED);
    System.out.println("Looking for the best matches");
    printPic("ptest", templateImage);
    printPic("ptesO", originalImage);
    printPic("test", resT);
    printPic("tesO", resO);
    /*
     * knnMatch finds the best matches for each descriptor in the given
     * descriptor set. With k = 2 each query descriptor gets its two closest
     * neighbours; a match is only kept when the closest distance is below a
     * fixed fraction of the second-closest distance (Lowe's ratio test).
     */
    descriptorMatcher.knnMatch(resT, resO, matches, 2);
    System.out.println("Filtering the match results");
    LinkedList<DMatch> goodMatchesList = new LinkedList<>();
    // Filter the matches by their distance ratio
    matches.forEach(match -> {
        DMatch[] dmatcharray = match.toArray();
        DMatch m1 = dmatcharray[0];
        DMatch m2 = dmatcharray[1];
        if (m1.distance <= m2.distance * nndrRatio) {
            goodMatchesList.addLast(m1);
        }
    });
    matchesPointCount = goodMatchesList.size();
    // With at least 4 good matches the template is considered present in the
    // original image; adjust this threshold as needed
    if (matchesPointCount >= 4) {
        System.out.println("Template matched in the original image!");
        List<KeyPoint> templateKeyPointList = templateKeyPoints.toList();
        List<KeyPoint> originalKeyPointList = originalKeyPoints.toList();
        LinkedList<Point> objectPoints = new LinkedList<>();
        LinkedList<Point> scenePoints = new LinkedList<>();
        goodMatchesList.forEach(goodMatch -> {
            objectPoints.addLast(templateKeyPointList.get(goodMatch.queryIdx).pt);
            scenePoints.addLast(originalKeyPointList.get(goodMatch.trainIdx).pt);
        });
        MatOfPoint2f objMatOfPoint2f = new MatOfPoint2f();
        objMatOfPoint2f.fromList(objectPoints);
        MatOfPoint2f scnMatOfPoint2f = new MatOfPoint2f();
        scnMatOfPoint2f.fromList(scenePoints);
        // findHomography estimates the transform between the matched key points
        Mat homography = Calib3d.findHomography(objMatOfPoint2f, scnMatOfPoint2f, Calib3d.RANSAC, 3);
        /*
         * A perspective transformation projects an image onto a new viewing
         * plane; it is also called a projective mapping.
         */
        Mat templateCorners = new Mat(4, 1, CvType.CV_32FC2);
        Mat templateTransformResult = new Mat(4, 1, CvType.CV_32FC2);
        templateCorners.put(0, 0, new double[]{0, 0});
        templateCorners.put(1, 0, new double[]{templateImage.cols(), 0});
        templateCorners.put(2, 0, new double[]{templateImage.cols(), templateImage.rows()});
        templateCorners.put(3, 0, new double[]{0, templateImage.rows()});
        // perspectiveTransform projects the template corners into the original image
        Core.perspectiveTransform(templateCorners, templateTransformResult, homography);
        // The four corners of the matched region; after rotation they are no
        // longer in the usual A-B-C-D order
        double[] pointA = templateTransformResult.get(0, 0);
        double[] pointB = templateTransformResult.get(1, 0);
        double[] pointC = templateTransformResult.get(2, 0);
        double[] pointD = templateTransformResult.get(3, 0);
        // Range of the submatrix to extract:
        // int rowStart = (int) pointA[1];
        // int rowEnd = (int) pointC[1];
        // int colStart = (int) pointD[0];
        // int colEnd = (int) pointB[0];
        // rowStart/rowEnd/colStart/colEnd apparently must go top-left to
        // bottom-right; there is no real need to cut the template region out
        // of the original image anyway
        // Mat subMat = originalImage.submat(rowStart, rowEnd, colStart, colEnd);
        // printPic("yppt", subMat);
        // Frame the matched region
        Imgproc.rectangle(originalImage, new Point(pointA), new Point(pointC), new Scalar(0, 255, 0));
        /*
        Core.line(originalImage, new Point(pointA), new Point(pointB), new Scalar(0, 255, 0), 4); // top    A->B
        Core.line(originalImage, new Point(pointB), new Point(pointC), new Scalar(0, 255, 0), 4); // right  B->C
        Core.line(originalImage, new Point(pointC), new Point(pointD), new Scalar(0, 255, 0), 4); // bottom C->D
        Core.line(originalImage, new Point(pointD), new Point(pointA), new Scalar(0, 255, 0), 4); // left   D->A
        */
        MatOfDMatch goodMatches = new MatOfDMatch();
        goodMatches.fromList(goodMatchesList);
        // Note: the original used Imgcodecs.IMREAD_COLOR here, which is a read
        // flag, not a Mat type; CV_8UC3 is the intended 3-channel byte type
        Mat matchOutput = new Mat(originalImage.rows() * 2, originalImage.cols() * 2, CvType.CV_8UC3);
        Features2d.drawMatches(templateImage, templateKeyPoints, originalImage, originalKeyPoints,
                goodMatches, matchOutput, new Scalar(0, 255, 0), new Scalar(255, 0, 0), new MatOfByte(), 2);
        printPic("ppgc", matchOutput);
        printPic("ytwz", originalImage);
    } else {
        System.out.println("Template not found in the original image!");
    }
    printPic("templateDescriptors", resT);
}

public void printPic(String name, Mat pre) {
    Imgcodecs.imwrite(name + ".jpg", pre);
}

/**
 * Convert a BufferedImage to a Mat.
 */
public Mat getMatify(BufferedImage im) {
    BufferedImage bufferedImage = toBufferedImageOfType(im, BufferedImage.TYPE_3BYTE_BGR);
    // Extract the pixel bytes from the BufferedImage
    byte[] pixels = ((DataBufferByte) bufferedImage.getRaster().getDataBuffer()).getData();
    Mat image = new Mat(bufferedImage.getHeight(), bufferedImage.getWidth(), CvType.CV_8UC3);
    image.put(0, 0, pixels);
    return image;
}
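The heart of the filtering step above is Lowe's ratio test: a match is trusted only when its best distance is clearly smaller than its second-best. The same logic can be sanity-checked on plain distance arrays with no OpenCV dependency at all; the class and method names below are mine, and the 0.7 threshold is only a conventional choice, not something mandated by OpenCV.

```java
import java.util.ArrayList;
import java.util.List;

public class RatioTest {
    /**
     * Lowe's ratio test on parallel arrays of the two nearest-neighbour
     * distances (as returned by knnMatch with k = 2). Returns the indices
     * of the matches that pass.
     */
    public static List<Integer> filter(double[] best, double[] secondBest, double ratio) {
        List<Integer> kept = new ArrayList<>();
        for (int i = 0; i < best.length; i++) {
            if (best[i] <= secondBest[i] * ratio) {
                kept.add(i);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        double[] best       = {10.0, 40.0, 5.0};
        double[] secondBest = {50.0, 45.0, 100.0};
        // With ratio 0.7: 10 <= 35 (keep), 40 > 31.5 (drop), 5 <= 70 (keep)
        System.out.println(filter(best, secondBest, 0.7)); // prints [0, 2]
    }
}
```

Ambiguous matches (index 1, where both neighbours are nearly the same distance) are exactly the ones the test rejects.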
The method above does feature matching with SIFT. For completeness, here is plain template matching as well; better than nothing, and worth writing down.
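As an aside, the Core.perspectiveTransform call in the SIFT code is not mysterious: per corner it just multiplies by the 3×3 homography and divides by the homogeneous coordinate. A minimal dependency-free sketch of that arithmetic (class and helper names are mine; the translation-only homography is a deliberately trivial example):

```java
public class CornerProjection {
    /** Apply a 3x3 homography (row-major, 9 elements) to a 2D point. */
    public static double[] apply(double[] h, double x, double y) {
        double w = h[6] * x + h[7] * y + h[8]; // homogeneous coordinate
        return new double[]{
            (h[0] * x + h[1] * y + h[2]) / w,
            (h[3] * x + h[4] * y + h[5]) / w
        };
    }

    public static void main(String[] args) {
        // A homography that simply translates by (100, 50)
        double[] h = {1, 0, 100,
                      0, 1, 50,
                      0, 0, 1};
        double[] p = apply(h, 0, 0);
        System.out.println(p[0] + ", " + p[1]); // prints 100.0, 50.0
    }
}
```

With a real homography from findHomography, projecting the four template corners this way gives the (generally non-axis-aligned) quadrilateral where the template sits in the original image.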
/**
 * Returns a single point for now; note the coordinates are relative to the
 * source image, not final screen coordinates.
 *
 * @param sourceB
 * @param templateB
 */
public PicPoint matchTemplate(BufferedImage sourceB, BufferedImage templateB) {
    Mat source = getMatify(sourceB);
    Mat template = getMatify(templateB);
    // Result matrix storing the match score for every candidate position
    Mat result = Mat.zeros(source.rows() - template.rows() + 1,
            source.cols() - template.cols() + 1, CvType.CV_32FC1);
    // Run the template matching
    Imgproc.matchTemplate(source, template, result, Imgproc.TM_SQDIFF_NORMED);
    // Normalize the scores to [0, 1]
    Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1);
    // minMaxLoc returns the x/y positions of the minimum and maximum scores;
    // for TM_SQDIFF_NORMED the most likely position is the minimum
    Core.MinMaxLocResult mlr = Core.minMaxLoc(result);
    Point matchLoc = mlr.minLoc;
    // Draw a green rectangle at the most likely template position in the source
    Imgproc.rectangle(source, matchLoc,
            new Point(matchLoc.x + template.width(), matchLoc.y + template.height()),
            new Scalar(0, 255, 0));
    // Write the result image out
    printPic("E:\\study\\CV\\result3.png", source);
    return new PicPoint(matchLoc);
}
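TM_SQDIFF_NORMED scores each placement by the normalized sum of squared differences, which is why the code takes minLoc rather than maxLoc. The idea is easiest to see in one dimension, without OpenCV; the class, method names, and toy signal below are all mine:

```java
public class SqDiffDemo {
    /** Normalized squared difference between template t and the window of s at offset off. */
    static double sqDiffNormed(double[] s, double[] t, int off) {
        double diff = 0, ss = 0, tt = 0;
        for (int i = 0; i < t.length; i++) {
            double a = s[off + i], b = t[i];
            diff += (a - b) * (a - b);
            ss += a * a;
            tt += b * b;
        }
        return diff / Math.sqrt(ss * tt);
    }

    /** Slide the template over the signal and return the offset with the smallest score. */
    public static int bestMatch(double[] s, double[] t) {
        int best = 0;
        double bestScore = Double.MAX_VALUE;
        for (int off = 0; off + t.length <= s.length; off++) {
            double score = sqDiffNormed(s, t, off);
            if (score < bestScore) {
                bestScore = score;
                best = off;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] source = {1, 2, 9, 8, 7, 3};
        double[] template = {9, 8, 7};
        // The template is an exact copy of the window at offset 2 (score 0 there)
        System.out.println(bestMatch(source, template)); // prints 2
    }
}
```

The 2-D case works the same way, just sliding over rows and columns; that grid of scores is exactly what the result Mat holds.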
The only class missing from the listing (PicPoint) is a plain value object. This took a week to get working, so I am recording it here for study; Java material on OpenCV is genuinely scarce. Take the code if you need it.
Supplementary: the image-conversion code
/**
 * Convert the image to the given type.
 *
 * @param original
 * @param type
 * @return
 */
public static BufferedImage toBufferedImageOfType(BufferedImage original, int type) {
    if (original == null) {
        throw new IllegalArgumentException("original == null");
    }
    // Don't convert if it already has the correct type
    if (original.getType() == type) {
        return original;
    }
    // Create a buffered image of the target type
    BufferedImage image = new BufferedImage(original.getWidth(), original.getHeight(), type);
    // Draw the image onto the new buffer
    Graphics2D g = image.createGraphics();
    try {
        g.setComposite(AlphaComposite.Src);
        g.drawImage(original, 0, 0, null);
    } finally {
        g.dispose();
    }
    return image;
}
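A quick standalone check of what that conversion actually produces: for TYPE_3BYTE_BGR the raster buffer is a flat B,G,R byte sequence, which is exactly the layout Mat.put expects for a CV_8UC3 Mat. The demo class below is mine and uses only the JDK:

```java
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class ConversionDemo {
    /** Convert any BufferedImage to the packed B,G,R byte layout that Mat.put expects. */
    public static byte[] toBgrBytes(BufferedImage original) {
        BufferedImage bgr = new BufferedImage(original.getWidth(), original.getHeight(),
                BufferedImage.TYPE_3BYTE_BGR);
        Graphics2D g = bgr.createGraphics();
        try {
            g.setComposite(AlphaComposite.Src);
            g.drawImage(original, 0, 0, null);
        } finally {
            g.dispose();
        }
        return ((DataBufferByte) bgr.getRaster().getDataBuffer()).getData();
    }

    public static void main(String[] args) {
        // A 2x1 TYPE_INT_RGB image: one red pixel, one blue pixel
        BufferedImage rgb = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
        rgb.setRGB(0, 0, 0xFF0000); // red
        rgb.setRGB(1, 0, 0x0000FF); // blue
        byte[] pixels = toBgrBytes(rgb);
        // Red pixel comes out as B=0, G=0, R=255 (255 prints as -1 as a signed byte)
        System.out.println(pixels[0] + " " + pixels[1] + " " + pixels[2]); // prints 0 0 -1
    }
}
```

This also shows why getMatify converts first: grabbing the DataBufferByte from an arbitrary BufferedImage (the commented-out line in the original code) would throw a ClassCastException for int-backed types like TYPE_INT_RGB.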
PS: this stuff really is fiddly. If anything is unclear, feel free to message me; I reply to everything I see. qaq
That wraps up this article on SIFT-feature image matching with OpenCV 4.4.0 in Java. Hopefully it is of some help to fellow developers!