Image Histogram Comparison

2024-08-22 03:28
Article tags: comparison, image, histogram


To compare histograms, we can use the compareHist() function provided by OpenCV, which returns a single number describing how well two histograms match (their similarity).
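
Before going into the theory, here is a minimal sketch of the call sequence in Python (my own example, not from the original article; the file names img1.png and img2.png are placeholders, and a fuller version appears in the application example below):

import cv2

# Minimal sketch: compute, normalize, and compare two grayscale histograms.
# The file names are placeholders for any two images on disk.
img1 = cv2.imread('img1.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('img2.png', cv2.IMREAD_GRAYSCALE)

hist1 = cv2.calcHist([img1], [0], None, [256], [0, 256])   # 256-bin grayscale histogram
hist2 = cv2.calcHist([img2], [0], None, [256], [0, 256])

cv2.normalize(hist1, hist1, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX)
cv2.normalize(hist2, hist2, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX)

# One number describing how well the two histograms match.
score = cv2.compareHist(hist1, hist2, cv2.HISTCMP_CORREL)
print(score)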

Principle

For two histograms $H_1$ and $H_2$, we first choose a metric $d(H_1, H_2)$ that expresses how well the two histograms match (i.e., their similarity).

According to the Histogram Comparison tutorial [1], OpenCV's compareHist function offers the following four metrics for measuring how well two histograms match:

  1. HISTCMP_CORREL: Correlation
  2. HISTCMP_CHISQR: Chi-Square
  3. HISTCMP_INTERSECT: Intersection
  4. HISTCMP_BHATTACHARYYA: Bhattacharyya distance

According to HistCompMethods [2], however, compareHist actually provides the following seven methods (effectively six, since Bhattacharyya and Hellinger count as one); a small numerical check of these formulas follows the list:

  1. HISTCMP_CORREL: Correlation

     $d(H_1,H_2) = \frac{\sum_I (H_1(I) - \bar{H_1})(H_2(I) - \bar{H_2})}{\sqrt{\sum_I (H_1(I) - \bar{H_1})^2 \sum_I (H_2(I) - \bar{H_2})^2}}$

     where

     $\bar{H_1} = \frac{1}{N}\sum_I H_1(I)$

     and $N$ is the total number of histogram bins.

  2. HISTCMP_CHISQR: Chi-Square

     $d(H_1,H_2) = \sum_I \frac{\left(H_1(I)-H_2(I)\right)^2}{H_1(I)}$

  3. HISTCMP_INTERSECT: Intersection

     $d(H_1,H_2) = \sum_I \min\left(H_1(I), H_2(I)\right)$

  4. HISTCMP_BHATTACHARYYA: Bhattacharyya distance. In fact, OpenCV computes the Hellinger distance, which is related to the Bhattacharyya coefficient.

     $d(H_1,H_2) = \sqrt{1 - \frac{1}{\sqrt{\bar{H_1}\bar{H_2}N^2}} \sum_I \sqrt{H_1(I) \cdot H_2(I)}}$

  5. HISTCMP_HELLINGER: Hellinger distance; a synonym for HISTCMP_BHATTACHARYYA.

  6. HISTCMP_CHISQR_ALT: Alternative Chi-Square

     $d(H_1,H_2) = 2 \times \sum_I \frac{\left(H_1(I)-H_2(I)\right)^2}{H_1(I)+H_2(I)}$

     This variant is regularly used for texture comparison.

  7. HISTCMP_KL_DIV: Kullback-Leibler divergence

     $d(H_1,H_2) = \sum_I H_1(I) \log\left(\frac{H_1(I)}{H_2(I)}\right)$
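
To make the formulas above concrete, here is a small NumPy check (my own sketch, not taken from the referenced documentation) that evaluates the correlation, chi-square, intersection, and Bhattacharyya/Hellinger expressions by hand on two tiny 4-bin histograms and compares the results with cv2.compareHist; the bin values are arbitrary illustration data:

import cv2
import numpy as np

# Two arbitrary 4-bin histograms (illustration data only); compareHist expects CV_32F.
H1 = np.array([1, 2, 3, 4], dtype=np.float32).reshape(-1, 1)
H2 = np.array([4, 3, 2, 1], dtype=np.float32).reshape(-1, 1)
N = H1.size

# Correlation
num = np.sum((H1 - H1.mean()) * (H2 - H2.mean()))
den = np.sqrt(np.sum((H1 - H1.mean()) ** 2) * np.sum((H2 - H2.mean()) ** 2))
d_correl = num / den

# Chi-Square
d_chisqr = np.sum((H1 - H2) ** 2 / H1)

# Intersection
d_intersect = np.sum(np.minimum(H1, H2))

# Bhattacharyya / Hellinger
d_bhatta = np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(H1 * H2)) / np.sqrt(H1.mean() * H2.mean() * N * N)))

print(d_correl,    cv2.compareHist(H1, H2, cv2.HISTCMP_CORREL))      # -1.0 for these reversed values
print(d_chisqr,    cv2.compareHist(H1, H2, cv2.HISTCMP_CHISQR))
print(d_intersect, cv2.compareHist(H1, H2, cv2.HISTCMP_INTERSECT))
print(d_bhatta,    cv2.compareHist(H1, H2, cv2.HISTCMP_BHATTACHARYYA))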

Declarations and Definitions in OpenCV (using version 4.10 as an example)

  • Declaration of the compareHist function in OpenCV:

      /** @brief Compares two histograms.

      The function cv::compareHist compares two dense or two sparse histograms using the specified method.

      The function returns \f$d(H_1, H_2)\f$ .

      While the function works well with 1-, 2-, 3-dimensional dense histograms, it may not be suitable
      for high-dimensional sparse histograms. In such histograms, because of aliasing and sampling
      problems, the coordinates of non-zero histogram bins can slightly shift. To compare such histograms
      or more general sparse configurations of weighted points, consider using the #EMD function.

      @param H1 First compared histogram.
      @param H2 Second compared histogram of the same size as H1 .
      @param method Comparison method, see #HistCompMethods
      */
      CV_EXPORTS_W double compareHist( InputArray H1, InputArray H2, int method );

      /** @overload */
      CV_EXPORTS double compareHist( const SparseMat& H1, const SparseMat& H2, int method );
    
  • The HistCompMethods enum in OpenCV:

      enum HistCompMethods {
          /** Correlation
          \f[d(H_1,H_2) =  \frac{\sum_I (H_1(I) - \bar{H_1}) (H_2(I) - \bar{H_2})}{\sqrt{\sum_I(H_1(I) - \bar{H_1})^2 \sum_I(H_2(I) - \bar{H_2})^2}}\f]
          where
          \f[\bar{H_k} =  \frac{1}{N} \sum _J H_k(J)\f]
          and \f$N\f$ is a total number of histogram bins. */
          HISTCMP_CORREL        = 0,
          /** Chi-Square
          \f[d(H_1,H_2) =  \sum _I  \frac{\left(H_1(I)-H_2(I)\right)^2}{H_1(I)}\f] */
          HISTCMP_CHISQR        = 1,
          /** Intersection
          \f[d(H_1,H_2) =  \sum _I  \min (H_1(I), H_2(I))\f] */
          HISTCMP_INTERSECT     = 2,
          /** Bhattacharyya distance
          (In fact, OpenCV computes Hellinger distance, which is related to Bhattacharyya coefficient.)
          \f[d(H_1,H_2) =  \sqrt{1 - \frac{1}{\sqrt{\bar{H_1} \bar{H_2} N^2}} \sum_I \sqrt{H_1(I) \cdot H_2(I)}}\f] */
          HISTCMP_BHATTACHARYYA = 3,
          HISTCMP_HELLINGER     = HISTCMP_BHATTACHARYYA, //!< Synonym for HISTCMP_BHATTACHARYYA
          /** Alternative Chi-Square
          \f[d(H_1,H_2) =  2 * \sum _I  \frac{\left(H_1(I)-H_2(I)\right)^2}{H_1(I)+H_2(I)}\f]
          This alternative formula is regularly used for texture comparison. See e.g. @cite Puzicha1997 */
          HISTCMP_CHISQR_ALT    = 4,
          /** Kullback-Leibler divergence
          \f[d(H_1,H_2) = \sum _I H_1(I) \log \left(\frac{H_1(I)}{H_2(I)}\right)\f] */
          HISTCMP_KL_DIV        = 5
      };
    
  • Definition of the compareHist function in OpenCV:

// C O M P A R E   H I S T O G R A M S

double cv::compareHist( InputArray _H1, InputArray _H2, int method )
{
    CV_INSTRUMENT_REGION();

    Mat H1 = _H1.getMat(), H2 = _H2.getMat();
    const Mat* arrays[] = {&H1, &H2, 0};
    Mat planes[2];
    NAryMatIterator it(arrays, planes);
    double result = 0;
    int j;

    CV_Assert( H1.type() == H2.type() && H1.depth() == CV_32F );

    double s1 = 0, s2 = 0, s11 = 0, s12 = 0, s22 = 0;

    CV_Assert( it.planes[0].isContinuous() && it.planes[1].isContinuous() );

    for( size_t i = 0; i < it.nplanes; i++, ++it )
    {
        const float* h1 = it.planes[0].ptr<float>();
        const float* h2 = it.planes[1].ptr<float>();
        const int len = it.planes[0].rows*it.planes[0].cols*H1.channels();
        j = 0;

        if( (method == CV_COMP_CHISQR) || (method == CV_COMP_CHISQR_ALT))
        {
            for( ; j < len; j++ )
            {
                double a = h1[j] - h2[j];
                double b = (method == CV_COMP_CHISQR) ? h1[j] : h1[j] + h2[j];
                if( fabs(b) > DBL_EPSILON )
                    result += a*a/b;
            }
        }
        else if( method == CV_COMP_CORREL )
        {
#if (CV_SIMD_64F || CV_SIMD_SCALABLE_64F)
            v_float64 v_s1 = vx_setzero_f64();
            v_float64 v_s2 = vx_setzero_f64();
            v_float64 v_s11 = vx_setzero_f64();
            v_float64 v_s12 = vx_setzero_f64();
            v_float64 v_s22 = vx_setzero_f64();
            for ( ; j <= len - VTraits<v_float32>::vlanes(); j += VTraits<v_float32>::vlanes())
            {
                v_float32 v_a = vx_load(h1 + j);
                v_float32 v_b = vx_load(h2 + j);

                // 0-1
                v_float64 v_ad = v_cvt_f64(v_a);
                v_float64 v_bd = v_cvt_f64(v_b);
                v_s12 = v_muladd(v_ad, v_bd, v_s12);
                v_s11 = v_muladd(v_ad, v_ad, v_s11);
                v_s22 = v_muladd(v_bd, v_bd, v_s22);
                v_s1 = v_add(v_s1, v_ad);
                v_s2 = v_add(v_s2, v_bd);

                // 2-3
                v_ad = v_cvt_f64_high(v_a);
                v_bd = v_cvt_f64_high(v_b);
                v_s12 = v_muladd(v_ad, v_bd, v_s12);
                v_s11 = v_muladd(v_ad, v_ad, v_s11);
                v_s22 = v_muladd(v_bd, v_bd, v_s22);
                v_s1 = v_add(v_s1, v_ad);
                v_s2 = v_add(v_s2, v_bd);
            }
            s12 += v_reduce_sum(v_s12);
            s11 += v_reduce_sum(v_s11);
            s22 += v_reduce_sum(v_s22);
            s1 += v_reduce_sum(v_s1);
            s2 += v_reduce_sum(v_s2);
#elif CV_SIMD && 0 //Disable vectorization for CV_COMP_CORREL if f64 is unsupported due to low precision
            v_float32 v_s1 = vx_setzero_f32();
            v_float32 v_s2 = vx_setzero_f32();
            v_float32 v_s11 = vx_setzero_f32();
            v_float32 v_s12 = vx_setzero_f32();
            v_float32 v_s22 = vx_setzero_f32();
            for (; j <= len - VTraits<v_float32>::vlanes(); j += VTraits<v_float32>::vlanes())
            {
                v_float32 v_a = vx_load(h1 + j);
                v_float32 v_b = vx_load(h2 + j);

                v_s12 = v_muladd(v_a, v_b, v_s12);
                v_s11 = v_muladd(v_a, v_a, v_s11);
                v_s22 = v_muladd(v_b, v_b, v_s22);
                v_s1 += v_a;
                v_s2 += v_b;
            }
            s12 += v_reduce_sum(v_s12);
            s11 += v_reduce_sum(v_s11);
            s22 += v_reduce_sum(v_s22);
            s1 += v_reduce_sum(v_s1);
            s2 += v_reduce_sum(v_s2);
#endif
            for( ; j < len; j++ )
            {
                double a = h1[j];
                double b = h2[j];

                s12 += a*b;
                s1 += a;
                s11 += a*a;
                s2 += b;
                s22 += b*b;
            }
        }
        else if( method == CV_COMP_INTERSECT )
        {
#if (CV_SIMD_64F || CV_SIMD_SCALABLE_64F)
            v_float64 v_result = vx_setzero_f64();
            for ( ; j <= len - VTraits<v_float32>::vlanes(); j += VTraits<v_float32>::vlanes())
            {
                v_float32 v_src = v_min(vx_load(h1 + j), vx_load(h2 + j));
                v_result = v_add(v_result, v_add(v_cvt_f64(v_src), v_cvt_f64_high(v_src)));
            }
            result += v_reduce_sum(v_result);
#elif CV_SIMD
            v_float32 v_result = vx_setzero_f32();
            for (; j <= len - VTraits<v_float32>::vlanes(); j += VTraits<v_float32>::vlanes())
            {
                v_float32 v_src = v_min(vx_load(h1 + j), vx_load(h2 + j));
                v_result = v_add(v_result, v_src);
            }
            result += v_reduce_sum(v_result);
#endif
            for( ; j < len; j++ )
                result += std::min(h1[j], h2[j]);
        }
        else if( method == CV_COMP_BHATTACHARYYA )
        {
#if (CV_SIMD_64F || CV_SIMD_SCALABLE_64F)
            v_float64 v_s1 = vx_setzero_f64();
            v_float64 v_s2 = vx_setzero_f64();
            v_float64 v_result = vx_setzero_f64();
            for ( ; j <= len - VTraits<v_float32>::vlanes(); j += VTraits<v_float32>::vlanes())
            {
                v_float32 v_a = vx_load(h1 + j);
                v_float32 v_b = vx_load(h2 + j);

                v_float64 v_ad = v_cvt_f64(v_a);
                v_float64 v_bd = v_cvt_f64(v_b);
                v_s1 = v_add(v_s1, v_ad);
                v_s2 = v_add(v_s2, v_bd);
                v_result = v_add(v_result, v_sqrt(v_mul(v_ad, v_bd)));

                v_ad = v_cvt_f64_high(v_a);
                v_bd = v_cvt_f64_high(v_b);
                v_s1 = v_add(v_s1, v_ad);
                v_s2 = v_add(v_s2, v_bd);
                v_result = v_add(v_result, v_sqrt(v_mul(v_ad, v_bd)));
            }
            s1 += v_reduce_sum(v_s1);
            s2 += v_reduce_sum(v_s2);
            result += v_reduce_sum(v_result);
#elif CV_SIMD && 0 //Disable vectorization for CV_COMP_BHATTACHARYYA if f64 is unsupported due to low precision
            v_float32 v_s1 = vx_setzero_f32();
            v_float32 v_s2 = vx_setzero_f32();
            v_float32 v_result = vx_setzero_f32();
            for (; j <= len - VTraits<v_float32>::vlanes(); j += VTraits<v_float32>::vlanes())
            {
                v_float32 v_a = vx_load(h1 + j);
                v_float32 v_b = vx_load(h2 + j);

                v_s1 += v_a;
                v_s2 += v_b;
                v_result += v_sqrt(v_a * v_b);
            }
            s1 += v_reduce_sum(v_s1);
            s2 += v_reduce_sum(v_s2);
            result += v_reduce_sum(v_result);
#endif
            for( ; j < len; j++ )
            {
                double a = h1[j];
                double b = h2[j];
                result += std::sqrt(a*b);
                s1 += a;
                s2 += b;
            }
        }
        else if( method == CV_COMP_KL_DIV )
        {
            for( ; j < len; j++ )
            {
                double p = h1[j];
                double q = h2[j];
                if( fabs(p) <= DBL_EPSILON ) {
                    continue;
                }
                if( fabs(q) <= DBL_EPSILON ) {
                    q = 1e-10;
                }
                result += p * std::log( p / q );
            }
        }
        else
            CV_Error( cv::Error::StsBadArg, "Unknown comparison method" );
    }

    if( method == CV_COMP_CHISQR_ALT )
        result *= 2;
    else if( method == CV_COMP_CORREL )
    {
        size_t total = H1.total();
        double scale = 1./total;
        double num = s12 - s1*s2*scale;
        double denom2 = (s11 - s1*s1*scale)*(s22 - s2*s2*scale);
        result = std::abs(denom2) > DBL_EPSILON ? num/std::sqrt(denom2) : 1.;
    }
    else if( method == CV_COMP_BHATTACHARYYA )
    {
        s1 *= s2;
        s1 = fabs(s1) > FLT_EPSILON ? 1./std::sqrt(s1) : 1.;
        result = std::sqrt(std::max(1. - result*s1, 0.));
    }

    return result;
}
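
The scalar code paths above can be mirrored in a few lines of NumPy. The sketch below (my own reference implementation, not part of OpenCV) reproduces the chi-square, correlation, and Bhattacharyya branches, including the post-processing that compareHist performs after the accumulation loop, and checks the results against cv2.compareHist on a random histogram pair:

import cv2
import numpy as np

def compare_hist_ref(h1, h2, method):
    """Plain NumPy mirror of compareHist's scalar code path (sketch, float64 math)."""
    h1 = h1.astype(np.float64).ravel()
    h2 = h2.astype(np.float64).ravel()
    if method == cv2.HISTCMP_CHISQR:
        b = h1
        mask = np.abs(b) > np.finfo(np.float64).eps   # mirrors the DBL_EPSILON guard
        return np.sum((h1[mask] - h2[mask]) ** 2 / b[mask])
    if method == cv2.HISTCMP_CORREL:
        s1, s2 = h1.sum(), h2.sum()
        s11, s22, s12 = (h1 * h1).sum(), (h2 * h2).sum(), (h1 * h2).sum()
        scale = 1.0 / h1.size
        num = s12 - s1 * s2 * scale
        denom2 = (s11 - s1 * s1 * scale) * (s22 - s2 * s2 * scale)
        return num / np.sqrt(denom2) if abs(denom2) > np.finfo(np.float64).eps else 1.0
    if method == cv2.HISTCMP_BHATTACHARYYA:
        s1, s2 = h1.sum(), h2.sum()
        r = np.sum(np.sqrt(h1 * h2))
        scale = 1.0 / np.sqrt(s1 * s2) if abs(s1 * s2) > np.finfo(np.float32).eps else 1.0
        return np.sqrt(max(1.0 - r * scale, 0.0))
    raise ValueError("method not covered in this sketch")

# Compare against OpenCV on a random histogram pair (CV_32F, one value per bin).
rng = np.random.default_rng(0)
a = rng.random(256).astype(np.float32).reshape(-1, 1)
b = rng.random(256).astype(np.float32).reshape(-1, 1)
for m in (cv2.HISTCMP_CHISQR, cv2.HISTCMP_CORREL, cv2.HISTCMP_BHATTACHARYYA):
    print(m, compare_hist_ref(a, b, m), cv2.compareHist(a, b, m))

For float32 inputs the reference values and OpenCV's results should agree to within normal floating-point rounding.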

Application Example

Compare how well the grayscale histograms of two images match.

#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>

using namespace cv;
using std::cout;
using std::endl;
using std::string;

// Command-line arguments
const char* keys =
    "{ help  h| | Print help message. }"
    "{ @input1 |wukong1.jpg | Path to input image 1. }"
    "{ @input2 |wukong2.jpg | Path to input image 2. }";

const string CMP_METHODS[]{"Correlation", "Chi-Squared", "Intersection", "Bhattacharyya", "Hellinger",
                           "Alternative Chi-Squared", "Kullback-Leibler Divergence"};

int main(int argc, char** argv)
{
    // Parse the command line
    CommandLineParser parser(argc, argv, keys);

    // Load the images
    Mat img1 = imread(parser.get<String>("@input1"));
    Mat img2 = imread(parser.get<String>("@input2"));

    // Check that the images loaded successfully
    if (img1.empty() || img2.empty())
    {
        cout << "Could not open or find the image!\n" << endl;
        cout << "Usage: " << argv[0] << " <Input_Image1> <Input_Image2>" << endl;
        parser.printMessage();
        return EXIT_FAILURE;
    }

    // Convert to grayscale
    Mat gray1, gray2;
    cvtColor(img1, gray1, COLOR_BGR2GRAY);
    cvtColor(img2, gray2, COLOR_BGR2GRAY);

    // Compute the histograms
    int histSize = 256; // 256 gray levels, i.e. the number of bins
    // Range of gray levels
    float range[] = { 0, 256 };
    const float* histRange = { range };
    // Other histogram parameters
    bool uniform = true, accumulate = false;
    Mat hist1, hist2;
    calcHist(&gray1, 1, 0, Mat(), hist1, 1, &histSize, &histRange, uniform, accumulate);
    calcHist(&gray2, 1, 0, Mat(), hist2, 1, &histSize, &histRange, uniform, accumulate);

    // Normalize the histograms
    normalize(hist1, hist1, 0, 1, NORM_MINMAX, -1, Mat());
    normalize(hist2, hist2, 0, 1, NORM_MINMAX, -1, Mat());

    // Compare the histograms
    double correlation = compareHist(hist1, hist2, HISTCMP_CORREL);
    cout << "Similarity between the two images with " << CMP_METHODS[0] << ": " << correlation << endl;

    double chiSquared = compareHist(hist1, hist2, HISTCMP_CHISQR);
    cout << "Similarity between the two images with " << CMP_METHODS[1] << ": " << chiSquared << endl;

    double intersection = compareHist(hist1, hist2, HISTCMP_INTERSECT);
    cout << "Similarity between the two images with " << CMP_METHODS[2] << ": " << intersection << endl;

    double bhattacharyya = compareHist(hist1, hist2, HISTCMP_BHATTACHARYYA);
    cout << "Similarity between the two images with " << CMP_METHODS[3] << ": " << bhattacharyya << endl;

    double hellinger = compareHist(hist1, hist2, HISTCMP_HELLINGER);
    cout << "Similarity between the two images with " << CMP_METHODS[4] << ": " << hellinger << endl;

    double alternativeChiSquared = compareHist(hist1, hist2, HISTCMP_CHISQR_ALT);
    cout << "Similarity between the two images with " << CMP_METHODS[5] << ": " << alternativeChiSquared << endl;

    double kullbackLeibler = compareHist(hist1, hist2, HISTCMP_KL_DIV);
    cout << "Similarity between the two images with " << CMP_METHODS[6] << ": " << kullbackLeibler << endl;

    // Display the images
    imshow("Image 1", img1);
    imshow("Image 2", img2);
    waitKey(0);

    return EXIT_SUCCESS;
}

The equivalent Python script:

import cv2
import numpy as np

# Read the images
image1_path = '../data/Histogram_Comparison_Source_0.jpg'
image2_path = '../data/Histogram_Comparison_Source_1.jpg'

image1 = cv2.imread(image1_path)
image2 = cv2.imread(image2_path)

# Convert to grayscale
gray_image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
gray_image2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)

# Compute the grayscale histograms
hist1 = cv2.calcHist([gray_image1], [0], None, [256], [0, 256])
hist2 = cv2.calcHist([gray_image2], [0], None, [256], [0, 256])

# Normalize the histograms
cv2.normalize(hist1, hist1, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX)
cv2.normalize(hist2, hist2, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX)

# Compare the histograms with compareHist
comparison_method = cv2.HISTCMP_CORREL  # correlation
corr_similarity = cv2.compareHist(hist1, hist2, comparison_method)
chi_squared_similarity = cv2.compareHist(hist1, hist2, cv2.HISTCMP_CHISQR)
intersection_similarity = cv2.compareHist(hist1, hist2, cv2.HISTCMP_INTERSECT)
bhattacharyya_similarity = cv2.compareHist(hist1, hist2, cv2.HISTCMP_BHATTACHARYYA)
hellinger_similarity = cv2.compareHist(hist1, hist2, cv2.HISTCMP_HELLINGER)
alternative_chi_squared_similarity = cv2.compareHist(hist1, hist2, cv2.HISTCMP_CHISQR_ALT)
kbl_divergence_similarity = cv2.compareHist(hist1, hist2, cv2.HISTCMP_KL_DIV)

print(f"Similarity between the two images with Correlation: {corr_similarity}")
print(f"Similarity between the two images with Chi-Squared: {chi_squared_similarity}")
print(f"Similarity between the two images with Intersection: {intersection_similarity}")
print(f"Similarity between the two images with Bhattacharyya: {bhattacharyya_similarity}")
print(f"Similarity between the two images with Hellinger: {hellinger_similarity}")
print(f"Similarity between the two images with Alternative Chi-Squared: {alternative_chi_squared_similarity}")
print(f"Similarity between the two images with Kullback-Leibler Divergence: {kbl_divergence_similarity}")

# Show the results
cv2.imshow('Image 1', image1)
cv2.imshow('Image 2', image2)
cv2.waitKey(0)
cv2.destroyAllWindows()

Running the Python script on the two sample images produces output like:

Similarity between the two images with Correlation: 0.5185838942269291
Similarity between the two images with Chi-Squared: 53.894400613305585
Similarity between the two images with Intersection: 25.52862914837897
Similarity between the two images with Bhattacharyya: 0.358873990739656
Similarity between the two images with Hellinger: 0.358873990739656
Similarity between the two images with Alternative Chi-Squared: 34.90277478480459
Similarity between the two images with Kullback-Leibler Divergence: 71.12863163014705
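
How to read these numbers depends on the method: correlation and intersection increase as the histograms become more similar, while the chi-square variants, Bhattacharyya/Hellinger, and the KL divergence decrease toward zero. A quick way to see the perfect-match baselines (my own check with a synthetic histogram, not part of the original article) is to compare a histogram with itself:

import cv2
import numpy as np

# A normalized histogram compared with itself shows each method's perfect-match baseline.
hist = cv2.calcHist([np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)],
                    [0], None, [256], [0, 256])
cv2.normalize(hist, hist, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX)

for name in ("HISTCMP_CORREL", "HISTCMP_CHISQR", "HISTCMP_INTERSECT",
             "HISTCMP_BHATTACHARYYA", "HISTCMP_CHISQR_ALT", "HISTCMP_KL_DIV"):
    print(name, cv2.compareHist(hist, hist, getattr(cv2, name)))
# Correlation -> 1; Chi-Square, Alternative Chi-Square, Bhattacharyya, and KL -> 0;
# Intersection -> the sum of the histogram (its maximum for identical inputs).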

References

  1. Histogram Comparison (OpenCV tutorial)
  2. HistCompMethods (OpenCV documentation)
