Comparing C++ and Python Iris Recognition Test Results

2023-11-05 22:20

Contents

I. Overview

II. Test Procedure

1. Iris recognition in Python

2. Iris recognition in C++

III. Test Results


I. Overview

This article tests the performance of a C++ iris recognition implementation and a Python one, and compares their results. The C++ test project is the iris recognition project based on C++ and OpenCV 2. The Python test code is: https://github.com/thuyngch/Iris-Recognition.

II. Test Procedure

To test both versions on the same data, the Python version is used as the reference: the test images it selects are saved to a file, and the C++ version is then evaluated on exactly those images. The results are scored with the same criterion as in the Python version:

fscore = 2*precision*recall / (precision+recall)
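
As a concrete illustration of this criterion, the short sketch below (my own example, not taken from either project) computes accuracy, precision, recall and fscore for a toy 4x4 distance matrix with two images per identity, using the same formulas as the Python evaluation script.

import numpy as np

# Toy setup: 4 images, 2 per identity, so images {0,1} and {2,3} should match.
N_IMAGES = 2
dist_mat = np.array([[0.00, 0.30, 0.55, 0.60],
                     [0.30, 0.00, 0.58, 0.52],
                     [0.55, 0.58, 0.00, 0.28],
                     [0.60, 0.52, 0.28, 0.00]])
n = dist_mat.shape[0]

# Ground truth: 1 when both images come from the same identity.
idx = np.arange(n) // N_IMAGES
ground_truth = (idx[:, None] == idx[None, :]).astype(int)

# Decide "same person" when the distance is at or below the threshold.
threshold = 0.4
decision_map = (dist_mat <= threshold).astype(int)

accuracy = (decision_map == ground_truth).sum() / ground_truth.size
precision = (ground_truth * decision_map).sum() / decision_map.sum()
recall = (ground_truth * decision_map).sum() / ground_truth.sum()
fscore = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, fscore)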

1. Iris recognition in Python

The test follows the code at https://github.com/thuyngch/Iris-Recognition; its usage is already documented in detail in the original repository. The environment used here is Windows 10 with Python 3.7.3. A few points to note:

(1) The evaluation script is eval_casia1.py; run it from the repository's python folder with python3 eval_casia1.py. Note the path where the code expects the CASIA-IrisV1 test dataset (a quick sanity check of the dataset layout follows the path setting):

CASIA1_DIR = "../CASIA1"
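
Before running the evaluation it is worth checking that this path really points at the dataset. The snippet below is my own quick check, not part of the repository, and it assumes the usual CASIA-IrisV1 layout of 108 identity folders, each holding 7 images.

import os
from glob import glob

CASIA1_DIR = "../CASIA1"  # same value as in eval_casia1.py

identities = sorted(glob(os.path.join(CASIA1_DIR, "*")))
print("identities found:", len(identities))          # expect 108
for identity in identities[:3]:
    images = glob(os.path.join(identity, "*.*"))
    print(os.path.basename(identity), "->", len(images), "images")  # expect 7 each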

(2) Running python3 eval_casia1.py directly fails with an error; following the error message, a main-module guard was added (a brief standalone illustration of why the guard is needed follows the full listing):

if __name__ == '__main__':

so that the main part of the script becomes:

if __name__ == '__main__':
    #------------------------------------------------------------------------------
    #   Main execution
    #------------------------------------------------------------------------------
    # Get identities of MMU2 dataset
    identities = glob(os.path.join(CASIA1_DIR, "**"))
    identities = sorted([os.path.basename(identity) for identity in identities])
    n_identities = len(identities)
    print("Number of identities:", n_identities)

    # Construct a dictionary of files
    files_dict = {}
    image_files = []
    for identity in identities:
        files = glob(os.path.join(CASIA1_DIR, identity, "*.*"))
        shuffle(files)
        files_dict[identity] = files[:N_IMAGES]
        image_files += files[:N_IMAGES]

    n_image_files = len(image_files)
    print("Number of image files:", n_image_files)

    # save test images to file 'pro_imgs.txt'
    txt_open = open('./pro_imgs.txt', 'w')
    for i in image_files:
        line = i + '\n'
        txt_open.write(line)
    txt_open.close()

    # Ground truth
    ground_truth = np.zeros([n_image_files, n_image_files], dtype=int)
    for i in range(ground_truth.shape[0]):
        for j in range(ground_truth.shape[1]):
            if i//N_IMAGES == j//N_IMAGES:
                ground_truth[i, j] = 1

    # Evaluate parameters
    pools = Pool(processes=cpu_count())
    best_results = []
    for eye_threshold in tqdm(eyelashes_thresholds, total=len(eyelashes_thresholds)):
        # Extract features
        args = zip(image_files, repeat(eye_threshold), repeat(False))
        features = list(pools.map(pool_func_extract_feature, args))

        # Calculate the distances
        args = []
        for i in range(n_image_files):
            for j in range(n_image_files):
                if i>=j:
                    continue
                arg = (features[i][0], features[i][1], features[j][0], features[j][1])
                args.append(arg)
        distances = pools.map(pool_func_calHammingDist, args)

        # Construct a distance matrix
        k = 0
        dist_mat = np.zeros([n_image_files, n_image_files])
        for i in range(n_image_files):
            for j in range(n_image_files):
                if i<j:
                    dist_mat[i, j] = distances[k]
                    k += 1
                elif i>j:
                    dist_mat[i, j] = dist_mat[j, i]

        # Metrics
        accuracies, precisions, recalls, fscores = [], [], [], []
        for threshold in thresholds:
            decision_map = (dist_mat<=threshold).astype(int)
            accuracy = (decision_map==ground_truth).sum() / ground_truth.size
            precision = (ground_truth*decision_map).sum() / decision_map.sum()
            recall = (ground_truth*decision_map).sum() / ground_truth.sum()
            fscore = 2*precision*recall / (precision+recall)
            accuracies.append(accuracy)
            precisions.append(precision)
            recalls.append(recall)
            fscores.append(fscore)

        # Save the best result
        best_fscore = max(fscores)
        best_threshold = thresholds[fscores.index(best_fscore)]
        best_accuracy = accuracies[fscores.index(best_fscore)]
        best_precision = precisions[fscores.index(best_fscore)]
        best_recall = recalls[fscores.index(best_fscore)]
        best_results.append((eye_threshold, best_threshold, best_fscore, best_accuracy, best_precision, best_recall))

    # Show the final best result
    eye_thresholds = [item[0] for item in best_results]
    thresholds = [item[1] for item in best_results]
    fscores = [item[2] for item in best_results]
    accuracies = [item[3] for item in best_results]
    precisions = [item[4] for item in best_results]
    recalls = [item[5] for item in best_results]
    print("Maximum fscore: ", max(fscores))
    print("Best accuracy: ", accuracies[fscores.index(max(fscores))])
    print("Best precision: ", precisions[fscores.index(max(fscores))])
    print("Best recall: ", recalls[fscores.index(max(fscores))])
    print("Best eye_threshold: ", eye_thresholds[fscores.index(max(fscores))])
    print("Best threshold: ", thresholds[fscores.index(max(fscores))])
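
The guard is needed because, on Windows, multiprocessing starts worker processes with the spawn method, which re-imports the main module; without if __name__ == '__main__': each worker would re-run the whole evaluation and the Pool would be created recursively. A minimal standalone illustration of the pattern (my own example, unrelated to the iris code):

from multiprocessing import Pool, cpu_count

def square(x):
    return x * x

if __name__ == '__main__':           # required on Windows (spawn start method)
    with Pool(processes=cpu_count()) as pool:
        print(pool.map(square, range(8)))
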
The CASIA-IrisV1 dataset contains 108 subjects with 7 iris images per subject; the code randomly selects 4 of the 7 images for each subject:

N_IMAGES = 4

So the total number of test images is:

108*4=432
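
Because the selected files are stored four per identity, in order, images i and j show the same person exactly when i//N_IMAGES == j//N_IMAGES, which is how the script builds its ground-truth matrix. A small standalone check of that indexing (my own snippet, shrunk to 3 identities for readability):

import numpy as np

N_IMAGES = 4
n_identities = 3                      # reduced from 108 just to keep the printout small
n_image_files = n_identities * N_IMAGES

ground_truth = np.zeros([n_image_files, n_image_files], dtype=int)
for i in range(n_image_files):
    for j in range(n_image_files):
        if i // N_IMAGES == j // N_IMAGES:
            ground_truth[i, j] = 1

# 4x4 blocks of ones on the diagonal, one block per identity
print(ground_truth)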

The randomly selected images are saved to a list file:

# save test images to file 'pro_imgs.txt'
txt_open = open('./pro_imgs.txt', 'w')
for i in image_files:
    line = i + '\n'
    txt_open.write(line)
txt_open.close()

The code sweeps two parameters: eyelashes_thresholds and thresholds. eyelashes_thresholds is the eyelash threshold; 25 values are taken from the range [10, 250]:

eyelashes_thresholds = np.linspace(start=10, stop=250, num=25)

[ 10. 20. 30. 40. 50. 60. 70. 80. 90. 100. 110. 120. 130. 140. 150. 160. 170. 180. 190. 200. 210. 220. 230. 240. 250.]

thresholds is the decision threshold for whether two irises belong to the same person: if the distance between two iris codes is below the threshold they are judged to be the same person, otherwise different people. 100 values are taken from the range [0, 1], giving a step size of:

(1-0)/(100-1)= 0.01010101

thresholds = np.linspace(start=0.0, stop=1.0, num=100)

[0. 0.01010101 0.02020202 0.03030303 0.04040404 0.05050505 0.06060606 0.07070707 0.08080808 0.09090909 0.1010101 0.11111111 0.12121212 0.13131313 0.14141414 0.15151515 0.16161616 0.17171717 0.18181818 0.19191919 0.2020202 0.21212121 0.22222222 0.23232323 0.24242424 0.25252525 0.26262626 0.27272727 0.28282828 0.29292929 0.3030303 0.31313131 0.32323232 0.33333333 0.34343434 0.35353535 0.36363636 0.37373737 0.38383838 0.39393939 0.4040404 0.41414141 0.42424242 0.43434343 0.44444444 0.45454545 0.46464646 0.47474747 0.48484848 0.49494949 0.50505051 0.51515152 0.52525253 0.53535354 0.54545455 0.55555556 0.56565657 0.57575758 0.58585859 0.5959596 0.60606061 0.61616162 0.62626263 0.63636364 0.64646465 0.65656566 0.66666667 0.67676768 0.68686869 0.6969697 0.70707071 0.71717172 0.72727273 0.73737374 0.74747475 0.75757576 0.76767677 0.77777778 0.78787879 0.7979798 0.80808081 0.81818182 0.82828283 0.83838384 0.84848485 0.85858586 0.86868687 0.87878788 0.88888889 0.8989899 0.90909091 0.91919192 0.92929293 0.93939394 0.94949495 0.95959596 0.96969697 0.97979798 0.98989899 1. ]
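
This 0.01010101 step is the same increment that the C++ port below hard-codes when it repeats the sweep; a quick check (my own snippet):

import numpy as np

thresholds = np.linspace(start=0.0, stop=1.0, num=100)
step = thresholds[1] - thresholds[0]
print(step)                     # 0.010101010101... == 1/99
print(np.isclose(step, 1/99))   # True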

(3) Pay attention to the working directory from which the script is run.

2. Iris recognition in C++

For the code itself, see the iris recognition project based on C++ and OpenCV 2. In the original code each test image is only matched against the previous one; to stay consistent with the Python protocol, the following modifications are needed.

(1) Copy the CASIA1 test images into the data folder;

(2) Modify the process.ini file (a small script to apply these edits automatically is sketched after the list of settings). Set the image-list file to the one saved by the Python script:

Load List of images = pro_imgs.txt

Set the image directory:

Load original images = CASIA1/

Comment out all of the save paths:

#Save segmented images = Output/SegmentedImages/

#Save contours parameters = Output/CircleParameters/

#Save masks of iris = Output/Masks/

#Save normalized images = Output/NormalizedImages/

#Save normalized masks = Output/NormalizedMasks/

#Save iris codes = Output/IrisCodes/

#Save matching scores = Output/score.txt
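
If these edits have to be redone often, they can be scripted. The following is a hypothetical helper of mine, not part of the C++ project; it assumes process.ini is a plain text file of "key = value" lines in which a leading # disables an entry, as the excerpts above suggest.

# Hypothetical helper: rewrite process.ini with the settings used for this test.
# Assumes simple "key = value" lines and '#' comments, as in the excerpts above.

overrides = {
    "Load List of images": "pro_imgs.txt",
    "Load original images": "CASIA1/",
}
disable_prefixes = ("Save ",)          # comment out every "Save ..." entry

with open("process.ini", "r") as f:
    lines = f.readlines()

patched = []
for line in lines:
    stripped = line.strip()
    key = stripped.split("=")[0].strip() if "=" in stripped else ""
    if key in overrides:
        patched.append(key + " = " + overrides[key] + "\n")
    elif key.startswith(disable_prefixes) and not stripped.startswith("#"):
        patched.append("#" + line)
    else:
        patched.append(line)

with open("process.ini", "w") as f:
    f.writelines(patched)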

(3) Add my_run(). Declare void my_run(); in OsiManager.h.

Then add the following to OsiManager.cpp:

void OsiManager::my_run()
{
    cout << endl;
    cout << "================" << endl;
    cout << "Start processing" << endl;
    cout << "================" << endl;
    cout << endl;

    // If matching is requested, create a file
    ofstream result_matching;
    if (mProcessMatching && mOutputFileMatchingScores != "") {
        try {
            result_matching.open(mOutputFileMatchingScores.c_str(), ios::out);
        }
        catch (exception & e) {
            cout << e.what() << endl;
            throw runtime_error("Cannot create the file for matching scores : " + mOutputFileMatchingScores);
        }
    }

    int num_test_imgs = mListOfImages.size();
    int N_IMAGES = 4;
    float threshold = 0.393939393939394;
    int decision_map_num = 0;
    int ground_truth_num = 0;

    // ground_truth and dist_mat initial
    bool (*ground_truth)[432] = new bool[num_test_imgs][432];
    float (*dist_mat)[432] = new float[num_test_imgs][432];
    bool (*decision_map)[432] = new bool[num_test_imgs][432];
    for (int i = 0; i < num_test_imgs; ++i) {
        for (int j = 0; j < num_test_imgs; ++j) {
            if (i / N_IMAGES == j / N_IMAGES) {
                ground_truth[i][j] = 1;
                ground_truth_num++;
            }
            else
                ground_truth[i][j] = 0;
            dist_mat[i][j] = 0;
            decision_map[i][j] = 0;
        }
    }
    // end ground_truth and dist_mat initial

    vector<OsiEye> eyr_res;
    cout << "Extract Features start!" << endl;
    for (int i = 0; i < num_test_imgs; i++) {
        OsiEye eye;
        processOneEye(mListOfImages[i], eye);
        eyr_res.push_back(eye);
        cout << i + 1 << " pic is over!!!" << endl;
    }
    cout << "Extract Features end!!!" << endl;

    for (int i = 0; i < num_test_imgs; i++) {
        for (int j = 0; j < num_test_imgs; ++j) {
            cout << "start: " << i << " " << j << " ";
            if (i < j) {
                OsiEye eyr_i = eyr_res[i];
                OsiEye eyr_j = eyr_res[j];
                float score = (eyr_res[i]).match((eyr_res[j]), mpApplicationPoints);
                dist_mat[i][j] = score;
            }
            else if (i > j) {
                dist_mat[i][j] = dist_mat[j][i];
            }
            if (dist_mat[i][j] < threshold) {
                decision_map[i][j] = 1;
                decision_map_num++;
            }
            cout << dist_mat[i][j] << " " << decision_map[i][j] << endl;
        }
    }

    // result
    int accuracy_num = 0;
    int precision_num = 0;
    for (int i = 0; i < num_test_imgs; i++) {
        for (int j = 0; j < num_test_imgs; ++j) {
            if (decision_map[i][j] == ground_truth[i][j]) {
                accuracy_num++;
            }
            if (decision_map[i][j] && ground_truth[i][j]) {
                precision_num++;
            }
        }
    }
    float accuracy = float(accuracy_num) / float(num_test_imgs*num_test_imgs);
    float precision_res = float(precision_num) / float(decision_map_num);
    float recall = float(precision_num) / float(ground_truth_num);
    float fscore = 2 * precision_res*recall / (precision_res + recall);
    cout << "Result: " << endl
         << "accuracy: " << accuracy << "\t"
         << "precision_res: " << precision_res << "\t"
         << "recall: " << recall << "\t"
         << "fscore: " << fscore << endl;
    // end result

    // Sweep the threshold from 0 to 1 in steps of 0.01010101 to find the best fscore
    // and the corresponding threshold
    float fscore_best = 0.0;
    float threshold_best = 0.0;
    float recall_best = 0.0;
    float precision_best = 0.0;
    float accuracy_best = 0.0;
    for (threshold = 0.0; threshold < 1.0; threshold += 0.01010101) {
        int accuracy_num = 0;
        int precision_num = 0;
        decision_map_num = 0;
        for (int i = 0; i < num_test_imgs; ++i) {
            for (int j = 0; j < num_test_imgs; ++j) {
                if (dist_mat[i][j] < threshold) {
                    decision_map[i][j] = 1;
                    decision_map_num++;
                }
                if (decision_map[i][j] == ground_truth[i][j]) {
                    accuracy_num++;
                }
                if (decision_map[i][j] && ground_truth[i][j]) {
                    precision_num++;
                }
            }
        }
        float accuracy = float(accuracy_num) / float(num_test_imgs*num_test_imgs);
        float precision_res = float(precision_num) / float(decision_map_num);
        float recall = float(precision_num) / float(ground_truth_num);
        float fscore = 2 * precision_res*recall / (precision_res + recall);
        cout << "Result:   " << "accuracy: " << accuracy << "\t"
             << "precision_res: " << precision_res << "\t"
             << "recall: " << recall << "\t"
             << "fscore: " << fscore << endl;
        if (fscore > fscore_best) {
            fscore_best = fscore;
            threshold_best = threshold;
            recall_best = recall;
            precision_best = precision_res;
            accuracy_best = accuracy;
        }
        for (int i = 0; i < num_test_imgs; ++i) {
            for (int j = 0; j < num_test_imgs; ++j) {
                decision_map[i][j] = 0;
            }
        }
    }
    cout << "Best fscore is: " << fscore_best << ", Best threshold is: " << threshold_best
         << ", Best recall is: " << recall_best << ", Best precision is: " << precision_best
         << ", Best accuracy is: " << accuracy_best << endl;

    // save dist_mat
    ofstream result_dist;
    string result_dist_path = "../data/dist_mat.txt";
    try {
        result_dist.open(result_dist_path.c_str(), ios::out);
    }
    catch (exception & e) {
        cout << e.what() << endl;
        throw runtime_error("Cannot create the file for result_dist_path scores : " + result_dist_path);
    }
    if (result_dist) {
        try {
            for (int i = 0; i < num_test_imgs; i++) {
                int j = 0;
                for (j = 0; j < num_test_imgs - 1; ++j) {
                    result_dist << dist_mat[i][j] << " ";
                }
                result_dist << dist_mat[i][j] << "\n";
            }
        }
        catch (exception & e) {
            cout << e.what() << endl;
            throw runtime_error("Error while saving result of matching in " + mOutputFileMatchingScores);
        }
    }
    // end save dist_mat

    cout << endl;
    cout << "==============" << endl;
    cout << "End processing" << endl;
    cout << "==============" << endl;
    cout << endl;

} // end of function

Note: only the thresholds parameter from the Python code is swept here to select the best result; all other parameters keep the defaults of the original C++ code.

(4) If the dist_mat.txt file has been saved by the code above, the best threshold can also be searched offline with the following function (a numpy cross-check of the same search is sketched after it):

void test_best_thr(string test_resuilt_file)
{
    // Open the file
    ifstream file(test_resuilt_file.c_str(), ifstream::in);
    if (!file.good())
        throw runtime_error("Cannot read configuration file " + test_resuilt_file);

    int decision_map_num = 0;
    int ground_truth_num = 0;
    int N_IMAGES = 4;
    int num_test_imgs = 432;   // total number of test images: N_IMAGES * 108 (108 = number of subjects, i.e. folders, in CASIA1)

    // ground_truth and dist_mat initial
    bool (*ground_truth)[432] = new bool[num_test_imgs][432];    // ground-truth matrix
    float (*dist_mat)[432] = new float[num_test_imgs][432];      // distance matrix read back from the result file
    bool (*decision_map)[432] = new bool[num_test_imgs][432];    // 0/1 decisions derived from dist_mat and the threshold

    // initialize the matrices
    for (int i = 0; i < num_test_imgs; ++i) {
        for (int j = 0; j < num_test_imgs; ++j) {
            if (i / N_IMAGES == j / N_IMAGES) {
                ground_truth[i][j] = 1;
                ground_truth_num++;
            }
            else
                ground_truth[i][j] = 0;
            dist_mat[i][j] = 0;
            decision_map[i][j] = 0;
        }
    }

    // Read the result file and load the values back into dist_mat,
    // following the format in which the results were saved
    int x = 0;
    bool flag = true;
    while (file.good() && !file.eof() && flag) {
        // Get the new line
        string line;
        getline(file, line);
        // Filter out comments
        if (!line.empty()) {
            int y = 0;
            size_t start = 0, index = line.find_first_of(" ", 0);
            while (index != line.npos) {
                if (start != index) {
                    std::string s = line.substr(start, index - start);
                    float res = atof(s.c_str());
                    dist_mat[x][y] = res;
                    y++;
                    start = index + 1;
                    index = line.find_first_of(" ", start);
                }
                else
                    break;
            }
            if (!line.substr(start).empty()) {
                std::string s = line.substr(start);
                float res = atof(s.c_str());
                dist_mat[x][y] = res;
                y++;
            }
            if (y != num_test_imgs) {
                cout << y << " != " << num_test_imgs << endl;
                flag = false;
                break;
            }
        }
        x++;
    }
    if (x != num_test_imgs) {
        cout << x << " != " << num_test_imgs << endl;
        return;
    }

    // Sweep the threshold from 0 to 1 in steps of 0.01010101 to find the best fscore
    // and the corresponding threshold
    float fscore_best = 0.0;
    float threshold_best = 0.0;
    float recall_best = 0.0;
    float precision_best = 0.0;
    float accuracy_best = 0.0;
    for (float threshold = 0; threshold < 1.0; threshold += 0.01010101) {
        int accuracy_num = 0;
        int precision_num = 0;
        decision_map_num = 0;
        for (int i = 0; i < num_test_imgs; ++i) {
            for (int j = 0; j < num_test_imgs; ++j) {
                if (dist_mat[i][j] < threshold) {
                    decision_map[i][j] = 1;
                    decision_map_num++;
                }
                if (decision_map[i][j] == ground_truth[i][j]) {
                    accuracy_num++;
                }
                if (decision_map[i][j] && ground_truth[i][j]) {
                    precision_num++;
                }
            }
        }
        float accuracy = float(accuracy_num) / float(num_test_imgs*num_test_imgs);
        float precision_res = float(precision_num) / float(decision_map_num);
        float recall = float(precision_num) / float(ground_truth_num);
        float fscore = 2 * precision_res*recall / (precision_res + recall);
        cout << "Result:   " << "accuracy: " << accuracy << "\t"
             << "precision_res: " << precision_res << "\t"
             << "recall: " << recall << "\t"
             << "fscore: " << fscore << endl;
        if (fscore > fscore_best) {
            fscore_best = fscore;
            threshold_best = threshold;
            recall_best = recall;
            precision_best = precision_res;
            accuracy_best = accuracy;
        }
        for (int i = 0; i < num_test_imgs; ++i) {
            for (int j = 0; j < num_test_imgs; ++j) {
                decision_map[i][j] = 0;
            }
        }
    }
    cout << "Best fscore is: " << fscore_best << ", Best threshold is: " << threshold_best
         << ", Best recall is: " << recall_best << ", Best precision is: " << precision_best
         << ", Best accuracy is: " << accuracy_best << endl;
}
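
Since my_run() also dumps the full distance matrix to ../data/dist_mat.txt (space-separated values, one row per line), the same threshold search can be cross-checked from Python with numpy. This is my own sketch, not part of either project, and it assumes that file layout; note that it uses the strict < comparison to match the C++ sweep rather than the <= used in eval_casia1.py.

import numpy as np

# Load the distance matrix written by the C++ my_run(); adjust the path as needed.
dist_mat = np.loadtxt("../data/dist_mat.txt")
n = dist_mat.shape[0]
N_IMAGES = 4

# Same block-diagonal ground truth as before: 4 consecutive images per identity.
idx = np.arange(n) // N_IMAGES
ground_truth = (idx[:, None] == idx[None, :]).astype(int)

best_fscore, best_threshold = 0.0, 0.0
for threshold in np.linspace(0.0, 1.0, 100):
    decision_map = (dist_mat < threshold).astype(int)      # strict <, as in the C++ sweep
    tp = (ground_truth * decision_map).sum()
    precision = tp / max(decision_map.sum(), 1)
    recall = tp / ground_truth.sum()
    fscore = 2 * precision * recall / max(precision + recall, 1e-12)
    if fscore > best_fscore:
        best_fscore, best_threshold = fscore, threshold

print("Best fscore:", best_fscore, "at threshold:", best_threshold)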

(5) Call it from main

Change osi.run() to osi.my_run();

III. Test Results

On the CASIA-IrisV1 dataset the C++ version scores better than the Python version; the exact reason will be examined in a later, code-level comparison.
