[iOS] Face Detection After Taking a Photo

2024-08-20 23:32
Tags: detection, iOS, photo capture, face

This article walks through detecting faces in a photo after it has been captured on iOS; hopefully it offers a useful reference for developers working on the same problem.



Demo: http://download.csdn.net/detail/u012881779/9677467



#import "FaceStreamViewController.h"
#import <AVFoundation/AVFoundation.h>

@interface FaceStreamViewController () <AVCaptureMetadataOutputObjectsDelegate, UIAlertViewDelegate>

// AVCaptureSession coordinates the data flow between the input and output devices
@property (nonatomic, strong) AVCaptureSession           *session;
// AVCaptureDeviceInput represents the input stream (the camera)
@property (nonatomic, strong) AVCaptureDeviceInput       *videoInput;
// Still-image output stream; this camera only takes photos, so this output is all we need
@property (nonatomic, strong) AVCaptureStillImageOutput  *stillImageOutput;
// Preview layer
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
// Shutter button
@property (nonatomic, strong) UIButton                   *shutterButton;
@property (nonatomic, strong) UITapGestureRecognizer     *tapGesture;
@property (weak, nonatomic) IBOutlet UIButton            *tapPaizhaoBut;

@end

@implementation FaceStreamViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    [self session];
    [self swapFrontAndBackCameras];
}

// Shutter button tapped
- (IBAction)tapShutterCameraAction:(id)sender {
    [self shutterCamera];
}

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    [self.session startRunning];
}

- (void)dealloc {
    [self releaseAction];
}

- (void)releaseAction {
    self.session = nil;
    self.videoInput = nil;
    self.stillImageOutput = nil;
    self.previewLayer = nil;
    self.shutterButton = nil;
    self.tapGesture = nil;
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    self.tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(onViewClicked:)];
    [self.view addGestureRecognizer:self.tapGesture];
}

- (void)onViewClicked:(id)sender {
    [self swapFrontAndBackCameras];
}

// Find the camera at the given position
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}

// Switch between the front and back cameras
- (void)swapFrontAndBackCameras {
    // Assume the session is already running
    NSArray *inputs = self.session.inputs;
    for (AVCaptureDeviceInput *input in inputs) {
        AVCaptureDevice *device = input.device;
        if ([device hasMediaType:AVMediaTypeVideo]) {
            AVCaptureDevicePosition position = device.position;
            AVCaptureDevice *newCamera = nil;
            AVCaptureDeviceInput *newInput = nil;
            if (position == AVCaptureDevicePositionFront) {
                newCamera = [self cameraWithPosition:AVCaptureDevicePositionBack];
            } else {
                newCamera = [self cameraWithPosition:AVCaptureDevicePositionFront];
            }
            newInput = [AVCaptureDeviceInput deviceInputWithDevice:newCamera error:nil];
            // beginConfiguration ensures that pending changes are not applied immediately
            [self.session beginConfiguration];
            [self.session removeInput:input];
            [self.session addInput:newInput];
            // Changes take effect once the outermost commitConfiguration is invoked.
            [self.session commitConfiguration];
            break;
        }
    }
}

- (AVCaptureSession *)session {
    if (!_session) {
        // 1. Get the input device (the camera)
        AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        // 2. Create the input object from the device
        self.videoInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
        if (self.videoInput == nil) {
            return nil;
        }
        // 3. Create the output object
        self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
        // Output settings: AVVideoCodecJPEG makes the captured image a JPEG
        NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
        [self.stillImageOutput setOutputSettings:outputSettings];
        // 4. Create the session (the bridge between input and output)
        AVCaptureSession *session = [[AVCaptureSession alloc] init];
        // High-quality capture; AVCaptureSessionPresetHigh is the default, so this line is optional
        [session setSessionPreset:AVCaptureSessionPresetHigh];
        // 5. Add the input and output to the session (checking that the session accepts them)
        if ([session canAddInput:self.videoInput]) {
            [session addInput:self.videoInput];
        }
        if ([session canAddOutput:self.stillImageOutput]) {
            [session addOutput:self.stillImageOutput];
        }
        // 6. Create the preview layer
        self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
        [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
        self.previewLayer.frame = [UIScreen mainScreen].bounds;
        [self.view.layer insertSublayer:self.previewLayer atIndex:0];
        _session = session;
    }
    return _session;
}

// Take a photo
- (void)shutterCamera {
    AVCaptureConnection *videoConnection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    if (!videoConnection) {
        NSLog(@"take photo failed!");
        return;
    }
    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer == NULL) {
            return;
        }
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *imagevv = [UIImage imageWithData:imageData];
        NSLog(@"\nGot the captured image");
        imagevv = [self fixOrientation:imagevv];
        // Does the picture contain a face? Returns a cropped copy, or nil if no face was found.
        imagevv = [self judgeInPictureContainImage:imagevv];
        if (imagevv) {
            imageData = UIImageJPEGRepresentation(imagevv, 0.5);
            NSString *sandboxPath = NSHomeDirectory();
            NSString *documentPath = [sandboxPath stringByAppendingPathComponent:@"Documents"];
            NSString *filePath = [documentPath stringByAppendingPathComponent:@"headerImgData.jpg"];
            NSFileManager *fileManager = [NSFileManager defaultManager];
            // Write the data to a file; the attributes dictionary (nil here) can specify file attributes
            [fileManager createFileAtPath:filePath contents:imageData attributes:nil];
        }
    }];
}

- (UIImage *)fixOrientation:(UIImage *)aImage {
    // No-op if the orientation is already correct
    if (aImage.imageOrientation == UIImageOrientationUp) {
        return aImage;
    }
    // We need to calculate the proper transformation to make the image upright.
    // We do it in 2 steps: rotate if Left/Right/Down, and then flip if Mirrored.
    CGAffineTransform transform = CGAffineTransformIdentity;
    switch (aImage.imageOrientation) {
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, aImage.size.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, 0, aImage.size.height);
            transform = CGAffineTransformRotate(transform, -M_PI_2);
            break;
        default:
            break;
    }
    switch (aImage.imageOrientation) {
        case UIImageOrientationUpMirrored:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.height, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        default:
            break;
    }
    // Now we draw the underlying CGImage into a new context, applying the transform calculated above.
    CGContextRef ctx = CGBitmapContextCreate(NULL, aImage.size.width, aImage.size.height,
                                             CGImageGetBitsPerComponent(aImage.CGImage), 0,
                                             CGImageGetColorSpace(aImage.CGImage),
                                             CGImageGetBitmapInfo(aImage.CGImage));
    CGContextConcatCTM(ctx, transform);
    switch (aImage.imageOrientation) {
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            // The rotated cases swap width and height
            CGContextDrawImage(ctx, CGRectMake(0, 0, aImage.size.height, aImage.size.width), aImage.CGImage);
            break;
        default:
            CGContextDrawImage(ctx, CGRectMake(0, 0, aImage.size.width, aImage.size.height), aImage.CGImage);
            break;
    }
    // And now we just create a new UIImage from the drawing context
    CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
    UIImage *img = [UIImage imageWithCGImage:cgimg];
    CGContextRelease(ctx);
    CGImageRelease(cgimg);
    return img;
}

// Check whether the picture contains a face; if so, return a centered square crop, otherwise nil
- (UIImage *)judgeInPictureContainImage:(UIImage *)thePicture {
    UIImage *newImg;
    UIImage *aImage = thePicture;
    CIImage *image = [CIImage imageWithCGImage:aImage.CGImage];
    NSDictionary *opts = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:opts];
    // Get the face features
    NSArray *features = [detector featuresInImage:image];
    for (CIFaceFeature *f in features) {
        CGRect aRect = f.bounds;
        NSLog(@"%f, %f, %f, %f", aRect.origin.x, aRect.origin.y, aRect.size.width, aRect.size.height);
        // Crop a vertically centered region that keeps the full width, with a 320:320 (1:1) aspect ratio
        CGRect newRect = CGRectMake(0, 0, aImage.size.width, aImage.size.height);
        float blFloat = 320 / 320.0;    // target width-to-height ratio
        newRect.size.width = aImage.size.width;
        float heiFloat = aImage.size.width / blFloat;
        newRect.size.height = heiFloat;
        float zFloat = (aImage.size.height - newRect.size.height) / 2.0;
        newRect.origin.y = zFloat;
        newImg = [self imageFromImage:aImage inRect:newRect];
        // Positions of the eyes and mouth
        if (f.hasLeftEyePosition)  NSLog(@"Left eye %g %g\n", f.leftEyePosition.x, f.leftEyePosition.y);
        if (f.hasRightEyePosition) NSLog(@"Right eye %g %g\n", f.rightEyePosition.x, f.rightEyePosition.y);
        if (f.hasMouthPosition)    NSLog(@"Mouth %g %g\n", f.mouthPosition.x, f.mouthPosition.y);
    }
    // Re-run detection on the cropped image; discard it if the face was cropped away
    if (![self judgeChangePictureHaveFace:newImg]) {
        newImg = nil;
    }
    return newImg;
}

- (BOOL)judgeChangePictureHaveFace:(UIImage *)thePicture {
    // Guard against nil (no face was found in the first pass, so no crop was produced)
    if (!thePicture) {
        return NO;
    }
    CIImage *image = [CIImage imageWithCGImage:thePicture.CGImage];
    NSDictionary *opts = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:opts];
    // Get the face features; any hit means the crop still contains a face
    NSArray *features = [detector featuresInImage:image];
    return features.count > 0;
}

// Crop an image to the given rect
- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    return newImage;
}

@end
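The crop performed in judgeInPictureContainImage keeps the photo's full width and cuts a vertically centered band whose aspect ratio is 320/320, i.e. a square. The rect arithmetic can be sketched in a few lines (Python here just for illustration; the function name and defaults are hypothetical, not part of the original code):

```python
def centered_crop_rect(width, height, aspect=320 / 320.0):
    """Mirror of the crop-rect math above: keep the full image width,
    set the crop height to width / aspect (a square when aspect == 1),
    and center the crop vertically."""
    crop_w = width
    crop_h = width / aspect
    origin_y = (height - crop_h) / 2.0
    return (0, origin_y, crop_w, crop_h)

# A 1080x1920 portrait photo yields a 1080x1080 square starting at y = 420.
print(centered_crop_rect(1080, 1920))  # → (0, 420.0, 1080, 1080.0)
```

Note that this (like the original Objective-C) assumes a portrait photo: if the image is wider than tall, origin_y goes negative and the rect extends outside the image bounds.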

Screenshots:



That concludes this article on face detection after taking a photo on iOS; I hope it is helpful to fellow developers!


