This article shows how to use AVCaptureSession to capture camera video and get at the frame buffers (useful for building a custom camera).
I've realized how lazy I am; I haven't felt like writing for a long time (mostly because I don't know what to write about: topics that are too hard I can't handle, and the simple ones don't seem worth covering 😔).
The original motivation for this post was to get the camera's video buffer and blend it with a local video's buffer through OpenGL. It's a companion to the previous post: AVPlayer实现播放视频和AVFoundation获取视频的buffer (playing video with AVPlayer and getting video buffers with AVFoundation).
As usual, the result first:
[Demo GIF: 效果图.gif]
1. Create the session
// 1 Create the session
_captureSession = [[AVCaptureSession alloc] init];
// Set the video quality; choose the preset that fits your needs.
_captureSession.sessionPreset = AVCaptureSessionPreset640x480;
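Not every preset is available on every device; if in doubt, you can guard the assignment with canSetSessionPreset: — a minimal sketch (preferring 1280x720 here is just an example):

// Fall back if the preferred preset isn't supported on this device.
if ([_captureSession canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    _captureSession.sessionPreset = AVCaptureSessionPreset1280x720;
} else {
    _captureSession.sessionPreset = AVCaptureSessionPreset640x480;
}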
2. Get the device's camera
// 2 Get the capture device. The default-device call grabs the back camera;
// the helper below lets you pick a specific position instead.
// _device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
_device = [self getCameraDeviceWithPosition:AVCaptureDevicePositionBack];
/**
 Get the camera for a given position.
 @param position the camera position (front/back)
 @return the matching capture device, or nil if none is found
 */
- (AVCaptureDevice *)getCameraDeviceWithPosition:(AVCaptureDevicePosition)position {
    NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *camera in cameras) {
        if ([camera position] == position) {
            return camera;
        }
    }
    return nil;
}
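With this helper in place, switching cameras at runtime is straightforward. A sketch of how that might look (switchCamera is a hypothetical method, not part of the original code):

// Hypothetical: swap the session's input between the front and back cameras.
// begin/commitConfiguration makes the session apply the change atomically.
- (void)switchCamera {
    AVCaptureDevicePosition newPosition = (_device.position == AVCaptureDevicePositionBack)
        ? AVCaptureDevicePositionFront : AVCaptureDevicePositionBack;
    AVCaptureDevice *newDevice = [self getCameraDeviceWithPosition:newPosition];
    if (!newDevice) return;
    NSError *error = nil;
    AVCaptureDeviceInput *newInput = [[AVCaptureDeviceInput alloc] initWithDevice:newDevice error:&error];
    if (!newInput) return;
    [_captureSession beginConfiguration];
    for (AVCaptureInput *oldInput in _captureSession.inputs) {
        [_captureSession removeInput:oldInput];
    }
    if ([_captureSession canAddInput:newInput]) {
        [_captureSession addInput:newInput];
        _device = newDevice;
    }
    [_captureSession commitConfiguration];
}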
3. Create and add the input
// 3 Create the input
NSError *deviceError = nil;
AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:_device error:&deviceError];
// 4 Add the input
if ([_captureSession canAddInput:input]) {
    [_captureSession addInput:input];
} else {
    NSLog(@"Failed to create the input: %@", deviceError);
    return;
}
4. Add the video output
// 5 Create the video data output and hand it a serial queue
// for the sample-buffer delegate callbacks.
dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
[videoOut setSampleBufferDelegate:self queue:queue];
videoOut.alwaysDiscardsLateVideoFrames = NO;
5. Configure the video format
Configure this to suit your needs; the video settings are passed as a dictionary.
// 6 Video format settings (commented out here, so the output delivers
// frames in its default pixel format)
// [videoOut setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)}];
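As an example, if you would rather receive BGRA frames (often easier to feed into Core Graphics, or to upload as a single OpenGL texture, than bi-planar YUV), the dictionary looks like this; a sketch under that assumption:

// Ask the output for 32-bit BGRA pixel buffers instead of bi-planar YUV.
[videoOut setVideoSettings:@{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
}];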
6. Add the output and show the preview
_prevLayer is kept as a property so its frame can be adjusted later. I set up the camera in viewDidLoad, where the view's frame may not be final yet, so the frame is assigned in viewDidLayoutSubviews (see below).
// 7 Add the output
if ([_captureSession canAddOutput:videoOut]) {
    [_captureSession addOutput:videoOut];
}

// self.mGLView.isFullYUVRange = YES;
AVCaptureConnection *connection = [videoOut connectionWithMediaType:AVMediaTypeVideo];
[connection setVideoOrientation:AVCaptureVideoOrientationPortraitUpsideDown];
[_captureSession startRunning];

// 8 The preview layer
_prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:_captureSession];
// _prevLayer.frame = self.view.bounds;
_prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:_prevLayer];
Don't forget to assign prevLayer's frame in viewDidLayoutSubviews:
- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    _prevLayer.frame = self.view.bounds;
}
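The session is started in step 6 but never stopped in the code above; assuming a plain view-controller lifecycle, the counterpart might look like this sketch:

// Stop the capture session when the view goes away to release the camera.
- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    if (_captureSession.isRunning) {
        [_captureSession stopRunning];
    }
}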
Implement the AVCaptureVideoDataOutputSampleBufferDelegate callback to get each frame's buffer for further processing:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // int width1 = (int)CVPixelBufferGetWidth(pixelBuffer);
    // int height1 = (int)CVPixelBufferGetHeight(pixelBuffer);
    // NSLog(@"video width: %d height: %d", width1, height1);

    // A single frame can be converted to an image and saved, much like taking
    // a photo; frames captured over a stretch of time can be saved as a video;
    // or you can run beauty filters and other processing on each frame here.
    // NSLog(@"Got the video sampleBuffer here; process it further (e.g. encode to H.264)");
    // [self.mGLView displayPixelBuffer:pixelBuffer];
}
That wraps up getting camera video and buffers with AVCaptureSession (for a custom camera); I hope it helps.