A Real-Time Fire Detection Method from Video with Multifeature Fusion (Paper Translation)

This article introduces "A Real-Time Fire Detection Method from Video with Multifeature Fusion" (paper translation). We hope it provides a useful reference for developers interested in this problem; let's work through it together.


Original English paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6664547/pdf/CIN2019-1939171.pdf


A Real-Time Fire Detection Method from Video with Multifeature Fusion
Faming Gong, Chuantao Li, Wenjuan Gong, Xin Li, Xiangbing Yuan, Yuhui Ma & Tao Song

  • Department of Computer and Communication Engineering, China University of Petroleum, Qingdao 266580, China
  • China Petroleum and Chemical Corporation Shengli Oilfield Branch Ocean Oil Production Plant, Dongying, Shandong, China
  • Department of Artificial Intelligence, Faculty of Computer Science, Polytechnical University of Madrid, Campus de Montegancedo, Boadilla del Monte, 28660 Madrid, Spain

Abstract

The threat to people’s lives and property posed by fires has become increasingly serious. To address the problem of a high false alarm rate in traditional fire detection, an innovative detection method based on multifeature fusion of flame is proposed. First, we combined the motion detection and color detection of the flame as the fire preprocessing stage. This method saves a lot of computation time in screening the fire candidate pixels. Second, although the flame is irregular, it has a certain similarity in the sequence of the image. According to this feature, a novel algorithm of flame centroid stabilization based on spatiotemporal relation is proposed, and we calculated the centroid of the flame region of each frame of the image and added the temporal information to obtain the spatiotemporal information of the flame centroid. Then, we extracted features including spatial variability, shape variability, and area variability of the flame to improve the accuracy of recognition. Finally, we used a support vector machine for training, completed the analysis of candidate fire images, and achieved automatic fire monitoring. Experimental results showed that the proposed method could improve the accuracy and reduce the false alarm rate compared with a state-of-the-art technique. The method can be applied to real-time camera monitoring systems, such as home security, forest fire alarms, and commercial monitoring.

1. Introduction

With the rapid spread of urbanization in the world, both the number of permanent residents in cities and the population density are increasing. When a fire occurs, it seriously threatens people’s lives and causes major economic losses. According to incomplete statistics, there were 312,000 fires in the country in 2016, with 1,582 people killed and 1,065 injured, and a direct property loss of 3.72 billion dollars [1, 2]. In March and April of 2019, there were many large-scale fire accidents around the world, such as forest fires in Liangshan, China, the Notre Dame fire in France, forest fires in Italy, and the grassland fire in Russia, which caused great damage to people’s lives and property. Therefore, fire detection is vitally important to protecting people’s lives and property. The current detection methods in cities rely on various sensors for detection [3–6], including smoke alarms, temperature alarms, and infrared ray alarms. Although these alarms can play a role, they have major flaws. First, a certain concentration of particles in the air must be reached to trigger an alarm. When an alarm is triggered, a fire may already be too strong to control, defeating the purpose of early warning. Second, most of the alarms can only be functional in a closed environment, which is ineffective for a wide space, such as outdoors or public spaces. Third, there may be false alarms. When the nonfire particle concentration reaches the alarm concentration, it will automatically sound the alarm. Human beings cannot intervene and get the latest information in time.

To prevent fires and hinder their rapid growth, it is necessary to establish a monitoring system that can detect early fires. Establishing a camera-based automatic fire monitoring algorithm can achieve 24/7 automatic monitoring without interruption, which greatly reduces labor costs, and the rapid spread of urban monitoring systems provides the groundwork for camera-based fire detection [7]. Greatly reducing the cost increases the economic feasibility of such systems.

This paper proposes a fire detection method based on multiple features of flame, which first combines frame difference detection [8], used to screen out nonmoving pixels, with an RGB color model [9], used to screen out nonfire-colored pixels. In the preprocessing module, the frame difference detection operates quickly, does not involve complex calculations, has low environmental requirements, and does not need to consider the time of day, weather, or other factors. In addition, the adopted RGB/HSI color model is relatively stable. Then, taking into account the spatial variability, area variability, boundary complexity, and shape variability, the characteristics of the flame are determined using the number of flame pixels, the convex hull [10], and the centroid. Finally, a mature support vector machine is used for verification.

The camera-based fire monitoring system can monitor the specified area in real time through video processing. When a fire is detected based on the video, it will send a captured alarm image to the administrator. The administrator makes a final confirmation based on the submitted alarm image. For example, when an accident occurs on a highway and causes a fire, based on the image transmitted by the detection algorithm, one can immediately rescue the victims, saving precious time and minimizing damage.

The main contributions of this paper are as follows: (1) We combine motion detection based on frame difference with color detection based on the RGB/HSI model. Color detection is applied only to the motion regions produced by the motion detection phase. Our method improves the precision and reduces redundant calculation. In addition, we have improved the frame difference method. (2) Based on the spatial correlation between consecutive image frames, we have improved the traditional methods that detect fire from a single image frame. Temporal information is combined with the flame features through a spatiotemporal flame centroid stability-based detection method. At the same time, we combine the data obtained during the fire preprocessing phase to reduce computational redundancy and computational complexity. (3) We extracted various flame features: spatial variability, shape variability, and area variability. We used a support vector machine for training, completed the final verification, reduced the false negative and false positive rates, and improved the accuracy.
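As a rough illustration of contribution (3) only (our sketch, not the authors' code), a support vector machine can be trained on three-dimensional feature vectors of spatial, shape, and area variability. The random numbers below are placeholders standing in for features that would be extracted from real fire and non-fire videos, and the kernel settings are assumptions.

```python
# Sketch of the final classification stage with placeholder feature vectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder features: (spatial variability, shape variability, area variability)
X_fire = rng.normal(loc=1.0, scale=0.3, size=(100, 3))
X_nonfire = rng.normal(loc=0.0, scale=0.3, size=(100, 3))
X = np.vstack([X_fire, X_nonfire])
y = np.array([1] * 100 + [0] * 100)            # 1 = fire, 0 = non-fire

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # assumed kernel settings
clf.fit(X, y)

# At run time, the feature vector of each candidate region is classified.
candidate = [[0.9, 1.1, 0.8]]
print(clf.predict(candidate))                  # expected: [1] (fire)
```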

2. Related Works

For fire detection, the traditional method is to use a sensor for detection. One of the defects is the high error rate, because the alarm is triggered based on the concentration of particles or the surrounding temperature and is therefore easily disturbed by the surroundings. At the same time, this method cannot determine the location or the real-time status of the fire. For outdoor scenes, which are notable fire hazards, this type of sensor cannot provide effective detection. Due to the many problems in traditional fire identification, how to accurately identify fire has received great attention. Therefore, fire detection has achieved rapid development in the direction of fire detection sensors, improvement of detection equipment, and fire detection based on video images.

Khan et al. [11] proposed a video-based method using flame dynamics and static indoor flame detection, using the color, perimeter, area, and roundness of the flame. Their method treats a small fire such as a candle as an unimportant part; by removing it and then judging by the flame growth characteristics, this method may have significant problems in early fire warning. Seebamrungsat et al. [12] proposed rules based on the combination of HSV and YCbCr. Their system requires extra color space conversions and is therefore better than using only one color space, but their work only uses the static characteristics of the flame; the method is relatively fragile and not stable enough. Foggia et al. [13] proposed a novel motion detection method based on the disordered nature of the flame, using a bag-of-words strategy. Chen and Huang [14] proposed a Gaussian model to simulate HSV and analyze the temporal and spatial factors of the flame, but the Gaussian mixture model requires longer calculation time and the analysis is fuzzier. Krüger et al. [6] used a hydrogen sensor. The above methods improve on traditional fire detection sensors, which improves the reliability of detection and reduces the detection sensitivity. However, the use of such sensors has limitations due to the different properties of combustion products. Burnett and Wing [15] used a new low-cost camera that can reduce the interference of smoke on the flame and has excellent detection capabilities in RGB and HSV, but there are still some limitations in the popularity and application of this camera. Töreyin et al. [16] proposed a Gaussian mixture background estimation method for detecting motion pixels in video. This method selects candidate fire regions by a color model and then performs wavelet analysis in the time and space domains to determine high-frequency activity in the region. Similar to the previous problem, this method has high computational complexity in practical applications. Han et al. [17] used motion detection based on the Gaussian model and a multicolor model and obtained good experimental results. However, since Gaussian models and color models require a large amount of computational time, they cannot be applied to actual scenes. Chen et al. [18] improved the traditional flame detection method, and a flame flickering detection algorithm is incorporated into the scheme to detect fires in color video sequences. Testing results show that the proposed algorithms are effective, robust, and efficient. However, the calculation speed is slow and suitable for 320×240 images; it may not be suitable for high-quality images. Dimitropoulos et al. and Çetin et al. [19, 20] used a variety of flame characteristics to judge and achieved good results. Hashemzadeh and Zademehdi [21] proposed a candidate fire pixel detection technique based on ICA K-medoids, which is a basis for practical applications. Savcı et al. [22] proposed a new Markov-based method. Kim et al. [23] used an ultraspectral camera to overcome the limitation of RGB cameras, which cannot distinguish between flame and general light sources (halogen, LED). According to the experimental results, they achieved good results, but there may be limitations, such as higher camera costs. Wu et al. [24] proposed a method based on the combination of radiation domain feature models. Giwa and Benkrid [25] proposed a new color-differentiating conversion matrix that is robust against false alarms. The technique proposed by Patel et al. [26] uses color cues and flame flicker to detect fire. At present, deep learning has become an active topic due to its high recognition accuracy in a wide range of applications. In the studies [27–29], a deep learning method is used for fire detection and high accuracy is achieved. We could use deep learning technology to solve the problems we encounter in the process of flame detection. However, there are certain limitations. For example, deep learning can achieve better accuracy with big data, but fires are rare events and there are few actual flame samples captured by cameras. Deep learning also requires higher-performance equipment and training that takes more time. For example, in the study by Alves et al. [30], the flame dataset is 800 images.

From what has been discussed above, fire detection based on video has been rapidly studied and developed, with multifield technology used to address the existing limitations of current methods, but some problems remain. Compared with the images used in experiments, camera images may not have rich color information and may therefore cause a higher false negative rate. If the algorithm involves fewer flame characteristics, a higher false positive rate may occur. Taking practicality into account, traditional fire detection needs to be optimized.

In this paper, fire preprocessing methods are presented in Section 3, including motion detection and color detection. The extraction method of the advanced features of the flame and the classification method based on the support vector machine are presented in Section 4. The experimental results are presented in Section 5. Finally, conclusions are drawn in the last section. The algorithm flowchart is shown in Figure 1.

3. Fire Detection Preprocessing

3.1. Flame Features. We extracted several flame characteristics. For static features, color is one of the most obvious features of flames, which are usually red. In general, using color profiles is an effective method for fire detection. As for dynamic features, the flame has abundant characteristics. First, the flame has no fixed shape. Second, there is disorder at the boundaries of the flame. Third, the fire location has a certain similarity in consecutive images. Based on the analysis of flame features, we have designed a fire detection module that takes multiple features of the flame into account.
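The exact RGB/HSI rules used by the authors appear later in the paper and are not given in this excerpt, so the sketch below only illustrates what a color-based flame test looks like, using a common heuristic from the fire detection literature (a strong red channel with R ≥ G > B); the red threshold of 120 is an assumed value.

```python
# Minimal sketch of an RGB fire-color test (heuristic rule, assumed threshold).
import numpy as np

def fire_color_mask(frame_bgr: np.ndarray, r_threshold: int = 120) -> np.ndarray:
    """Return a boolean mask of pixels whose color is flame-like (R high, R >= G > B)."""
    b = frame_bgr[..., 0].astype(int)
    g = frame_bgr[..., 1].astype(int)
    r = frame_bgr[..., 2].astype(int)
    return (r > r_threshold) & (r >= g) & (g > b)

# Example: a bright orange pixel passes the test, a gray pixel does not.
demo = np.array([[[0, 140, 255], [128, 128, 128]]], dtype=np.uint8)  # BGR pixels
print(fire_color_mask(demo))  # [[ True False]]
```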

3.2. Motion Detection Based on Improved Frame Difference Method. Due to the dynamic nature of fire, the shape of the flame is irregular and changes constantly. So when motion is used as a significant feature for fire detection, the usual detection methods are continuous frame differences [31], background subtraction [32], and mixed Gaussian background modeling [33]. Background subtraction needs a properly set background; because there is a large gap between day and night, it is in general hard to keep the background constant, and parameters must be set for it, which is more complicated than assuming a static background. The mixed Gaussian model is too complex and needs the history frames, the number of Gaussian mixtures, the background update rate, and the noise to be set at the preprocessing stage; therefore, this algorithm is not suitable for preprocessing. The advantages of the frame difference method are that it is simple to implement, the programming complexity is low, and it is not sensitive to scene changes such as lighting, so it can adapt to various dynamic environments with good stability. The disadvantage is that the complete area of the object cannot be extracted: there is an “empty hole” inside the object, and only the boundary can be extracted. Therefore, an improved frame difference method is adopted in this paper.
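As a concrete illustration of the comparison above (our snippet, not part of the paper's method), the classic mixed Gaussian background subtractor in OpenCV's contrib module exposes parameters closely matching those the authors list (history, number of Gaussian mixtures, background ratio, and noise), whereas a frame difference only needs the previous frame and one threshold. The parameter values shown are assumptions.

```python
# Illustrative comparison only; cv2.bgsegm requires opencv-contrib-python.
import cv2

# Mixed Gaussian background modeling: several parameters must be chosen up front.
mog = cv2.bgsegm.createBackgroundSubtractorMOG(
    history=200, nmixtures=5, backgroundRatio=0.7, noiseSigma=0
)

def gmm_motion_mask(frame):
    """Foreground mask from the mixed Gaussian background model."""
    return mog.apply(frame)

def frame_diff_mask(prev_gray, curr_gray, thresh=20):
    """Foreground mask from a simple frame difference with one threshold."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```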

Since the airflow and the properties of the combustion itself will cause a constant change in the flame pixels [34], nonfire pixel images can be removed by comparing two consecutive images. We use an improved frame difference algorithm. First, the video stream is converted into a frame image. Second, the image is processed in grayscale to convert three RGB channels into a single channel, which saves calculation time. Third, if the image is an initial frame, an initialization operation is performed. For the other frame images, the frame difference with the previous frame is used. The frame difference formula is shown in (1). Then, denoising operations are performed, and the denoising formula is as shown in (2). Fourth, the denoised image is dilated to enhance the connectivity of the region and reduce the computational complexity in the color detection phase. The formula for the dilation operation is as shown in (3). Fifth, regional optimization is done. Contour detection is performed on the dilated image. If the condition is not satisfied, the frame is removed, and the next frame of image is used for detection. Regions are merged based on the distance between rectangles. Finally, the merged coordinates of the region are passed to the flame color detection phase:

$$F_v(x,y) = \left| F_c(x,y) - F_p(x,y) \right| \tag{1}$$

where $F_c(x, y)$ is the current frame, as shown in Figure 2(b), $F_p(x, y)$ is the previous frame, as shown in Figure 2(a), and $F_v(x, y)$ is the difference between the current frame and the previous frame, as shown in Figure 2(c). We cannot directly use the image after the frame difference; we need to denoise and use the threshold to segment out the parts we need:

$$F_t(x,y) = \begin{cases} maxval & \text{if } F_v(x,y) > thresh \\ 0 & \text{otherwise} \end{cases} \tag{2}$$

where $F_v(x, y)$ is the difference between the current frame and the previous frame, which is assigned to the maximum value if its pixel is greater than the set threshold and otherwise set to zero. $thresh$ is the set threshold; here we set it to 20 to ensure that the motion pixels are not removed. $F_t(x, y)$ is the result of denoising, as shown in Figure 2(d). It can be seen from Figure 2(e) that there are certain “empty holes” in the frame difference detection result. If contour detection is applied directly to this binary image, it will produce a very complicated result, as shown in Figure 2(e), and performing the next detection step would require very complicated calculations.
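A minimal OpenCV sketch of the preprocessing pipeline described above is shown here as our own illustration, not the authors' code. The frame difference and thresholding follow (1) and (2) with thresh = 20 as stated in the text; the video filename, dilation kernel size, iteration count, and minimum contour area are assumed values, and the merging of nearby rectangles is only indicated by a comment.

```python
# Sketch of the preprocessing stage: grayscale -> frame difference (1) ->
# thresholding (2) -> dilation -> contour filtering. Assumed values are marked.
import cv2
import numpy as np

cap = cv2.VideoCapture("fire.mp4")              # hypothetical input video
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read the input video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kernel = np.ones((5, 5), np.uint8)              # assumed dilation kernel

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    diff = cv2.absdiff(gray, prev_gray)                          # Eq. (1)
    _, binary = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)  # Eq. (2), thresh = 20
    dilated = cv2.dilate(binary, kernel, iterations=2)           # fill "empty holes"

    # Contour detection: small regions are discarded; the surviving bounding
    # boxes (after merging nearby rectangles, omitted here) would be passed
    # on to the flame color detection stage.
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidate_boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]

    prev_gray = gray
```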


Figure 1: Flowchart of the algorithm. In the judgment statement after preprocessing, condition 1 indicates whether the object to be detected satisfies the requirements, that is, the basic features (color and dynamics) of the flame. In the judgment statement of step 2, condition 2 indicates whether the extracted feature vector is the correct category after calculation.

This concludes this article on "A Real-Time Fire Detection Method from Video with Multifeature Fusion" (paper translation); we hope it is helpful to programmers!



