Tamura texture features

2023-10-23 19:50
Tags: features, texture, tamura

This post gives an introduction to Tamura texture features, in the hope that it provides a useful reference for developers working on related problems.

 

 

The figure above is a schematic diagram of the grey-level co-occurrence matrix. The underlying principle is described in detail in the related papers (see the linked reference).

Based on psychological studies of human visual perception of texture, Tamura et al. proposed a set of texture descriptors [14]. The six components of the Tamura texture features correspond to six psychologically meaningful attributes of texture: coarseness, contrast, directionality, linelikeness, regularity and roughness. Of these, the first three components are especially important for image retrieval.


Tamura's Texture Features

Most of today's CBIR systems use the set of six visual features, namely,

  • coarseness,
  • contrast,
  • directionality,
  • linelikeness,
  • regularity,
  • roughness
selected by Tamura, Mori, and Yamawaki (Tamura et al., 1977; Castelli & Bergman, 2002) on the basis of psychological experiments.

Coarseness relates to distances of notable spatial variations of grey levels, that is, implicitly, to the size of the primitive elements (texels) forming the texture. The proposed computational procedure accounts for differences between the average signals for the non-overlapping windows of different size:

  1. At each pixel (x,y), compute six averages for the windows of size 2^k × 2^k, k = 0, 1, ..., 5, around the pixel.
  2. At each pixel, compute the absolute differences Ek(x,y) between the pairs of non-overlapping averages in the horizontal and vertical directions.
  3. At each pixel, find the value of k that maximises the difference Ek(x,y) in either direction and set the best size Sbest(x,y) = 2^k.
  4. Compute the coarseness feature Fcrs by averaging Sbest(x,y) over the entire image.

Instead of the average of Sbest(x,y), an improved coarseness feature that deals with textures having multiple coarseness properties is a histogram characterising the whole distribution of the best sizes over the image (Castelli & Bergman, 2002).
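The four-step procedure maps directly onto array operations. The following is a minimal sketch in Python/NumPy, not a reference implementation: it assumes a greyscale image given as a 2-D float array, uses scipy.ndimage.uniform_filter with reflective borders for the window averages, wraps at the image borders when taking differences, and treats the k = 0 case as a plain pixel difference.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def coarseness(gray, kmax=5):
        """Tamura coarseness Fcrs for a 2-D grey-level image (sketch)."""
        gray = gray.astype(np.float64)
        h, w = gray.shape
        # Step 1: averages over 2^k x 2^k windows centred at each pixel, k = 0..kmax.
        A = [uniform_filter(gray, size=2 ** k, mode='reflect') for k in range(kmax + 1)]
        E = np.zeros((kmax + 1, h, w))
        for k in range(kmax + 1):
            d = 2 ** (k - 1) if k > 0 else 1   # half window size (k = 0 handled as a plain pixel step)
            # Step 2: absolute differences between the averages on opposite sides of the
            # pixel, horizontally and vertically (np.roll wraps at the borders - a simplification).
            Eh = np.abs(np.roll(A[k], -d, axis=1) - np.roll(A[k], d, axis=1))
            Ev = np.abs(np.roll(A[k], -d, axis=0) - np.roll(A[k], d, axis=0))
            E[k] = np.maximum(Eh, Ev)
        # Step 3: best window size 2^k for the k that maximises E_k at each pixel.
        S_best = 2.0 ** np.argmax(E, axis=0)
        # Step 4: coarseness is the mean best size over the image.
        return S_best.mean()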

Contrast measures how the grey levels q, q = 0, 1, ..., qmax, vary in the image g and to what extent their distribution is biased towards black or white. The second-order and the normalised fourth-order central moments of the grey-level histogram (the empirical probability distribution), that is, the variance σ^2 and the kurtosis α4, are used to define the contrast:

Fcon = σ / (α4)^n

where σ^2 = Σq (q − m)^2 p(q), μ4 = Σq (q − m)^4 p(q), α4 = μ4 / σ^4, and m is the mean grey level, i.e. the first-order moment of the grey-level probability distribution p(q). The value n = 0.25 is recommended as the best for discriminating the textures.
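This definition translates into a few lines of NumPy; the sketch below computes the moments directly from the pixel values rather than from an explicit histogram, which gives the same result.

    import numpy as np

    def contrast(gray, n=0.25):
        """Tamura contrast Fcon = sigma / alpha4^n with n = 0.25 (sketch)."""
        g = gray.astype(np.float64).ravel()
        m = g.mean()                      # mean grey level (first-order moment)
        sigma2 = ((g - m) ** 2).mean()    # variance (second central moment)
        mu4 = ((g - m) ** 4).mean()       # fourth central moment
        if sigma2 == 0:
            return 0.0                    # flat image: no contrast
        alpha4 = mu4 / sigma2 ** 2        # kurtosis
        return np.sqrt(sigma2) / alpha4 ** n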

Degree of directionality is measured using the frequency distribution of oriented local edges against their directional angles. The edge strength e(x,y) and the directional angle a(x,y) are computed using the Sobel edge detector approximating the pixel-wise x- and y-derivatives of the image:

e(x,y) = 0.5 (|Δx(x,y)| + |Δy(x,y)|)
a(x,y) = tan⁻¹(Δy(x,y) / Δx(x,y))

where Δx(x,y) and Δy(x,y) are the horizontal and vertical grey level differences between the neighbouring pixels, respectively. The differences are measured using the following 3 × 3 moving window operators:

−1  0  1        1  1  1
−1  0  1        0  0  0
−1  0  1       −1 −1 −1

A histogram Hdir(a) of quantised direction values a is constructed by counting the edge pixels whose directional angle falls in each bin and whose edge strength exceeds a predefined threshold. The histogram is relatively uniform for images without strong orientation and exhibits peaks for highly directional images. The degree of directionality relates to the sharpness of the peaks:

Fdir = 1 − r·np · Σp=1..np Σa∈wp (a − ap)^2 Hdir(a)

where np is the number of peaks, ap is the position of the pth peak, wp is the range of the angles attributed to the pth peak (that is, the range between valleys around the peak), r denotes a normalising factor related to the quantisation levels of the angles a, and a is the quantised directional angle (taken cyclically modulo 180°).
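A compact sketch of this computation follows, again in Python/NumPy. It applies the 3 × 3 difference masks shown above with scipy.ndimage.convolve, folds the angles (in radians) into [0, π), and, as a simplifying assumption, treats the histogram as having a single dominant peak (np = 1); the bin count, edge-strength threshold and normalising factor r are illustrative values, not canonical ones.

    import numpy as np
    from scipy.ndimage import convolve

    def directionality(gray, n_bins=16, threshold=12.0, r=1.0):
        """Tamura directionality Fdir under a single-peak assumption (sketch)."""
        g = gray.astype(np.float64)
        kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float64)
        ky = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=np.float64)
        dx = convolve(g, kx, mode='reflect')
        dy = convolve(g, ky, mode='reflect')
        e = 0.5 * (np.abs(dx) + np.abs(dy))       # edge strength
        a = np.mod(np.arctan2(dy, dx), np.pi)     # angle folded into [0, pi)
        # Histogram of quantised directions over sufficiently strong edges.
        strong = e > threshold
        hist, edges = np.histogram(a[strong], bins=n_bins, range=(0.0, np.pi))
        H = hist / max(hist.sum(), 1)
        centres = 0.5 * (edges[:-1] + edges[1:])
        a_p = centres[np.argmax(H)]               # position of the (single) peak
        # Single peak: np = 1, so the np factor is dropped; sharper peak -> Fdir closer to 1.
        return 1.0 - r * np.sum(((centres - a_p) ** 2) * H)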

Three other features are highly correlated with the above three features and do not add much to the effectiveness of the texture description. The linelikeness feature Flin is defined as the average coincidence of the edge directions (more precisely, coded directional angles) that co-occur in pairs of pixels separated by a distance d along the edge direction at every pixel. The edge strength is expected to be greater than a given threshold, eliminating trivial "weak" edges. The coincidence is measured by the cosine of the difference between the angles, so that co-occurrences in the same direction are measured by +1 and those in perpendicular directions by −1.
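A rough sketch of Flin under the same assumptions: edge directions are quantised into n bins, and for each sufficiently strong edge pixel the co-occurring direction is sampled at a distance d along that pixel's own edge direction. The values of d, the threshold and the bin count are illustrative, and the edge operators are the same as in the directionality sketch.

    import numpy as np
    from scipy.ndimage import convolve

    def linelikeness(gray, n_bins=16, d=4, threshold=12.0):
        """Tamura linelikeness Flin from a direction co-occurrence matrix (sketch)."""
        g = gray.astype(np.float64)
        kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float64)
        ky = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=np.float64)
        dx = convolve(g, kx, mode='reflect')
        dy = convolve(g, ky, mode='reflect')
        e = 0.5 * (np.abs(dx) + np.abs(dy))
        a = np.mod(np.arctan2(dy, dx), np.pi)
        codes = np.minimum((a / np.pi * n_bins).astype(int), n_bins - 1)
        h, w = g.shape
        P = np.zeros((n_bins, n_bins))            # direction co-occurrence matrix
        ys, xs = np.nonzero(e > threshold)
        for y, x in zip(ys, xs):
            # Neighbour at distance d along this pixel's edge direction.
            yy = int(round(y + d * np.sin(a[y, x])))
            xx = int(round(x + d * np.cos(a[y, x])))
            if 0 <= yy < h and 0 <= xx < w and e[yy, xx] > threshold:
                P[codes[y, x], codes[yy, xx]] += 1
        if P.sum() == 0:
            return 0.0
        i, j = np.meshgrid(np.arange(n_bins), np.arange(n_bins), indexing='ij')
        # cos of the angle difference: +1 for the same direction, -1 for perpendicular ones.
        return np.sum(P * np.cos((i - j) * 2.0 * np.pi / n_bins)) / P.sum()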

The regularity feature is defined as Freg = 1 − r(σcrs + σcon + σdir + σlin), where r is a normalising factor and each σ… is the standard deviation of the corresponding feature F… computed over the subimages into which the texture is partitioned. The roughness feature is obtained by simply summing the coarseness and contrast measures: Frgh = Fcrs + Fcon.
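Assuming the per-feature functions sketched above (coarseness, contrast, directionality, linelikeness) are in scope, Freg and Frgh can be put together as below; the tile size and the normalising factor r are arbitrary illustrative choices, and in practice the four standard deviations are usually normalised before being summed.

    import numpy as np

    def regularity_and_roughness(gray, tile=64, r=0.25):
        """Freg and Frgh from per-tile feature statistics (sketch; image >= tile x tile)."""
        feats = []
        for y in range(0, gray.shape[0] - tile + 1, tile):
            for x in range(0, gray.shape[1] - tile + 1, tile):
                sub = gray[y:y + tile, x:x + tile]
                feats.append([coarseness(sub), contrast(sub),
                              directionality(sub), linelikeness(sub)])
        feats = np.asarray(feats)
        s_crs, s_con, s_dir, s_lin = feats.std(axis=0)   # per-feature std over subimages
        F_reg = 1.0 - r * (s_crs + s_con + s_dir + s_lin)
        F_rgh = coarseness(gray) + contrast(gray)        # roughness = Fcrs + Fcon
        return F_reg, F_rgh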

In most cases, only the first three of Tamura's features are used for CBIR. These features capture the high-level perceptual attributes of a texture well and are useful for image browsing. However, they are not very effective for finer texture discrimination (Castelli & Bergman, 2002).



This concludes the introduction to Tamura texture features; I hope it is helpful.



http://www.chinasem.cn/article/270102
