AnswerOpenCV Picks of the Week (0615-0622)



1. How to make auto-adjustments (brightness and contrast) for an image (Android OpenCV image correction)

I'm using OpenCV for Android.
I would like to know how to do image correction (auto adjustment of brightness/contrast) for an image (bitmap) in Android via OpenCV, or can that be done with Android's native ColorMatrixFilter?

I tried to Google it, but didn't find good tutorials/examples.
So how can I achieve my goal? Any ideas?
Thanks!


This is an algorithm question: we need to find an algorithm that adjusts brightness and contrast automatically.

Answer:

Brightness and contrast adjustment is a linear operation with parameters alpha and beta:

O(x,y) = alpha * I(x,y) + beta

In OpenCV you can do this with convertTo.

The question here is how to calculate alpha and beta automatically.

Looking at the histogram, alpha acts as a color-range amplifier, and beta acts as a range shift.

Automatic brightness and contrast optimization calculates alpha and beta so that the output range is 0..255.

input range = max(I) - min(I)

wanted output range = 255

alpha = output range / input range = 255 / (max(I) - min(I))

min(O) = alpha * min(I) + beta = 0
beta = -min(I) * alpha

Histogram Wings Cut (clipping)

To maximize the effect of the stretch, it is useful to cut out the colors that only a few pixels have.

This is done by clipping the left and right wings of the histogram where the color frequency is less than some value (typically 1%). Computing the cumulative distribution from the histogram, you can easily find where to cut.

Maybe a sample chart helps to understand:

(sample chart omitted)


By the way, BrightnessAndContrastAuto could have been named normalizeHist, because it works on BGR and gray images, stretching the histogram to the full range without touching the balance between bins. If the input image already has the range 0..255, BrightnessAndContrastAuto will do nothing.

Histogram equalization and CLAHE work only on gray images, and they do change the balance between gray levels. Look at the images below:

Normalization vs Equalization

In other words, the following implements the color enhancement algorithm:
void BrightnessAndContrastAuto(const cv::Mat &src, cv::Mat &dst, float clipHistPercent)
{
    CV_Assert(clipHistPercent >= 0);
    CV_Assert((src.type() == CV_8UC1) || (src.type() == CV_8UC3) || (src.type() == CV_8UC4));
    int histSize = 256;
    float alpha, beta;
    double minGray = 0, maxGray = 0;

    // to calculate the grayscale histogram
    cv::Mat gray;
    if (src.type() == CV_8UC1) gray = src;
    else if (src.type() == CV_8UC3) cvtColor(src, gray, CV_BGR2GRAY);
    else if (src.type() == CV_8UC4) cvtColor(src, gray, CV_BGRA2GRAY);

    if (clipHistPercent == 0)
    {
        // keep full available range
        cv::minMaxLoc(gray, &minGray, &maxGray);
    }
    else
    {
        cv::Mat hist; // the grayscale histogram

        float range[] = { 0, 256 };
        const float *histRange = { range };
        bool uniform = true;
        bool accumulate = false;
        calcHist(&gray, 1, 0, cv::Mat(), hist, 1, &histSize, &histRange, uniform, accumulate);

        // calculate cumulative distribution from the histogram
        std::vector<float> accumulator(histSize);
        accumulator[0] = hist.at<float>(0);
        for (int i = 1; i < histSize; i++)
        {
            accumulator[i] = accumulator[i - 1] + hist.at<float>(i);
        }

        // locate points that cut at the required value
        float max = accumulator.back();
        clipHistPercent *= (max / 100.0); // make percent an absolute count
        clipHistPercent /= 2.0;           // left and right wings

        // locate left cut
        minGray = 0;
        while (accumulator[(int)minGray] < clipHistPercent)
            minGray++;

        // locate right cut
        maxGray = histSize - 1;
        while (accumulator[(int)maxGray] >= (max - clipHistPercent))
            maxGray--;
    }

    // current range
    float inputRange = maxGray - minGray;

    alpha = (histSize - 1) / inputRange; // alpha expands current range to histSize range
    beta = -minGray * alpha;             // beta shifts current range so that minGray goes to 0

    // Apply brightness and contrast normalization
    // convertTo operates with saturate_cast
    src.convertTo(dst, -1, alpha, beta);

    // restore alpha channel from source
    if (dst.type() == CV_8UC4)
    {
        int from_to[] = { 3, 3 };
        cv::mixChannels(&src, 1, &dst, 1, from_to, 1); // one source and one destination Mat
    }
    return;
}

The result is quite good.
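
For reference, a minimal usage sketch of the function above (the file name "test.jpg" and the 1% clip value are illustrative assumptions, not from the original post):

#include <opencv2/opencv.hpp>

// BrightnessAndContrastAuto as defined above is assumed to be in scope.
int main()
{
    cv::Mat src = cv::imread("test.jpg");      // hypothetical input image
    if (src.empty()) return -1;

    cv::Mat dst;
    BrightnessAndContrastAuto(src, dst, 1.0f); // clip 1% of the histogram wings

    cv::imshow("original", src);
    cv::imshow("auto adjusted", dst);
    cv::waitKey(0);
    return 0;
}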

2. Template matching behavior - Color

I am evaluating the template matching algorithm to differentiate between similar and dissimilar objects. What I found is confusing: I had the impression that template matching is a method which compares raw pixel intensity values, so when the pixel values vary I expected template matching to give a lower match percentage.

I have a template and a search image with the same shape and size, differing only in color (images attached). When I did template matching, surprisingly I got a match percentage greater than 90%.

import cv2

img = cv2.imread('./images/searchtest.png', cv2.IMREAD_COLOR)
template = cv2.imread('./images/template.png', cv2.IMREAD_COLOR)

res = cv2.matchTemplate(img, template, cv2.TM_CCORR_NORMED)

min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
print(max_val)

Template image: (image omitted)

Search image: (image omitted)

Can someone give me an insight into why this is happening? I have even tried this in HSV color space, on the full BGR image, the full HSV image, the individual B, G, R channels and the individual H, S, V channels. In all cases I am getting a good percentage.

Any help would be really appreciated.



 
The core of this question is color template matching, which is an interesting problem. The answer directly points to source code:
https://github.com/LaurentBerger/ColorMatchTemplate
My own take: color template matching is of limited value; when the goal is localization, grayscale usually works better.
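
As for why TM_CCORR_NORMED reports such a high score: normalized cross-correlation does not subtract the patch means, so two patches that are both mostly bright correlate strongly regardless of hue. TM_CCOEFF_NORMED removes the means first and is usually much more discriminative for color-only differences. A small sketch to check this (the file names are illustrative assumptions standing in for the poster's images):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::Mat img = cv::imread("searchtest.png", cv::IMREAD_COLOR);
    cv::Mat tmpl = cv::imread("template.png", cv::IMREAD_COLOR);
    if (img.empty() || tmpl.empty()) return -1;

    cv::Mat res;
    double minVal, maxVal;

    cv::matchTemplate(img, tmpl, res, cv::TM_CCORR_NORMED);
    cv::minMaxLoc(res, &minVal, &maxVal);
    std::printf("TM_CCORR_NORMED : %f\n", maxVal);  // stays high: means are not removed

    cv::matchTemplate(img, tmpl, res, cv::TM_CCOEFF_NORMED);
    cv::minMaxLoc(res, &minVal, &maxVal);
    std::printf("TM_CCOEFF_NORMED: %f\n", maxVal);  // typically drops for color-only differences
    return 0;
}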
3. Likely position from feature matching

I am using the Brute-Force Matcher with L2 norm, referring to this link: https://docs.opencv.org/2.4/doc/tutor...

After the process, I get the image below as output:

(output image omitted)

What is the likely position of the object suggested by the feature matching?

I don't understand how to choose the likely position using this image :(

This is a case of knowing the first step but not the second: the asker can find the rotated region, but has no idea how to proceed. For this kind of problem, the best approach is to work through the reference tutorial once.

Of course, the moderator's answer is very clear:

To retrieve the position of your matched object, you need some further steps:

  • filter the matches for outliers
  • extract the 2D point locations from the keypoints
  • apply findHomography() on the matched 2D points to get a transformation matrix between your query and the scene image
  • apply perspectiveTransform on the bounding box of the query object, to see where it is located in the scene image (see the sketch right after this list)
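
Steps 3 and 4 might look like the sketch below, assuming good_matches, keypoints_1, keypoints_2 and the query image img_1 are already available (as in the full listing that follows):

// compute the homography from the matched 2D points (RANSAC drops outliers)
std::vector<cv::Point2f> obj, scene;
for (size_t i = 0; i < good_matches.size(); i++)
{
    obj.push_back(keypoints_1[good_matches[i].queryIdx].pt);
    scene.push_back(keypoints_2[good_matches[i].trainIdx].pt);
}
cv::Mat H = cv::findHomography(obj, scene, CV_RANSAC);

// project the query image's corners into the scene image
std::vector<cv::Point2f> obj_corners(4), scene_corners(4);
obj_corners[0] = cv::Point2f(0, 0);
obj_corners[1] = cv::Point2f((float)img_1.cols, 0);
obj_corners[2] = cv::Point2f((float)img_1.cols, (float)img_1.rows);
obj_corners[3] = cv::Point2f(0, (float)img_1.rows);
cv::perspectiveTransform(obj_corners, scene_corners, H);
// scene_corners now outlines the likely position of the object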

I also gave a concrete answer:

(result image omitted)

// uses SURF
//
#include "stdafx.h"
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/features2d.hpp"
#include "opencv2/calib3d/calib3d.hpp"
using namespace std;
using namespace cv;
int main( int argc, char** argv )
{
Mat img_1;
Mat img_2;
Mat img_raw_1 = imread("c1.bmp");
Mat img_raw_2 = imread("c3.bmp");
cvtColor(img_raw_1, img_1, CV_BGR2GRAY);
cvtColor(img_raw_2, img_2, CV_BGR2GRAY);
//-- Step 1: detect keypoints with SURF
int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );
//-- Step 2: compute SURF descriptors
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: match descriptors
FlannBasedMatcher matcher; // BFMatcher would do brute-force matching
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
// find the maximum and minimum match distances
double max_dist = 0; double min_dist = 100;
for( int i = 0; i < descriptors_1.rows; i++ )
{
    double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_1.rows; i++ )
{
    if( matches[i].distance <= 3*min_dist ) // threshold chosen as 3 * min_dist
    {
        good_matches.push_back( matches[i] );
    }
}
//-- Localize the object from img_1 in img_2
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( int i = 0; i < (int)good_matches.size(); i++ )
{
    // following the "append frames to the stitched image" convention,
    // the left image is the scene and the right one is the obj
    scene.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );
    obj.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );
}
// call RANSAC directly to compute the homography
Mat H = findHomography( obj, scene, CV_RANSAC );
// align the images
Mat result;
warpPerspective(img_raw_2, result, H, Size(2*img_2.cols, img_2.rows));
Mat half(result, cv::Rect(0, 0, img_2.cols, img_2.rows));
img_raw_1.copyTo(half);
imshow("result", result);
waitKey(0);
return 0;
}
4. Run OpenCV in MFC

I have reproduced this sample in an MFC app.

The cv::Mat is a member variable of the CDocument:

// Attributes
public:
std::vector<CBlob> m_blobs;
cv::Mat m_Mat;

and I draw rectangles over the image with a method (called at some time interval, according to the FPS rate):

DrawBlobInfoOnImage(m_blobs, m_Mat);

Here is the code of this method:

void CMyDoc::DrawBlobInfoOnImage(std::vector<CBlob> &blobs, cv::Mat &Mat)
{
for (unsigned int i = 0; i < blobs.size(); ++i)
{
    if (blobs[i].m_bStillBeingTracked)
    {
        cv::rectangle(Mat, blobs[i].m_rectCurrentBounding, SCALAR_RED, 2);
        double dFontScale = blobs[i].m_dCurrentDiagonalSize / 60.0;
        int nFontThickness = (int)roundf(dFontScale * 1.0);
        cv::putText(Mat, (LPCTSTR)IntToString(i), blobs[i].m_vecPointCenterPositions.back(), CV_FONT_HERSHEY_SIMPLEX, dFontScale, SCALAR_GREEN, nFontThickness);
    }
}
}

but the result of this method is something like this:

(screenshot of the accumulated rectangles omitted)

My question is: how can I draw only the latest blob results over my image?

I have tried to clean up m_Mat, and to draw only blobs.size() - 1 blobs over the image; none of this worked...


A very interesting question. In short, the asker can already call OpenCV from MFC, but does not know how to display the boxes in the video correctly (that is, without trailing boxes left over from earlier frames).

This, again, comes from not thinking the problem through. There are really only two ways to solve it.

One is to modify the video stream directly, that is, to draw the boxes inside the original read-then-display loop; with this approach, whether or not you use MFC makes essentially no difference. See, for example, <Running ParticleFilter in MFC, with source code>:

https://www.cnblogs.com/jsxyhelu/p/6336429.html

The other is to use MFC's own mechanisms: MFC is built around dialogs and windows, so just create a window dedicated to displaying the rectangle. See, for example, <Implementing GreenOpenPaint (5): the rectangle box>:
https://www.cnblogs.com/jsxyhelu/p/6354341.html
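
Under the first approach, the direct fix for the trailing boxes is to stop drawing into the persistent m_Mat: keep an untouched copy of each frame and redraw the overlay on a fresh clone every tick. A minimal sketch (the member m_MatOriginal and both method names are hypothetical additions, not from the original post):

// keep a pristine copy of the incoming frame (m_MatOriginal is hypothetical)
void CMyDoc::OnNewFrame(const cv::Mat &frame)
{
    m_MatOriginal = frame.clone();
}

// called at each display interval: start again from the clean frame,
// so only the current blobs are drawn and nothing accumulates
void CMyDoc::RedrawOverlay()
{
    m_Mat = m_MatOriginal.clone();
    DrawBlobInfoOnImage(m_blobs, m_Mat);
}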

     

5. How to find the thickness of the red color sealant in the image?

Hi,

I want to find the thickness of the red colored sealant in the image.

First I extract the sealant portion using findContours, keeping contours between a minimum and maximum contour area, and then check the area, length and thickness of the sealant. I can find the area as well as the length, but I am not able to find the thickness of the sealant portion.

Please help me, guys... below is the example image.

http://answers.opencv.org/upfiles/15295661161373758.jpg


A hint:

do a distance transform and a skeleton on the binary image.

This is an algorithm question; a concrete implementation will be given next week. You can study it first.
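
A hedged sketch of that hint: on the binary sealant mask, the distance transform gives each inner pixel its distance to the nearest edge, and sampling it along the skeleton yields the local half-thickness. This assumes the ximgproc module from opencv_contrib for thinning; "mask.png" is an illustrative file name:

#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp>
#include <cstdio>

int main()
{
    // hypothetical binary mask of the extracted sealant region
    cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);
    if (mask.empty()) return -1;
    cv::threshold(mask, mask, 127, 255, cv::THRESH_BINARY);

    cv::Mat dist;
    cv::distanceTransform(mask, dist, cv::DIST_L2, 3); // distance to nearest background pixel

    cv::Mat skel;
    cv::ximgproc::thinning(mask, skel);                // 1-pixel-wide centerline

    // local thickness ~ 2 * distance value sampled on the skeleton
    double sum = 0; int n = 0;
    for (int y = 0; y < skel.rows; y++)
        for (int x = 0; x < skel.cols; x++)
            if (skel.at<uchar>(y, x))
            {
                sum += 2.0 * dist.at<float>(y, x);
                n++;
            }
    if (n) std::printf("mean thickness ~ %.2f px\n", sum / n);
    return 0;
}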

     


