Object Recognition, Computer Vision, and the Caltech 101: A Response to Pinto et al.



Yann LeCun, David G. Lowe, Jitendra Malik, Jim Mutch, Pietro Perona, and Tomaso Poggio


Readers of the recent paper “Why is Real-World Visual Object Recognition Hard?” [8] who are unfamiliar with the literature on computer vision are likely to come away with the impression that the problem of making visual recognition invariant with respect to position, scale, and pose has been overlooked. We would therefore like to clarify two main points.

(1) The paper criticizes the popular Caltech 101 benchmark dataset for not containing images of objects at a variety of positions, scales, and poses. It is true that Caltech 101 does not test these kinds of variability; however, this omission is intentional. Techniques for addressing these issues were the focus of much work in the 1980s [11]. For example, datasets like that of Murase and Nayar [6] focused on the problem of recognizing specific objects from a variety of 3D poses, but did not address the issue of object categories and the attendant intra-category variation in shape and texture. Pinto et al.'s synthetic dataset is in much the same spirit as Murase and Nayar's. Caltech 101 was created to test a system [4,3] that was already position, scale, and pose invariant, with the goal of focusing on the more difficult problem of categorization. Its lack of position, scale, and pose variation is stated explicitly on the Caltech 101 website [2], where the dataset is available for download, and is often explicitly restated in later papers that use the dataset (including three of the five cited in Fig. 1). This is not to say that Caltech 101 is without problems. For example, as the authors state, correlation of object classes and backgrounds is a concern, and the relative success of their "toy" model does seem to suggest that the baseline for what is considered good performance on this dataset should be raised.

(2) The paper mentions the existence of other standard datasets (LabelMe [10], Peekaboom [12], StreetScenes [1], NORB [5], PASCAL [7]), many of which contain other forms of variability such as position, scale, and pose variation, occlusion, and multiple objects. But the authors do not mention that, unlike their “toy” model, most of the computer vision / bio-inspired algorithms they cite do address some of these issues as well, and have in fact been tested on more than one dataset. Thus, many of these algorithms should be capable of dealing fairly well with the “difficult” task of the paper’s Fig. 2, on which the authors’ algorithm – unsurprisingly – fails. Caltech 101 is one of the most popular datasets currently in use, but it is by no means the sole standard of success on the object recognition problem. See [9] for a recent review of current datasets and the types of variability contained in each.

In conclusion, researchers in computer vision are well aware of the need for invariance to position, scale, and pose, among other challenges in visual recognition. We wish to reassure PLoS readers that research on these topics is alive and well.


References

[1] Bileschi S (2006) StreetScenes: Towards scene understanding in still images. [Ph.D. Thesis]. Cambridge (Massachusetts): MIT EECS.

[2] Caltech 101 dataset (accessed 2008-02-17) Available: http://www.vision.caltech....

[3] Fei-Fei L, Fergus R, and Perona P (2004) Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. IEEE CVPR 2004, Workshop on Generative-Model Based Vision.

[4] Fergus R, Perona P, and Zisserman A (2003) Object class recognition by unsupervised scale-invariant learning. IEEE CVPR 2003: 264-271.

[5] LeCun Y, Huang FJ, and Bottou L (2004) Learning methods for generic object recognition with invariance to pose and lighting. IEEE CVPR 2004: 97–104.

[6] Murase H and Nayar SK (1995) Visual learning and recognition of 3-D objects from appearance. International Journal of Computer Vision 14: 5-24.

[7] PASCAL Object Recognition Database Collection, Visual Object Classes Challenge (accessed 2007-12-26) Available: http://www.pascal-network....

[8] Pinto N, Cox DD, and DiCarlo JJ (2008) Why is Real-World Visual Object Recognition Hard? PLoS Computational Biology, 4(1):e27.

[9] Ponce J, Berg TL, Everingham MR, Forsyth DA, Hebert M, Lazebnik S, Marszalek M, Schmid C, Russell BC, Torralba A, Williams CKI, Zhang J, and Zisserman A (2006) Dataset Issues in Object Recognition. In Toward Category-Level Object Recognition, eds. Ponce J, Hebert M, Schmid C, and Zisserman A, LNCS 4170, Springer-Verlag, pp 29-48.

[10] Russell B, Torralba A, Murphy K, and Freeman WT (2005) LabelMe: a database and web-based tool for image annotation. Cambridge (Massachusetts): MIT Artificial Intelligence Lab Memo AIM-2005-025.

[11] Ullman S (1996) High-Level Vision: Object Recognition and Visual Cognition. MIT Press.

[12] Von Ahn L, Liu R, and Blum M (2006) Peekaboom: a game for locating objects in images. ACM SIGCHI 2006: 55–64.
