【TED, Fei-Fei Li】How we're teaching computers to understand pictures

2023-10-19 07:50


Below is the transcript of Stanford professor Fei-Fei Li's TED talk:
https://www.ted.com/talks/fei_fei_li_how_we_re_teaching_computers_to_understand_pictures?language=zh-cn


English Transcript:

0:13
Let me show you something.

0:17
(Video) Girl: Okay, that’s a cat sitting in a bed. The boy is petting the elephant. Those are people that are going on an airplane. That’s a big airplane.

0:32
Fei-Fei Li: This is a three-year-old child describing what she sees in a series of photos. She might still have a lot to learn about this world, but she’s already an expert at one very important task: to make sense of what she sees. Our society is more technologically advanced than ever. We send people to the moon, we make phones that talk to us or customize radio stations that can play only music we like. Yet, our most advanced machines and computers still struggle at this task. So I’m here today to give you a progress report on the latest advances in our research in computer vision, one of the most frontier and potentially revolutionary technologies in computer science.

1:23
Yes, we have prototyped cars that can drive by themselves, but without smart vision, they cannot really tell the difference between a crumpled paper bag on the road, which can be run over, and a rock that size, which should be avoided. We have made fabulous megapixel cameras, but we have not delivered sight to the blind. Drones can fly over massive land, but don’t have enough vision technology to help us to track the changes of the rainforests. Security cameras are everywhere, but they do not alert us when a child is drowning in a swimming pool. Photos and videos are becoming an integral part of global life. They’re being generated at a pace that’s far beyond what any human, or teams of humans, could hope to view, and you and I are contributing to that at this TED. Yet our most advanced software is still struggling at understanding and managing this enormous content. So in other words, collectively as a society, we’re very much blind, because our smartest machines are still blind.

2:42
“Why is this so hard?” you may ask. Cameras can take pictures like this one by converting lights into a two-dimensional array of numbers known as pixels, but these are just lifeless numbers. They do not carry meaning in themselves. Just like to hear is not the same as to listen, to take pictures is not the same as to see, and by seeing, we really mean understanding. In fact, it took Mother Nature 540 million years of hard work to do this task, and much of that effort went into developing the visual processing apparatus of our brains, not the eyes themselves. So vision begins with the eyes, but it truly takes place in the brain.

3:37
So for 15 years now, starting from my Ph.D. at Caltech and then leading Stanford’s Vision Lab, I’ve been working with my mentors, collaborators and students to teach computers to see. Our research field is called computer vision and machine learning. It’s part of the general field of artificial intelligence. So ultimately, we want to teach the machines to see just like we do: naming objects, identifying people, inferring 3D geometry of things, understanding relations, emotions, actions and intentions. You and I weave together entire stories of people, places and things the moment we lay our gaze on them.

4:27
The first step towards this goal is to teach a computer to see objects, the building block of the visual world. In its simplest terms, imagine this teaching process as showing the computers some training images of a particular object, let’s say cats, and designing a model that learns from these training images. How hard can this be? After all, a cat is just a collection of shapes and colors, and this is what we did in the early days of object modeling. We’d tell the computer algorithm in a mathematical language that a cat has a round face, a chubby body, two pointy ears, and a long tail, and that looked all fine. But what about this cat? (Laughter) It’s all curled up. Now you have to add another shape and viewpoint to the object model. But what if cats are hidden? What about these silly cats? Now you get my point. Even something as simple as a household pet can present an infinite number of variations to the object model, and that’s just one object.

5:43
So about eight years ago, a very simple and profound observation changed my thinking. No one tells a child how to see, especially in the early years. They learn this through real-world experiences and examples. If you consider a child’s eyes as a pair of biological cameras, they take one picture about every 200 milliseconds, the average time an eye movement is made. So by age three, a child would have seen hundreds of millions of pictures of the real world. That’s a lot of training examples. So instead of focusing solely on better and better algorithms, my insight was to give the algorithms the kind of training data that a child was given through experiences in both quantity and quality.
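A quick back-of-the-envelope check of that figure (the 12 waking hours per day is my assumption, not from the talk):

```python
# Rough sanity check of the "hundreds of millions of pictures by age three" claim,
# assuming roughly 12 waking hours per day (an assumption, not stated in the talk).
images_per_second = 1 / 0.2            # one eye movement about every 200 ms
seconds_awake_per_day = 12 * 3600
images_by_age_three = images_per_second * seconds_awake_per_day * 365 * 3
print(f"{images_by_age_three:,.0f}")   # ~236,520,000 -> hundreds of millions
```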

6:43
Once we realized this, we knew we needed to collect a data set that has far more images than we have ever had before, perhaps thousands of times more, and together with Professor Kai Li at Princeton University, we launched the ImageNet project in 2007. Luckily, we didn’t have to mount a camera on our head and wait for many years. We went to the Internet, the biggest treasure trove of pictures that humans have ever created. We downloaded nearly a billion images and used crowdsourcing technology like the Amazon Mechanical Turk platform to help us to label these images. At its peak, ImageNet was one of the biggest employers of the Amazon Mechanical Turk workers: together, almost 50,000 workers from 167 countries around the world helped us to clean, sort and label nearly a billion candidate images. That was how much effort it took to capture even a fraction of the imagery a child’s mind takes in in the early developmental years.

8:03
In hindsight, this idea of using big data to train computer algorithms may seem obvious now, but back in 2007, it was not so obvious. We were fairly alone on this journey for quite a while. Some very friendly colleagues advised me to do something more useful for my tenure, and we were constantly struggling for research funding. Once, I even joked to my graduate students that I would just reopen my dry cleaner’s shop to fund ImageNet. After all, that’s how I funded my college years.

8:40
So we carried on. In 2009, the ImageNet project delivered a database of 15 million images across 22,000 classes of objects and things organized by everyday English words. In both quantity and quality, this was an unprecedented scale. As an example, in the case of cats, we have more than 62,000 cats of all kinds of looks and poses and across all species of domestic and wild cats. We were thrilled to have put together ImageNet, and we wanted the whole research world to benefit from it, so in the TED fashion, we opened up the entire data set to the worldwide research community for free. (Applause)

9:40
Now that we have the data to nourish our computer brain, we’re ready to come back to the algorithms themselves. As it turned out, the wealth of information provided by ImageNet was a perfect match to a particular class of machine learning algorithms called convolutional neural network, pioneered by Kunihiko Fukushima, Geoff Hinton, and Yann LeCun back in the 1970s and ’80s. Just like the brain consists of billions of highly connected neurons, a basic operating unit in a neural network is a neuron-like node. It takes input from other nodes and sends output to others. Moreover, these hundreds of thousands or even millions of nodes are organized in hierarchical layers, also similar to the brain. A typical neural network we use to train our object recognition model has 24 million nodes, 140 million parameters, and 15 billion connections. That’s an enormous model. Powered by the massive data from ImageNet and the modern CPUs and GPUs to train such a humongous model, the convolutional neural network blossomed in a way that no one expected. It became the winning architecture to generate exciting new results in object recognition. This is a computer telling us this picture contains a cat and where the cat is. Of course there are more things than cats, so here’s a computer algorithm telling us the picture contains a boy and a teddy bear; a dog, a person, and a small kite in the background; or a picture of very busy things like a man, a skateboard, railings, a lamp post, and so on. Sometimes, when the computer is not so confident about what it sees, we have taught it to be smart enough to give us a safe answer instead of committing too much, just like we would do, but other times our computer algorithm is remarkable at telling us what exactly the objects are, like the make, model, year of the cars.
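To make the "neuron-like nodes organized in hierarchical layers" idea concrete, here is a minimal sketch of a convolutional network classifier in PyTorch. It is not the ImageNet-scale model described in the talk; the class name, layer sizes, and input resolution are all made-up illustrative choices.

```python
# A tiny convolutional neural network sketch: early layers respond to simple
# local patterns, later layers combine them into object-level evidence.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

model = TinyConvNet(num_classes=10)
dummy = torch.randn(1, 3, 32, 32)        # one fake 32x32 RGB image
print(model(dummy).shape)                 # torch.Size([1, 10]) -> one score per class
print(sum(p.numel() for p in model.parameters()), "parameters")
```

The real models in the talk differ mainly in scale (many more layers, nodes, and parameters) and in being trained on millions of labeled ImageNet images rather than random tensors.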

12:09
We applied this algorithm to millions of Google Street View images across hundreds of American cities, and we have learned something really interesting: first, it confirmed our common wisdom that car prices correlate very well with household incomes. But surprisingly, car prices also correlate well with crime rates in cities, or voting patterns by zip codes.

12:43
So wait a minute. Is that it? Has the computer already matched or even surpassed human capabilities? Not so fast. So far, we have just taught the computer to see objects. This is like a small child learning to utter a few nouns. It’s an incredible accomplishment, but it’s only the first step. Soon, another developmental milestone will be hit, and children begin to communicate in sentences. So instead of saying this is a cat in the picture, you already heard the little girl telling us this is a cat lying on a bed.

13:23
So to teach a computer to see a picture and generate sentences, the marriage between big data and machine learning algorithm has to take another step. Now, the computer has to learn from both pictures as well as natural language sentences generated by humans. Just like the brain integrates vision and language, we developed a model that connects parts of visual things like visual snippets with words and phrases in sentences.
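A minimal sketch of that idea, not the Stanford model itself: a CNN-style image feature vector is projected into the same space as word embeddings, and a recurrent decoder generates the sentence one word at a time. The class name, vocabulary size, and dimensions below are illustrative assumptions.

```python
# Encoder-decoder captioning sketch: image features seed an LSTM that predicts
# the caption word by word.
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 256,
                 hidden_dim: int = 512, feat_dim: int = 2048):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim, embed_dim)    # image -> embedding space
        self.embed = nn.Embedding(vocab_size, embed_dim)  # word ids -> embedding space
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)      # next-word scores

    def forward(self, img_feats: torch.Tensor, captions: torch.Tensor):
        # img_feats: (batch, feat_dim) from a pretrained CNN encoder
        # captions:  (batch, seq_len) integer word ids of the target sentence
        img_token = self.img_proj(img_feats).unsqueeze(1)    # (batch, 1, embed_dim)
        word_tokens = self.embed(captions)                   # (batch, seq_len, embed_dim)
        inputs = torch.cat([img_token, word_tokens], dim=1)  # image acts as the first "word"
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                              # (batch, seq_len + 1, vocab)

# Toy usage with random tensors standing in for real CNN features and captions.
decoder = CaptionDecoder(vocab_size=1000)
feats = torch.randn(2, 2048)              # features for 2 images
caps = torch.randint(0, 1000, (2, 7))     # 2 captions of 7 word ids each
print(decoder(feats, caps).shape)         # torch.Size([2, 8, 1000])
```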

14:01
About four months ago, we finally tied all this together and produced one of the first computer vision models that is capable of generating a human-like sentence when it sees a picture for the first time. Now, I’m ready to show you what the computer says when it sees the picture that the little girl saw at the beginning of this talk.

14:30
(Video) Computer: A man is standing next to an elephant. A large airplane sitting on top of an airport runway.

14:40
FFL: Of course, we’re still working hard to improve our algorithms, and it still has a lot to learn. (Applause)

14:50
And the computer still makes mistakes.

14:53
(Video) Computer: A cat lying on a bed in a blanket.

14:57
FFL: So of course, when it sees too many cats, it thinks everything might look like a cat.

15:04
(Video) Computer: A young boy is holding a baseball bat. (Laughter)

15:08
FFL: Or, if it hasn’t seen a toothbrush, it confuses it with a baseball bat.

15:14
(Video) Computer: A man riding a horse down a street next to a building. (Laughter)

15:19
FFL: We haven’t taught Art 101 to the computers.

15:24
(Video) Computer: A zebra standing in a field of grass.

15:27
FFL: And it hasn’t learned to appreciate the stunning beauty of nature like you and I do.

15:33
So it has been a long journey. To get from age zero to three was hard. The real challenge is to go from three to 13 and far beyond. Let me remind you with this picture of the boy and the cake again. So far, we have taught the computer to see objects or even tell us a simple story when seeing a picture.

15:58
(Video) Computer: A person sitting at a table with a cake.

16:02
FFL: But there’s so much more to this picture than just a person and a cake. What the computer doesn’t see is that this is a special Italian cake that’s only served during Easter time. The boy is wearing his favorite t-shirt given to him as a gift by his father after a trip to Sydney, and you and I can all tell how happy he is and what’s exactly on his mind at that moment.

16:30
This is my son Leo. On my quest for visual intelligence, I think of Leo constantly and the future world he will live in. When machines can see, doctors and nurses will have extra pairs of tireless eyes to help them to diagnose and take care of patients. Cars will run smarter and safer on the road. Robots, not just humans, will help us to brave the disaster zones to save the trapped and wounded. We will discover new species, better materials, and explore unseen frontiers with the help of the machines.

17:14
Little by little, we’re giving sight to the machines. First, we teach them to see. Then, they help us to see better. For the first time, human eyes won’t be the only ones pondering and exploring our world. We will not only use the machines for their intelligence, we will also collaborate with them in ways that we cannot even imagine.

17:40
This is my quest: to give computers visual intelligence and to create a better future for Leo and for the world.

17:50
Thank you.



