Learning English from TED Talks: How to govern AI — even if it’s hard to predict, by Helen Toner

2024-05-05 17:12

This post is an English-learning study of Helen Toner’s TED talk “How to govern AI — even if it’s hard to predict.”

How to govern AI — even if it’s hard to predict


Link: https://www.ted.com/talks/helen_toner_how_to_govern_ai_even_if_it_s_hard_to_predict?

Speaker: Helen Toner

Date: April 2024

Contents

  • How to govern AI — even if it's hard to predict
    • Introduction
    • Vocabulary
    • Transcript
    • Summary
    • Postscript

Introduction

No one truly understands AI, not even experts, says Helen Toner, an AI policy researcher and former board member of OpenAI. But that doesn’t mean we can’t govern it. She shows how we can make smart policies to regulate this technology even as we struggle to predict where it’s headed — and why the right actions, right now, can shape the future we want.

Vocabulary

But when it comes to how they work on the inside, there are serious limits to how much we know.

hurdle: US [ˈhɜːrdl] obstacle; barrier; temporary fence

And the fact that we have such a hard time understanding what’s going on with the technology and predicting where it will go next is one of the biggest hurdles we face in figuring out how to govern AI.

forge: US [fɔːrdʒ] to hammer into shape; to cast; to move forward steadily; to create; to found

forge a path: to open up a way forward

We have to forge some kind of path forward anyway.

Different experts have completely different intuitions about what lies at the heart of intelligence.

a far cry: very different from; a long way short of

But it’s a far cry from being able to do everything as well as you or I could do it.

tease apart: to separate out; to distinguish

We don’t yet have good ways of teasing apart what they’re all doing.

So how do we govern this technology that we struggle to understand and predict? 那么我们如何管理这项我们难以理解和预测的技术呢?

intimidated: US [ɪnˈtɪmɪdeɪtɪd] frightened; feeling threatened

First, don’t be intimidated.

opaque: US [oʊˈpeɪk] not transparent; not letting light through; obscure; hard to understand

opacity: US [oʊˈpæsədi] the state of being opaque

Even the parts we don’t understand won’t be opaque forever.

Machiavellian: US [ˌmɑkiəˈvɛliən] cunning; scheming; unscrupulous; in the manner of Machiavelli

elbows deep in: deeply engaged (in work, etc.)

entitled to: having the right or the standing to do something

Technologists sometimes act as though if you’re not elbows deep in the technical details, then you’re not entitled to an opinion on what we should do with it.

Second, we need to focus on adaptability, not certainty. 第二,我们需要关注适应性,而不是确定性。

get bogged down: to get stuck; to grind to a halt; to become mired in difficulty

A lot of conversations about how to make policy for AI get bogged down in fights.

slam: to strike violently; to shut with a bang; to crash into; to shove hard

slamming on the brakes: braking hard

hit the gas: to step on the accelerator

It’s not just a choice between slamming on the brakes or hitting the gas.

twists and turns: the winding curves of a road; complicated or tortuous changes

steering system: the mechanism that steers a vehicle

windshield: US [ˈwɪndʃiːld] the front window of a vehicle

If you’re driving down a road with unexpected twists and turns, then two things that will help you a lot are having a clear view out the windshield and an excellent steering system.

political beliefs: one’s political convictions

rudimentary: elementary; preliminary; undeveloped

Right now, if we want to figure out whether an AI can do something concerning, like hack critical infrastructure or persuade someone to change their political beliefs, our methods of measuring that are rudimentary.

Just like the data we collect on plane crashes and cyber attacks.

And by default, it looks like the enormous power of more advanced AI systems might stay concentrated in the hands of a small number of companies, or even a small number of individuals.

tempting: attractive; alluring

So as tempting as it might be, we can’t wait for clarity or expert consensus to figure out what we want to happen with AI.

arena: US [əˈriːnə] a venue; a sphere of activity; a forum for debate

And then we can get in the arena and push for futures we actually want.

Transcript

When I talk to people about artificial intelligence, something I hear a lot from non-experts is “I don’t understand AI.” But when I talk to experts, a funny thing happens. They say, “I don’t understand AI, and neither does anyone else.”

This is a pretty strange state of affairs. Normally, the people building a new technology understand how it works inside and out. But for AI, a technology that’s radically reshaping the world around us, that’s not so. Experts do know plenty about how to build and run AI systems, of course. But when it comes to how they work on the inside, there are serious limits to how much we know. And this matters because without deeply understanding AI, it’s really difficult for us to know what it will be able to do next, or even what it can do now.

And the fact that we have such a hard time understanding what’s going on with the technology and predicting where it will go next is one of the biggest hurdles we face in figuring out how to govern AI. But AI is already all around us, so we can’t just sit around and wait for things to become clearer. We have to forge some kind of path forward anyway.

I’ve been working on these AI policy and governance issues for about eight years, first in San Francisco, now in Washington, DC. Along the way, I’ve gotten an inside look at how governments are working to manage this technology, and inside the industry, I’ve seen a thing or two as well. So I’m going to share a couple of ideas for what our path to governing AI could look like.

But first, let’s talk about what actually makes AI so hard to understand and predict. One huge challenge in building artificial “intelligence” is that no one can agree on what it actually means to be intelligent. This is a strange place to be in when building a new tech. When the Wright brothers started experimenting with planes, they didn’t know how to build one, but everyone knew what it meant to fly. With AI, on the other hand, different experts have completely different intuitions about what lies at the heart of intelligence. Is it problem solving? Is it learning and adaptation? Are emotions, or having a physical body, somehow involved? We genuinely don’t know. But different answers lead to radically different expectations about where the technology is going and how fast it’ll get there.

An example of how we’re confused is how we used to talk about narrow versus general AI. For a long time, we talked in terms of two buckets. A lot of people thought we should just be dividing between narrow AI, trained for one specific task, like recommending the next YouTube video, versus artificial general intelligence, or AGI, that could do everything a human could do. We thought of this distinction, narrow versus general, as a core divide between what we could build in practice and what would actually be intelligent.

But then a year or two ago, along came ChatGPT. If you think about it, you know, is it narrow AI, trained for one specific task? Or is it AGI and can do everything a human can do? Clearly the answer is neither. It’s certainly general purpose. It can code, write poetry, analyze business problems, help you fix your car. But it’s a far cry from being able to do everything as well as you or I could do it. So it turns out this idea of generality doesn’t actually seem to be the right dividing line between intelligent and not. And this kind of thing is a huge challenge for the whole field of AI right now. We don’t have any agreement on what we’re trying to build or on what the road map looks like from here. We don’t even clearly understand the AI systems that we have today.

Why is that? Researchers sometimes describe deep neural networks, the main kind of AI being built today, as a black box. But what they mean by that is not that it’s inherently mysterious and we have no way of looking inside the box. The problem is that when we do look inside, what we find are millions, billions or even trillions of numbers that get added and multiplied together in a particular way. What makes it hard for experts to know what’s going on is basically just, there are too many numbers, and we don’t yet have good ways of teasing apart what they’re all doing. There’s a little bit more to it than that, but not a lot.
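To make the “too many numbers” point concrete, here is a minimal sketch (my illustration, not from the talk) of what “looking inside the box” turns up: a toy two-layer network in NumPy whose entire behavior is a handful of arrays being multiplied and added. The weights are random placeholders; a frontier model does exactly the same arithmetic, only with billions of numbers instead of 131.

```python
import numpy as np

# A toy two-layer network: random placeholder weights, 131 numbers in total.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)   # 64 + 16 numbers
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)    # 48 + 3 numbers

def forward(x: np.ndarray) -> np.ndarray:
    """Everything the network 'does' is these two lines of arithmetic."""
    h = np.maximum(0, W1 @ x + b1)   # multiply, add, clip negatives (ReLU)
    return W2 @ h + b2               # multiply and add again

print(forward(np.array([1.0, 0.5, -0.3, 2.0])))
```

Nothing here is mysterious line by line; the difficulty Toner describes is that with billions of such numbers, no one can read off what any of them mean.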

So how do we govern this technology that we struggle to understand and predict? I’m going to share two ideas: one for all of us and one for policymakers.

First, don’t be intimidated, either by the technology itself or by the people and companies building it. On the technology, AI can be confusing, but it’s not magical. There are some parts of AI systems we do already understand well, and even the parts we don’t understand won’t be opaque forever. An area of research known as “AI interpretability” has made quite a lot of progress in the last few years in making sense of what all those billions of numbers are doing. One team of researchers, for example, found a way to identify different parts of a neural network that they could dial up or dial down to make the AI’s answers happier or angrier, more honest, more Machiavellian, and so on. If we can push forward this kind of research further, then five or 10 years from now, we might have a much clearer understanding of what’s going on inside the so-called black box.
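The “dial up or dial down” result Toner mentions is in the spirit of what interpretability researchers call activation steering: nudging a hidden-layer activation along a direction associated with some trait. The sketch below is a hypothetical toy version built on the same tiny NumPy network as above, with a made-up trait_direction; it shows only the mechanic, not the actual study she cites.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)   # toy hidden layer
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)    # toy output layer

# Hypothetical: pretend this direction in the 16-dim hidden space tracks a
# trait such as "angrier answers". Real work estimates it from contrasting
# examples; here it is just a random placeholder.
trait_direction = rng.normal(size=16)
trait_direction /= np.linalg.norm(trait_direction)

def forward_steered(x: np.ndarray, dial: float) -> np.ndarray:
    h = np.maximum(0, W1 @ x + b1)      # normal forward pass
    h = h + dial * trait_direction      # dial > 0 turns the trait up, < 0 down
    return W2 @ h + b2

x = np.array([1.0, 0.5, -0.3, 2.0])
print(forward_steered(x, dial=0.0))     # unsteered output
print(forward_steered(x, dial=3.0))     # trait dialed up
```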

And when it comes to those building the technology, technologists sometimes act as though if you’re not elbows deep in the technical details, then you’re not entitled to an opinion on what we should do with it. Expertise has its place, of course, but history shows us how important it is that the people affected by a new technology get to play a role in shaping how we use it. Like the factory workers in the 20th century who fought for factory safety, or the disability advocates who made sure the world wide web was accessible. You don’t have to be a scientist or engineer to have a voice.

(Applause)

Second, we need to focus on adaptability, not certainty. A lot of conversations about how to make policy for AI get bogged down in fights between, on the one side, people saying, “We have to regulate AI really hard right now because it’s so risky,” and on the other side, people saying, “But regulation will kill innovation, and those risks are made up anyway.” But the way I see it, it’s not just a choice between slamming on the brakes or hitting the gas. If you’re driving down a road with unexpected twists and turns, then two things that will help you a lot are having a clear view out the windshield and an excellent steering system. In AI, this means having a clear picture of where the technology is and where it’s going, and having plans in place for what to do in different scenarios.

Concretely, this means things like investing in our ability to measure what AI systems can do. This sounds nerdy, but it really matters. Right now, if we want to figure out whether an AI can do something concerning, like hack critical infrastructure or persuade someone to change their political beliefs, our methods of measuring that are rudimentary. We need better.

We should also be requiring AI companies, especially the companies building the most advanced AI systems, to share information about what they’re building, what their systems can do and how they’re managing risks. And they should have to let in external AI auditors to scrutinize their work so that the companies aren’t just grading their own homework.

(Applause)

A final example of what this can look like is setting up incident reporting mechanisms, so that when things do go wrong in the real world, we have a way to collect data on what happened and how we can fix it next time, just like the data we collect on plane crashes and cyber attacks.

None of these ideas are mine, and some of them are already starting to be implemented in places like Brussels, London, even Washington. But the reason I’m highlighting these ideas, measurement, disclosure and incident reporting, is that they help us navigate progress in AI by giving us a clearer view out the windshield. If AI is progressing fast in dangerous directions, these policies will help us see that. And if everything is going smoothly, they’ll show us that too, and we can respond accordingly.

What I want to leave you with is that it’s both true that there’s a ton of uncertainty and disagreement in the field of AI, and that companies are already building and deploying AI all over the place anyway, in ways that affect all of us. Left to their own devices, it looks like AI companies might go in a similar direction to social media companies, spending most of their resources on building web apps and fighting for users’ attention. And by default, it looks like the enormous power of more advanced AI systems might stay concentrated in the hands of a small number of companies, or even a small number of individuals.

But AI’s potential goes so far beyond that. AI already lets us leap over language barriers and predict protein structures. More advanced systems could unlock clean, limitless fusion energy, or revolutionize how we grow food, or 1,000 other things. And we each have a voice in what happens. We’re not just data sources, we are users, we’re workers, we’re citizens. So as tempting as it might be, we can’t wait for clarity or expert consensus to figure out what we want to happen with AI. AI is already happening to us. What we can do is put policies in place to give us as clear a picture as we can get of how the technology is changing, and then we can get in the arena and push for futures we actually want.

Thank you.

(Applause)

Summary

In her TED talk, Helen Toner discusses the challenges of understanding and governing artificial intelligence (AI). Non-experts often express confusion about AI, while even experts admit to limited understanding. Toner highlights the importance of grasping AI’s inner workings to anticipate its future capabilities, and she emphasizes the need for proactive governance despite the complexity of the technology.

Toner delves into the difficulty in defining intelligence, a crucial aspect in AI development. The absence of consensus among experts complicates predicting AI’s trajectory. Traditional distinctions between narrow and general AI are blurred by advancements like ChatGPT. Toner stresses the necessity of agreement on AI’s purpose to inform regulatory efforts effectively.

Despite AI’s opacity, Toner encourages engagement rather than intimidation. She advocates for transparent research and inclusive policymaking to address AI’s risks and potentials. Toner proposes measures such as improved measurement standards, mandatory disclosure from AI companies, and incident reporting mechanisms. These efforts aim to foster adaptability and provide a clearer view of AI’s development path.

In conclusion, Toner urges active participation in shaping AI’s future. She emphasizes the importance of informed governance to steer AI’s progress responsibly. By promoting transparency, accountability, and public involvement, Toner advocates for a collective effort in navigating the complexities of AI technology.

Postscript

Finished studying this talk at 16:16 on May 4, 2024.

Shanghai, May 4, 2024.
