A Sort of Buzzing Inside My Head
June 25, 2023
Whether ChatGPT passes the Turing Test is a less troubling question than what Alan Turing meant by “intelligence.”
Illustration by Lucas Adams
“Please write me a sonnet on the subject of the Forth Bridge.” This was apparently the first question that occurred to the English mathematician Alan Turing when, in a captivatingly strange 1950 paper entitled “Computing Machinery and Intelligence,” he imagined conversing with an intelligent machine and founded the field of artificial intelligence. The Forth Bridge, built in 1890, is a cantilever railway bridge spanning the Firth of Forth near Edinburgh, Scotland. Why a sonnet about the bridge? The juxtapositions are unexpected: a lovelorn poetic form, a 2,500-ton iron structure, and a computing device. If the last could produce authentic sense by applying the first to the second, Turing must have thought, that would indicate intelligence.
When I typed the same question into ChatGPT, it generated a bad poem in sonnet-like quatrains. How did Turing’s imaginary machine answer? “Count me out on this one,” it demurred. “I never could write poetry.” I guess it’s not surprising that I find Turing’s imaginary machine’s answer infinitely more persuasive than ChatGPT’s, since of course the first was written by an intelligent human: Turing himself. But it does seem surprising that the design process he established in his foundational paper has led to an “artificial intelligence” utterly unlike the intelligent machines he imagined in the same paper.
ChatGPT is a generative AI, meaning that it uses statistical models to extrapolate patterns from data and then applies these patterns to generate new text, images, or other products such as music or computer code. Generative AIs rely on machine learning techniques whose foundations Turing laid in his landmark paper, where he hypothesized a process for arriving at an intelligent machine. He imagined first building an “unorganised machine,” a bunch of interconnected neuron-like components, that would become organized through a training process, creating the blueprint for an approach to artificial intelligence that would later be called “connectionism” and would lead to neural networks like those constituting the new generative AI large language models. But although ChatGPT descends from Turing’s protocol, it is nothing like the machine interlocutors he conjured in his dialogues, and therein lies an interesting conundrum.
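The statistical principle at work here can be caricatured in miniature with a bigram model: count which word follows which in a training text, then sample from those counts to generate new text. This is a drastically simplified sketch for illustration only (the toy corpus is invented, and real large language models use neural networks with billions of parameters, not lookup tables), but it shows what “extrapolating patterns from data” means in its most rudimentary form.

```python
import random
from collections import defaultdict

# A toy "training corpus," invented for illustration.
corpus = "the bridge spans the firth and the firth reflects the bridge".split()

# Count, for each word, which words were observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n_words, seed=0):
    """Extrapolate the counted pattern: repeatedly sample an observed next word."""
    random.seed(seed)
    out = [start]
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:  # dead end: this word was never seen with a continuation
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the", 8))
```

The output is fluent-seeming recombination of the corpus, with no understanding anywhere in the mechanism, which is the essay’s point in small.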
Turing used imagined conversations with intelligent machines to introduce his idea for a test of machine intelligence, known ever since as the Turing Test. This is the test some are saying the new generative AIs have cracked—often to conclude that, since the generative AIs clearly don’t have human-like intelligence, the Turing Test must not be a reliable measure after all. Others suggest the AIs are like Frankenstein’s monster, on the verge of taking on a sinister life and mind of their own. Even as they advertise the utility of their new products, computer scientists also warn that they are potentially very dangerous: they could make it impossible to distinguish information from disinformation, thereby fatally undermining democracies or any form of rational decision-making; might cause catastrophic harms to any complex system—economic, air traffic control, energy, nuclear weapons—by malfunctioning or pursuing their goals in unforeseen ways; and might escape human control even without becoming the robot villains of science fiction and Elon Musk’s sensationalist admonitions that we’re “summoning the demon.”1
In March Musk was among the initial signatories of an open letter calling for a pause in development of the technologies that was signed by thousands of the very people who have been bringing them to us: computer scientists, engineers, and tech CEOs. They asked, among other things, “Should we risk loss of control of our civilization?” In May the Center for AI Safety released a stark one-sentence warning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
But Turing’s imaginary intelligent machines weren’t the least bit menacing, nor even especially powerful, no more so than the next intelligent being. The Turing Test in its original form was a game played with the machine, the Imitation Game. A human examiner would converse with a machine and another human, both hidden behind screens and both trying to persuade the examiner that they were the true human. If the examiner couldn’t tell which was which, Turing proposed, then we could call the machine intelligent. By the end of the twentieth century, he predicted, machines would be able to carry on dialogues with humans such that an average human “will not have more than 70 percent chance of making the right identification after five minutes of questioning.” In other words the machine would win the game 30 percent of the time, and Turing considered that people would therefore “be able to speak of machines thinking without expecting to be contradicted.”
Turing died four years later, just before his forty-second birthday, so he was never able to evaluate his prediction’s plausibility over the second half of the twentieth century, as the fields he’d helped inaugurate—computer science and artificial intelligence—developed apace yet came no closer to creating an artificial humanlike intelligence. His work in cryptanalysis had been crucial to the Allied victory in World War II, but this didn’t deter the British government, once the war was safely won, from persecuting him to death for being gay, demonstrating that human intelligence can be a shockingly low bar.
Some holdouts don’t think the generative AIs have actually passed the Turing Test. Anyway, the question is easily defined: Do they, at least 30 percent of the time, fool humans who spend five minutes posing questions to them and to a concealed human in a concerted effort to discern which is which? I don’t think most who say these programs have passed the Turing Test have actually put them to this authentic version of it; I suspect they mean that when people read text generated by an AI, they can’t easily tell whether it was written by an AI or a human. Even then, a little practice brings proficiency, at least for the moment, though the programs are changing rapidly. In my lecture course this spring, my teaching assistants and I became expert at sniffing out AI-generated essays by their flat, featureless feel, the literary equivalent of fluorescent lighting.2
*
The essence of the Turing Test is conversation, rather than, for instance, the ability to perform a calculation or logical task such as chess playing, two of the traditional benchmarks for rational ability. Turing treated both those benchmarks in more snippets of imagined dialogue:
Q: Add 34957 to 70764.
The machine pauses for about thirty seconds before answering “105621.” When I typed this question into ChatGPT, its answer was doubly unlike Turing’s imaginary machine’s: instantaneous and correct. Turing’s fantasy dialogue continues:
Q: Do you play chess?

A: Yes.
ChatGPT also generated a “yes,” followed by an explanation that as a language model, it has the capability to play chess but no graphical interface to move pieces around. Turing’s next question: “I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?” This time, the machine pauses for fifteen seconds before responding “R to R-8 mate.” Well, that was an easy one.
Weirdly, ChatGPT gets it wrong: it says R to R6, checkmate. When I point out that from R6 its rook wouldn’t have my king in check, let alone in checkmate, the AI’s responses become increasingly incoherent. Chess-playing programs have existed for decades—unlike generative AIs, these programs are designed to accomplish a specific task, winning the game of chess—but they establish that the game is not beyond the reach of a machine, so ChatGPT’s inability to give correct answers to chess questions is in that sense neither here nor there. But surely an intelligent being would reply, along the lines of Turing’s imaginary interlocutor, “Count me out, I never could play chess.” Instead its answers are like phony books in a stage set: they seem real until you take a closer look.
Meanwhile, one begins to sense a personality in Turing’s imaginary mechanical friend, and even to feel a kind of affection for it, this modest interlocutor who never could write poetry, adds numbers inaccurately, but can manage a simple chess scenario given a little time and gamely plays along with this desultory and whimsical interrogation. In the next bit of dialogue, Turing returns to poetry:
Interrogator: In the first line of your sonnet which reads ‘Shall I compare thee to a summer’s day,’ would not ‘a spring day’ do as well or better?

Witness: It wouldn’t scan.

Interrogator: How about ‘a winter’s day’[?] That would scan all right.

Witness: Yes, but nobody wants to be compared to a winter’s day.

Interrogator: Would you say Mr. Pickwick reminded you of Christmas?

Witness: In a way.

Interrogator: Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would mind the comparison.

Witness: I don’t think you’re serious. By a winter’s day one means a typical winter’s day, rather than a special one like Christmas.
When I typed the opening question of this exchange into ChatGPT—Wouldn’t a spring day work just as well?—its response was “Certainly!” and it offered another bad poem. I then proposed “a winter’s day,” and it responded again “Certainly! Here’s an alternative version with ‘a winter’s day’ in place of ‘a summer’s day,’” followed by a third bad poem.
Regarding Mr. Pickwick, ChatGPT responded at first like Turing’s imaginary machine—“In a way”—then stated that Mr. Pickwick is the central character in Charles Dickens’s The Pickwick Papers; that while he has no particular connection with Christmas, people do often associate Dickens with the holiday because of another of his books, A Christmas Carol; and that many also associate Mr. Pickwick with warmth and comfort, which may in turn remind them of Christmas. Another stage set answer, with only surface plausibility. Christmas is a prominent theme in The Pickwick Papers. In 1906 the illustrator George Alfred Williams issued an illustrated edition of the Christmas scenes from the book entitled Mr. Pickwick’s Christmas, which Charles Laughton then performed on a 1944 album. Perhaps—I’m speculating—ChatGPT identified a pattern that for questions of the form “Would you say that X reminds you of Y?” the connection is generally indirect, so it formulated an indirect connection between Mr. Pickwick and Christmas, even though a direct one exists.
*
My purpose with these comparisons is to show that there’s an ineffable but stark contrast between Turing’s imagined bits of conversation and the ChatGPT corollaries. ChatGPT’s responses have a hollow, generic feel, like they were compiled by a committee for some ornamental purpose, whereas Turing’s imagined intelligent machine gives off an unmistakable aura of individual personhood, even of charm.

The contrast is all the more striking since, as I’ve mentioned, the process Turing described for arriving at an intelligent machine laid the foundation for the very machine-learning techniques that are bringing us the new generative AIs. To begin with, Turing said that a thinking machine would have to acquire its ability to think through a more basic capacity to learn. The first step was thus to simulate a child’s mind, the “unorganised machine” described above, which would become organized through an education consisting of two kinds of “interference”: a “pleasure” interference that would fix the current configuration of components and a “pain” interference that would disrupt it, causing previously fixed features to change. The machine might then wander randomly among configurations, with pleasure interferences fixing elements and pain interferences causing them to vary.
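Turing’s training scheme can be sketched in a few lines of code: a random configuration wanders until “pleasure” signals fix its components and “pain” signals unfix and rerandomize them. This is a loose caricature of the idea, not of Turing’s actual unorganised-machine architecture; the bit-string representation, the target pattern, and the teacher’s scoring are all invented for illustration.

```python
import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # invented goal the "teacher" rewards

config = [random.randint(0, 1) for _ in TARGET]  # the unorganised machine
fixed = [False] * len(TARGET)                    # which components are fixed

for step in range(1000):
    for i in range(len(config)):
        if config[i] == TARGET[i]:
            fixed[i] = True                   # "pleasure" interference: fix this element
        else:
            fixed[i] = False                  # "pain" interference: unfix it...
            config[i] = random.randint(0, 1)  # ...and let it vary at random
    if all(fixed):
        break

print(config == TARGET)  # the machine has been "organised" by its training
```

Nothing in the loop understands the target; the configuration simply drifts until reward freezes it in place, which is all the scheme requires.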
A learning machine must also be fallible, an idea Turing derived from the Austrian logician Kurt Gödel’s first incompleteness theorem. The theorem states that any system of logical proofs sufficient to generate elementary arithmetic will also be capable of producing sentences that are neither provable nor disprovable within the system, self-referential statements such as “This sentence cannot be proven.” Gödel’s theorem, Turing reasoned, meant that any machine designed to determine the truth or falsity of mathematical theorems would sometimes be unable to answer, unless one tolerated occasional mistakes. And why not? Infallibility is no requirement for human intelligence. On the contrary, Turing observed, one could make very intelligent mistakes.
Are ChatGPT’s mistakes intelligent? They seem less like misunderstandings than glitches. Evaluations of the generative AIs have focused on their ability to get things right, or at least right-seeming. Might we evaluate the success of an artificial intelligence also by the quality of its mistakes? If so, we might consider that a mistake can be intelligent only if it reflects thought, interpretation, meaning. Lacking these capacities, a machine can make only technical errors.
In addition to pleasure, pain, and fallibility, Turing said, an intelligent entity needed two other qualities: discipline, in the form of an ability to carry out instructions, and initiative. As a model of initiative, Turing proposed the ability to conduct searches. He offered three examples of the kinds of searches one might approximate in a machine: “intellectual” searches, in which the brain seeks combinations of variables with particular properties by systematically trying out possibilities in a given order (think Wordle); “genetical or evolutionary” searches, in which organisms survive if they hit upon the right combination of genes (an idea that would give rise to the programming technique of genetic algorithms); and finally, “cultural” searches, carried out by the human community as a whole.
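The “genetical or evolutionary” search did indeed become, decades later, the genetic algorithm. A minimal sketch of the technique, with an invented fitness function (the classic “count the 1-bits” toy problem) chosen purely for illustration: keep a population of candidate bit-strings, let the fitter half survive and reproduce with occasional mutation, and repeat.

```python
import random

random.seed(2)

def fitness(genes):
    # Invented objective for illustration: count of 1-bits ("OneMax").
    return sum(genes)

LENGTH, POP, GENERATIONS = 12, 20, 60

# A random initial population of candidate "gene" combinations.
population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for _ in range(GENERATIONS):
    # Survival: the fitter half "hits upon the right combination of genes."
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]
    # Reproduction with occasional mutation (each bit flips with 5% chance).
    children = [[g if random.random() > 0.05 else 1 - g for g in parent]
                for parent in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(fitness(best))  # climbs toward LENGTH as the search proceeds
```

As with the training loop, the search is scripted and random, never interpretive: fitness is applied from outside, and no candidate knows what it is searching for.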
So far, the process Turing described seems like it might indeed yield a humanlike intelligence: active, searching, fallible, given to pleasure and pain, emerging and growing over time through engagement with other intelligent beings and with a larger culture. And yet there’s a crucial difference, or at least an important sense in which you might see Turing’s thinking computer as fundamentally different from a thinking human: it was designed to appear intelligent only from the outside, with no actual intelligence inside. For instance, in searching, the machine would try out possibilities either at random or according to a fixed rule. Its searches would be a combination of scripted and random, but never interpretive or reflective—never the kind of search you do when you’re looking for a good book, or the best angle for a photograph, or the right words for an idea. The machine could only give the illusion of reflecting or interpreting.
Turing addressed this question himself in January 1952 in a panel discussion on the BBC.3 His interlocutor was the mathematician and cryptologist Max Newman, his friend and former teacher. The moderator was Richard Braithwaite, a lecturer in moral science at Cambridge and, like Turing, a fellow of King’s College. Occasionally Braithwaite returned to his own view that, in order to learn, a machine would need “springs of action,” something like “appetites” or “interests” so that it would pay attention to the relevant factors in its environment. Newman also described the essence of human thinking in active terms, such as the “sudden pounce on an idea.”
But Turing responded to Braithwaite that even without appetites or interests, a machine could try out combinations at random and then receive affirmations or corrections. Of course, in order to imitate a human convincingly the machine would have to appear to have free will. Turing proposed two ways to accomplish this. The first was to include a random element in the machine’s behavior, “something like a roulette wheel or a supply of radium.” Here he seemed to conflate acting freely with acting arbitrarily.
Turing’s second idea was to base the appearance of autonomy and free will on the observer’s ignorance, both of what was happening inside the machine and of the consequences of any given facts or principles. The mathematician Ada Lovelace had said of the Analytical Engine—the calculating machine designed in the 1830s by Charles Babbage, with whom Lovelace collaborated in developing the theory of mechanical computation—that it had “no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.” Turing pointed out that this assumed that when we give the machine its orders we necessarily grasp at once all the consequences of these orders, an assumption that’s clearly false. If the machine arrives at consequences we hadn’t foreseen, it’s arguably originating something.
Intelligence, in other words, was in the ignorant eye of the beholder, unless it was in the private experience of the intelligent being, where it was inaccessible to science. Pressed to define thinking itself, as opposed to its outward appearance, Turing couldn’t say more than that it was “a sort of buzzing that went on inside my head.” Ultimately, the only way to be sure that a machine could think was “to be the machine and to feel oneself thinking.” But that way lay solipsism, not science. From the outside, Turing argued, a thing could seem to be intelligent only as long as one didn’t know all its rules of behavior. A science of the inner workings of intelligence would be paradoxical, since any appearance of intelligence would evaporate in its face. Newman agreed, drawing an analogy to the beautiful ancient mosaics of Ravenna. If you scrutinized these closely, you might say, “Why, they aren’t really pictures at all, but just a lot of little colored stones with cement in between.” Intelligent thought was a mosaic of simple operations that, viewed up close, disappeared into its mechanical parts.
The necessity of measuring intelligence from without and not within was a crucial point in “Computing Machinery and Intelligence.” Turing specified that a machine must be admissible for the test even if its engineers couldn’t fully describe how it worked. The “teacher” of the machine might be largely ignorant of what went on inside; if not, the teacher would lose any sense that the machine was intelligent. But the same would be true of a human being, according to Turing. If you could attribute a human’s behavior entirely to a set of explicit rules, you’d come to see the human as an unintelligent machine. Moreover, Turing thought such a set of rules did in fact exist for humans—not a complete list of “precepts such as ‘Stop if you see red lights,’” but a set of “laws of behavior” or “laws of nature as applied to a man’s body such as ‘if you pinch him he will squeak.’” Although in his imagined dialogues Turing conjured a mechanical intelligence with authentic personhood, he also denied there was any such thing. His paper is therefore fundamentally and strangely at odds with itself.
*
It wasn’t just Turing. Remarkably, the pioneers of artificial intelligence shared a consensus that natural intelligence didn’t exist: it was an illusion, a mirage, a trick of light and color. This was a strange intellectual moment when people in several fields—evolutionary biology, psychology, the new field of cybernetics, which would give rise to computer science—were busy denying the real existence of mind and mind-made meaning in the world and eradicating all mention of these from science. Purging any reference to irreducible mind or meaning became a hallmark of the sciences, distinguishing them from humanistic ways of thinking. By insisting that mind reduced to meaningless bits, and that they alone grasped this fundamental reality, scientists partitioned their fields from all those benighted humanistic disciplines that dwelt at the frothy level of illusory meanings. Turing was instrumental in this partitioning, even as he himself included whimsical literary dialogues in his paper on machine intelligence and made an engagement with poetry and novels the chief characteristic of his hypothetical intelligent machines.
To be properly scientific, according to the new standard, an explanation must take a certain narrow form: it must be the reductive account of passive, mechanical devices. This ideal of scientific explanation had a much longer history. During the seventeenth century, various philosophical revolutionaries, especially René Descartes, held that a philosopher’s understanding of nature should be like a clockmaker’s understanding of a clock. But Descartes specifically exempted human intelligence and selfhood from the purview of his science.
Moreover, the classical mechanist standard clearly didn’t apply in all areas of physical science, and this became ever more the case as the modern physical sciences developed. According to quantum mechanics, for instance, it’s impossible to give a complete, reductive description of a subatomic particle; if you know the exact position of an electron, you can’t know its velocity, and vice versa. Still, despite the ever more apparent limitations of the reductive, mechanical model of science, during the later nineteenth and twentieth centuries scientists enthusiastically applied it to living things, including human beings, establishing a view of humans as passive, mechanical devices through and through.
Around the time Turing wrote his paper, neo-Darwinist evolutionary theorists were arriving at the “modern synthesis” of evolution and genetics, casting living beings as the objects of external forces, able to do nothing but undergo random genetic variations and be acted upon by natural selection. Behaviorists such as the Harvard psychologist B.F. Skinner were eliminating all mention of “consciousness,” “mind,” “will,” and “intellect” from psychology. Cyberneticists were founding their new science on a view of intelligence epitomized by the English psychiatrist and early cyberneticist Ross Ashby when he observed, “What I am saying is that…‘real’ intelligence does not exist. It is a myth. It has come into existence in the same way that the idea of ‘real’ magic comes to a child who sees conjuring tricks.”4 To speak of intelligence or mind had become childish, naive, the opposite of scientific. Yet beings with minds are ubiquitous elements of our empirical experience. Why would it be more scientific to deny their existence than to acknowledge it?
During the 1970s Berkeley became the headquarters of the resistance to this approach to artificial intelligence. Two UC Berkeley philosophers, Hubert Dreyfus and John Searle, devoted careers to criticizing the founding assumptions of AI, in Dreyfus’s case influencing research in the field itself. Dreyfus argued that the brain is no electronic digital computer, operating according to formal rules; thinking is a continual, physical engagement with the world. A brain, Dreyfus maintained, is part of a body, and in order to think you need the rest of the body, too.5
A new tradition of AI researchers responded to this critique in the 1980s by designing “embodied” artificial intelligences, robots with sensors that moved and engaged with their surroundings. The MIT roboticist Rodney Brooks built Herbert, for instance, a robot that wandered around the lab, going into people’s offices and stealing empty soda cans. But even Brooks said that intelligence and cognition were “in the eye of the observer.” He designed each robot using discrete behavior modules in the various parts, with no single module containing instructions for the robot’s overall project. Brooks took this to represent how natural creatures navigated the world. He said his robots looked as though they were acting on a centralized set of purposes, but this was only an illusion: they contained no centralized, intelligent self.6
Searle’s critique of AI, unlike Dreyfus’s, didn’t inspire a new research program, but it did include a hypothetical scenario that became peculiarly famous. Searle imagined himself in a locked room being handed bits of writing in Chinese, along with instructions in English for how to compose some of these into appropriate responses. To people outside, he reckoned, it would seem as though someone in the room understood Chinese, but in fact no understanding of Chinese would be happening. Likewise, a digital computer programmed to generate appropriate text wouldn’t be understanding what it was doing. Passing the Turing Test, then, was no guarantee of intelligence. Searle rejected what he called the “underlying dualism” of the idea that thinking consists of information processing: the assumption that the software of thinking was separable from the hardware. Neither lactation nor photosynthesis could happen in a silicon machine; similarly, only a brain’s biochemistry could secrete the “intentionality” that Searle took to be the basis of thought.7
Still, Searle described mental phenomena like consciousness not as primitive constituents of the world but as “emergent” features arising from the complex connectivity among neurons.8 He said that mental phenomena couldn’t cause anything to happen at the lower level of “hardware.” Any appearance of intelligence at that level was an “as-if” situation, nothing but an “optical illusion.” Searle therefore recommended “inverting” any explanation in cognitive science that assigned a function to a mental cause. For instance, rather than saying that to keep my retinal image stable while I’m driving I make lots of tiny eye movements, instead we should say that the tiny movements of my eyes keep my retinal image stable. This simple maneuver, grammatically omitting my agency in moving my eyes, “eliminates [the] mental cause altogether. There is nothing there except a brute physical mechanism that produces a brute physical effect.”
The nineteenth-century English naturalist T.H. Huxley had said essentially the same thing a century earlier: consciousness was a “collateral product,” like the whistle on a locomotive or the bell on a clock.9 Human feelings of volition were not functional causes on the lowest level; they were merely secondary consequences of physical states of the brain’s machinery. Each critique, it seems, has led back to the same place: an emptiness, at bottom, in place of a thinking mind. This emptiness axiom was so firmly established in the founding of AI that it appears to have been invincible.
Despite the impossibility of knowing what thinking was or that it existed, Turing wrote that it was “usual to have the polite convention that everyone thinks.” Alas, it has been anything but usual. For as long as there have been people, no doubt, they’ve been declining to assume that other people think. But during the first half of the twentieth century, this attitude took on the newly powerful guise of science. In addition to informing AI’s emptiness axiom, the reductive, scientistic approach to mind expressed itself in the new regime of intelligence testing, which represented intelligence as reducible to a unitary, measurable quantity—a founding axiom, in turn, of the eugenics movement. Another central principle of the eugenics movement from midcentury onward was the causal attribution of qualities of mind to DNA. If intelligence reduced to certain nucleotides in the right places, once again this implied it could be rigorously quantified. By reducing intelligence clean out of existence, the founders of AI—and their fellow-travelers in cybernetics, biology, and behaviorist psychology—were carrying the reigning reductive approach to its logical extreme.
These reductive models of human beings authorized all sorts of discriminatory and repressive measures, from educational policies that treated students unequally by race, class, and sex to tens of thousands of forced sterilizations. In contrast, the emancipatory movements of the same period, in which people belatedly came to ascribe full human intelligence to other people, grew not from any reductive science of intelligence but from the irreducible action of intelligence in the world, from inner personhood irrefutably asserting itself. “We can never be satisfied,” Martin Luther King Jr. proclaimed, “as long as our children are stripped of their selfhood.”
I don’t mean that Turing had any pernicious purpose in developing his experimental approach to machine intelligence. He was in fact on the receiving end of a dehumanizing miscarriage of science when the British government sentenced him in 1952 to hormonal treatment with diethylstilbestrol, a nonsteroidal estrogen used to perform what’s known as “chemical castration,” as an alternative to imprisonment for homosexuality. This treatment likely led to his probable suicide by cyanide poisoning two years later. But Turing did participate in the establishment of a bizarre and darkly potent idea of human personhood: the idea that there’s no such thing. The personable interlocutors in his dialogues in “Computing Machinery and Intelligence” suggest that he did so in some way despite himself.
*
In the new generative-AI language models, we have a process like what Turing described: fallible neural networks programmed to search for patterns and improve their pattern-finding ability over time. The result has been the logical outcome of Turing’s approach: an extremely complex yet hollow system, which exploits its audience’s ignorance to present the appearance of a person while containing no actual personhood.
The programming approaches that Turing described have been tremendously powerful. For better and worse, they’ve remade the world. But in one regard they have been utterly ineffectual: they’ve gotten us no closer to understanding the basis of living intelligence. In retrospect this is unsurprising, since they were predicated upon the axiom that living intelligence doesn’t exist. Turing’s literary dialogues seem to me to indicate what’s wrong with Turing’s science as an approach to intelligence. They suggest that an authentic humanlike intelligence resides in personhood, in an interlocutor within, not just the superficial appearance of an interlocutor without; that intelligence is a feature of the world and not a figment of the imagination.
Recently I was talking with a group of very smart undergraduates, and we got to discussing the new AIs and what sort of intelligence they have, if any.10 Suddenly one of the students said, “I wonder though, maybe that’s all I do too! I just derive patterns from my experiences, then spit them back out in a slightly different form.” My answer came out of my mouth almost before I could think: “No! Because you’re you in there thinking and responding. There’s no ‘I’ in ChatGPT.” He smiled uncertainly. How can we tell there’s no “I” in there, he and the others wondered? To insist that ChatGPT can’t be intelligent because it’s a computer system and not a living thing is just a dogmatic assertion, not a reasoned argument.
How do we know when we’re in the presence of another intelligent being? Definitely not by giving it a test. We recognize an intelligent being by a kind of sympathetic identification, a reciprocal engagement, a latching of minds. Turing was definitely on to something with his idea about conversations, and if we were able to have conversations like the ones he imagined with machines, that might be a different matter. It wouldn’t be a test of artificial intelligence, but it might be a compelling indication of it. Such machines, though, would be fundamentally different from the generative AIs. To contemplate what they might be like, I think we’d need to draw upon the very sort of intelligence whose existence the founders of AI denied: an irreducibly reflective, interpretive kind of thinking. In fact, the sort Turing used to imagine conversing with intelligent machines.
Whether or not the new generative AIs pass the Turing Test, they’re not intelligent. They’re the products of an engineering program founded on the denial of intelligence. If we want a legendary destructive power to compare them to, they’re nothing whatsoever like Frankenstein’s monster, who was monstrous by virtue of his intellectual and emotional self. Perhaps they’re something like Charybdis, the fearsome whirlpool that almost destroyed Odysseus: eternally sucking the world in and belching it out, hollow to the very depths.
Appendix: ChatGPT’s responses, taken on April 17, 2023
**1) A sonnet on the subject of the Forth Bridge**
Ode to the Forth Bridge, a Mighty Sight
With steel and sweat, a marvel takes its form,
A bridge of strength, enduring and sublime.
Spanning the Forth, defying wind and storm,
A feat of human ingenuity, prime.

A symphony of girders standing tall,
In crimson hue, majestic in their might.
A beacon of engineering for all,
A masterpiece, a wonder of our sight.

For years, men toiled with grit and skill untold,
A symphony of workers, hand in hand,
A saga of labor, brave and bold,
Each rivet placed with care, as planned.

Ode to the Forth Bridge, a testament,
To human prowess, prowess excellent.
A sight that draws the eye, a work of art,
Its graceful curves, a dance of strength and grace.

A marvel of design, a masterpiece,
A symbol of industrial embrace.
Through fog and rain, it stands unwavering,
A beacon of resilience and might.

A triumph of human innovation,
A bridge that spans the waters, day and night.
Its towers rise, a symbol of resolve,
A testament to human skill and toil.

A monument of progress, strong and bold,
A marvel that ignites our awe and foil.
So here’s to you, dear Forth Bridge, so grand,
A testament to human’s mighty hand.
2) ChatGPT’s response to whether “a spring day” would do as well as “a summer’s day” in Shakespeare’s sonnet
Certainly! As an AI language model, I can certainly modify the first line of the sonnet to suit your preference. Here’s an alternative version with “a spring day” in place of “a summer’s day”:
Sonnet: A Spring Day’s Comparison
Shall I compare thee to a spring day?
Thou art as lovely, fresh, and full of grace.
When blossoms bloom, in colors bright and gay,
And nature’s beauty shines in every place.

Thy warmth, a gentle sun’s embrace,
Brings life to earth, with promises anew.
Thy fragrant breeze, a sweet and subtle trace,
Delights the senses, and my heart pursue.

Though summer’s heat, with fiery blaze,
May wilt the blooms, and scorch the verdant scene,
In spring, thy charms, forever amaze,
A time of growth, of youth, and dreams serene.

So let me compare thee to a spring day,
With beauty blooming, in a wondrous way.
3) ChatGPT’s response to the possibility of substituting “a winter’s day”
Certainly! Here’s an alternative version with “a winter’s day” in place of “a summer’s day”:
Sonnet: A Winter’s Day’s Comparison
Shall I compare thee to a winter’s day?
Thou art as pure, serene, and coldly bright.
When frosty breath hangs in the air, at play,
And nature rests in peaceful, snowy light.

Thy grace, a tranquil landscape dressed in white,
Brings stillness to the world, a hush profound.
Thy icy touch, a kiss of chill and bite,
Creates a beauty, silent and profound.

Though summer’s warmth, with vibrant hues and sound,
May stir the senses, and inspire cheer,
In winter’s calm, a quiet can be found,
A time of peace, when all seems crystal-clear.
Jessica Riskin is the Frances and Charles Field Professor of History at Stanford. She is currently writing a book about the French naturalist Jean-Baptiste Lamarck and the origins and history of evolutionary theory. (June 2023)