More than two years ago, a pair of Google researchers started pushing the company to release a chatbot built on technology more powerful than anything else available at the time. The conversational computer program they had developed could confidently debate philosophy and banter about its favorite TV shows, while improvising puns about cows and horses.

Original translation: 龙腾网 http://www.ltaaa.cn. Please credit the source when reposting.

The researchers, Daniel De Freitas and Noam Shazeer, told colleagues that chatbots like theirs, supercharged by recent advances in artificial intelligence, would revolutionize the way people searched the internet and interacted with computers, according to people who heard the remarks.

They pushed Google to give access to the chatbot to outside researchers, tried to get it integrated into the Google Assistant virtual helper and later asked for Google to make a public demo available.

Google executives rebuffed them at multiple turns, saying in at least one instance that the program didn’t meet company standards for the safety and fairness of AI systems, the people said. The pair quit in 2021 to start their own company to work on similar technologies, telling colleagues that they had been frustrated they couldn’t get their AI tool at Google out to the public.

Now Google, the company that helped pioneer the modern era of artificial intelligence, finds its cautious approach to that very technology being tested by one of its oldest rivals. Last month Microsoft Corp. announced plans to infuse its Bing search engine with the technology behind the viral chatbot ChatGPT, which has wowed the world with its ability to converse in humanlike fashion. Developed by a seven-year-old startup co-founded by Elon Musk called OpenAI, ChatGPT piggybacked on early AI advances made at Google itself.

Months after ChatGPT’s debut, Google is taking steps toward publicly releasing its own chatbot based in part on technology Mr. De Freitas and Mr. Shazeer worked on. Under the moniker Bard, the chatbot draws on information from the web to answer questions in a conversational format. Google said on Feb. 6 it was testing Bard internally and externally with the aim of releasing it widely in coming weeks. It also said it was looking to build similar technology into some of its search results.

Google’s relatively cautious approach was shaped by years of controversy over its AI efforts, from internal arguments over bias and accuracy to the public firing last year of a staffer who claimed that its AI had achieved sentience.

Those episodes left executives wary of the risks public AI product demos could pose to its reputation and the search-advertising business that delivered most of the nearly $283 billion in revenue last year at its parent company, Alphabet Inc., according to current and former employees and others familiar with the company.

“Google is struggling to find a balance between how much risk to take versus maintaining thought leadership in the world,” said Gaurav Nemade, a former Google product manager who worked on the company’s chatbot until 2020.

Messrs. De Freitas and Shazeer declined requests for an interview through an external representative.

A Google spokesman said their work was interesting at the time, but that there is a big gap between a research prototype and a reliable product that is safe for people to use daily. The company added that it has to be more thoughtful than smaller startups about releasing AI technologies.

Google’s approach could prove to be prudent. Microsoft said in February it would put new limits on its chatbot after users reported inaccurate answers, and sometimes unhinged responses when pushing the app to its limits.

Sundar Pichai, chief executive of Alphabet and its Google subsidiary, told employees that some of the company’s most successful products earned user trust over time.
PHOTO: KYLE GRILLOT/BLOOMBERG NEWS

In an email to Google employees last month, Sundar Pichai, chief executive of both Google and Alphabet, said some of the company’s most successful products weren’t the first to market but earned user trust over time.

“This will be a long journey—for everyone, across the field,” Mr. Pichai wrote. “The most important thing we can do right now is to focus on building a great product and developing it responsibly.”

Google’s chatbot efforts go as far back as 2013, when Google co-founder Larry Page, then CEO, hired Ray Kurzweil, a computer scientist who helped popularize the idea that machines would one day surpass human intelligence, a concept known as “technological singularity.”

Mr. Kurzweil began working on multiple chatbots, including one named Danielle based on a novel he was working on at the time, he said later. Mr. Kurzweil declined an interview request through a spokeswoman for Kurzweil Technologies Inc., a software company he started before joining Google.

Google also purchased the British artificial-intelligence company DeepMind, which had a similar mission of creating artificial general intelligence, or software that could mirror human mental capabilities.

At the same time, academics and technologists increasingly raised concerns about AI—such as its potential for enabling mass surveillance via facial-recognition software—and pressured companies such as Google to commit not to pursue certain uses of the technology.

Partly in response to Google’s growing stature in the field, a group of tech entrepreneurs and investors including Mr. Musk formed OpenAI in 2015. Initially structured as a nonprofit, OpenAI said it wanted to make sure AI didn’t fall prey to corporate interests and was instead used for the good of humanity. (Mr. Musk left OpenAI’s board in 2018.)

Google eventually promised in 2018 not to use its AI technology in military weapons, following an employee backlash against the company’s work on a U.S. Department of Defense contract called Project Maven that involved automatically identifying and tracking potential drone targets, like cars, using AI. Google dropped the project.

Mr. Pichai also announced a set of seven AI principles to guide the company’s work, designed to limit the spread of unfairly biased technologies, such as that AI tools should be accountable to people and “built and tested for safety.”

Messrs. Shazeer and De Freitas at their new company’s office in Palo Alto.
PHOTO: WINNI WINTERMEYER FOR THE WASHINGTON POST/GETTY IMAGES

Around that time, Mr. De Freitas, a Brazilian-born engineer working on Google’s YouTube video platform, started an AI side project.

As a child, Mr. De Freitas dreamed of working on computer systems that could produce convincing dialogue, his fellow researcher Mr. Shazeer said during a video interview uploaded to YouTube in January. At Google, Mr. De Freitas set out to build a chatbot that could mimic human conversation more closely than any previous attempts.

For years the project, originally named Meena, remained under wraps while Mr. De Freitas and other Google researchers fine-tuned its responses. Internally, some employees worried about the risks of such programs after Microsoft was forced in 2016 to end the public release of a chatbot called Tay after users goaded it into problematic responses, such as support for Adolf Hitler.

The first outside glimpse of Meena came in 2020, in a Google research paper that said the chatbot had been fed 40 billion words from social-media conversations in the public domain.

OpenAI had developed a similar model, GPT-2, based on 8 million webpages. It released a version to researchers but initially held off on making the program publicly available, saying it was concerned it could be used to generate massive amounts of deceptive, biased or abusive language.

At Google, the team behind Meena also wanted to release their tool, even if only in a limited format as OpenAI had done. Google leadership rejected the proposal on the grounds that the chatbot didn’t meet the company’s AI principles around safety and fairness, said Mr. Nemade, the former Google product manager.

A Google spokesman said the chatbot had been through many reviews and barred from wider releases for various reasons over the years.

The team continued working on the chatbot. Mr. Shazeer, a longtime software engineer at the AI research unit Google Brain, joined the project, which they renamed LaMDA, for Language Model for Dialogue Applications. They injected it with more data and computing power. Mr. Shazeer had helped develop the Transformer, a widely heralded new type of AI model that made it easier to build increasingly powerful programs like the ones behind ChatGPT.
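The Transformer architecture mentioned above is built around self-attention, in which every token in a sequence scores every other token and blends their representations accordingly. As a rough illustration only (a toy NumPy sketch, not Google's or OpenAI's implementation), the core scaled dot-product attention step looks like this:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy version of the Transformer's core operation: each query scores
    every key, and the values are mixed according to those scores."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                             # weighted mix of values

# Self-attention over 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4)
```

Stacking many such layers, with learned projections producing Q, K, and V, is what made it practical to scale up programs like LaMDA and the models behind ChatGPT.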

However, the technology behind their work soon led to a public dispute. Timnit Gebru, a prominent AI ethics researcher at Google, said in late 2020 she was fired for refusing to retract a research paper on the risks inherent in programs like LaMDA and then complaining about it in an email to colleagues. Google said she wasn’t fired and claimed her research was insufficiently rigorous.

A Google virtual conference in 2021 showed an example of a conversation with LaMDA.
PHOTO: DANIEL ACKER/BLOOMBERG NEWS

Google’s head of research, Jeff Dean, took pains to show Google remained invested in responsible AI development. The company promised in May 2021 to double the size of the AI ethics group.

A week after the vow, Mr. Pichai took the stage at the company’s flagship annual conference and demonstrated two prerecorded conversations with LaMDA, which, on command, responded to questions as if it were the dwarf planet Pluto or a paper airplane.

Google researchers prepared the examples days before the conference following a last-minute demonstration delivered to Mr. Pichai, said people briefed on the matter. The company emphasized its efforts to make the chatbot more accurate and minimize the chance it could be misused.

“Our highest priority, when creating technologies like LaMDA, is working to ensure we minimize such risks,” two Google vice presidents said in a blog post at the time.

Google research chief Jeff Dean speaking at an event in San Francisco in 2020.
PHOTO: MONICA M DAVEY/SHUTTERSTOCK

Google later considered releasing a version of LaMDA at its flagship conference in May 2022, said Blake Lemoine, an engineer the company fired last year after he published conversations with the chatbot and claimed it was sentient. The company decided against the release after Mr. Lemoine’s conclusions began generating controversy internally, he said. Google has said Mr. Lemoine’s concerns lacked merit and that his public disclosures violated employment and data-security policies.

As far back as 2020, Mr. De Freitas and Mr. Shazeer also looked for ways to integrate LaMDA into Google Assistant, a software application the company had debuted four years earlier on its Pixel smartphones and home speaker systems, said people familiar with the efforts. More than 500 million people were using Assistant every month to perform basic tasks such as checking the weather and scheduling appointments.

The team overseeing Assistant began conducting experiments using LaMDA to answer user questions, said people familiar with the efforts. However, Google executives stopped short of making the chatbot available as a public demo, the people said.

Google’s reluctance to release LaMDA to the public frustrated Mr. De Freitas and Mr. Shazeer, who took steps to leave the company and begin working on a startup using similar technology, the people said.

Mr. Pichai personally intervened, asking the pair to stay and continue working on LaMDA but without making a promise to release the chatbot to the public, the people said. Mr. De Freitas and Mr. Shazeer left Google in late 2021 and incorporated their new startup, Character Technologies Inc., in November that year.

Character’s software, released last year, allows users to create and interact with chatbots that role-play as well-known figures such as Socrates or stock types such as psychologists.

“It caused a bit of a stir inside of Google,” Mr. Shazeer said in the interview uploaded to YouTube, without elaborating, “but eventually we decided we’d probably have more luck launching stuff as a startup.”

Since Microsoft struck its new deal with OpenAI, Google has fought to reassert its identity as an AI innovator.

Google announced Bard in February, on the eve of a Microsoft event introducing Bing’s integration of OpenAI technology. Two days later, at an event in Paris that Google said was originally scheduled to discuss more regional search features, the company gave press and the broader public another glimpse of Bard, as well as a search tool that used AI technology similar to LaMDA to generate textual responses to search queries.

Google said that it often reassesses the conditions to release products and that because there is a lot of excitement now, it wanted to release Bard to testers even if it wasn’t perfect.

Microsoft Chief Executive Satya Nadella speaking last month at an event at the company’s headquarters in Redmond, Wash.
PHOTO: CHONA KASINGER/BLOOMBERG NEWS
Microsoft employee Alexander Campbell demonstrating the integration of OpenAI technology with Microsoft’s Bing search engine and Edge browser.
PHOTO: STEPHEN BRASHEAR/ASSOCIATED PRESS

Since early last year, Google has also had internal demonstrations of search products that integrate responses from generative AI tools like LaMDA, Elizabeth Reid, the company’s vice president of search, said in an interview.

One use case for search where the company sees generative AI as most useful is for specific types of queries with no one right answer, which the company calls NORA, where the traditional blue Google links might not satisfy the user. Ms. Reid said the company also sees potential search use cases for other types of complex queries, such as solving math problems.

As with many similar programs, accuracy remained an issue, executives said. Such models have a tendency to invent a response when they don’t have sufficient information, something researchers call “hallucination.” Tools built on LaMDA technology have in some cases responded with fictional restaurants or off-topic responses when asked for recommendations, said people who have used the tool.

Microsoft called the new version of Bing a work in progress last month after some users reported disturbing conversations with the chatbot integrated into the search engine, and introduced changes, such as limiting the length of chats, aimed at reducing the chances the bot would spout aggressive or creepy responses. Both Google and Microsoft’s previews of their bots in February included factual inaccuracies produced by the programs.

“It’s sort of a little bit like talking to a kid,” Ms. Reid said of language models like LaMDA. “If the kid thinks they need to give you an answer and they don’t have an answer, then they’ll make up an answer that sounds plausible.”

Google continues to fine-tune its models, including training them to know when to profess ignorance instead of making up answers, Ms. Reid said. The company added that it has improved LaMDA’s performance on metrics like safety and accuracy over the years.
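The idea of teaching a model to profess ignorance can be illustrated with a deliberately simplified sketch (hypothetical confidences and threshold, not Google's actual method): a system that always returns its best guess will fabricate an answer even when it has no good one, while a confidence cutoff lets it abstain instead.

```python
def answer(scores, threshold=0.6):
    """Return the highest-confidence candidate, or abstain below a cutoff.

    scores: dict mapping candidate answers to model confidence in [0, 1].
    threshold: illustrative cutoff; a real system would calibrate this.
    """
    best, conf = max(scores.items(), key=lambda kv: kv[1])
    if conf < threshold:
        return "I don't know."  # profess ignorance instead of guessing
    return best

# High confidence: answer normally.
print(answer({"Paris": 0.95, "Lyon": 0.03}))               # Paris
# No candidate is confident: abstain rather than invent one.
print(answer({"1912": 0.34, "1913": 0.33, "1914": 0.33}))  # I don't know.
```

Without the threshold check, the second call would confidently return "1912", which is the toy analogue of the "hallucination" behavior the article describes.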

Integrating programs like LaMDA, which can synthesize millions of websites into a single paragraph of text, could also exacerbate Google’s long-running feuds with major news outlets and other online publishers by starving websites of traffic. Inside Google, executives have said Google must deploy generative AI in results in a way that doesn’t upset website owners, in part by including source links, according to a person familiar with the matter.

“We’ve been very careful to take care of the ecosystem concerns,” said Prabhakar Raghavan, the Google senior vice president overseeing the search engine, during the event in February. “And that’s a concern that we intend to be very focused on.”
