There are no virtuous participants in the artificial intelligence race, but if there were one, it might have been Anthropic.

Large language model tech is built on mountains of stolen data. The entire sum of decades of the open internet was downloaded and converted by billionaires into tech that threatens to destroy billions of jobs, wreck the global economy, and potentially end the human race. But hey, at least in the short term, shareholders (might) make a stack of cash.

There are no moral leaders in this space, sadly. But at the very least, Anthropic of Claude fame took a strong stand this week against the United States government, drawing the ire of the Trump administration.

Anthropic was designated a supply chain risk this week, and summarily and forcibly banned from use in U.S. government agencies. Why? Anthropic said in a blog post that it revolved around its two major red lines: no Claude AI for use in autonomous weapons, and no mass surveillance of United States citizens.

It's not unexpected that mainstream governments of any stripe would be salivating at the thought of turbo-charged AI mass surveillance, but it is unexpected that a big tech corp like Anthropic would be willing to take such a strong stance against it in an era increasingly devoid of administrative morality. But hey, there's always someone willing to race to the metaphorical moral abyss in the name of money.

OpenAI CEO and part-time supervillain Sam Altman thankfully stepped in to bail out the U.S. Department of War, pledging ChatGPT and other OpenAI technologies to the cause.

In a post on X, Altman claimed that OpenAI's models would not be used for mass surveillance, but that claim was immediately contradicted by a U.S. government official, who said that OpenAI's models would be used for "all lawful means." Mass surveillance of American citizens is lawful in "some scenarios" under the post-9/11 U.S. Patriot Act, which permits mass harvesting of communications metadata, even if some aspects of it have been curtailed in recent years.

Anthropic wanted control over the way its technologies would be used, as opposed to relying on the interpretation of laws and legal frameworks that even now are the subject of debate and lawsuits. Altman, by comparison, is happy to let the U.S. government decide how OpenAI's systems are deployed, which under certain segments of the Patriot Act could quite easily lead to the mass surveillance of U.S. citizens, directly or incidentally, as part of provisions on surveilling foreign citizens (which, by the way, is completely legal under U.S. law).

The move sparked immediate backlash in ChatGPT and OpenAI communities online, with Reddit threads gathering thousands of upvotes from users claiming to be unsubscribing.

OpenAI recently closed a funding round valuing the company at a frankly absurd $730 billion, with backers including Amazon, SoftBank, and NVIDIA. Microsoft professed that it will continue to work with OpenAI, despite saying in a recent FT interview that it would begin building and deploying its own models.

Unfortunately, there aren't many other AI companies willing to take a stance against mass surveillance or autonomous weapons. Google removed an explicit ban on such technology from its internal rules last year. Microsoft is cool with autonomous weapons too, as long as a human pulls the final trigger. Amazon has no prohibitions whatsoever beyond vague "responsible use" language, and Meta hasn't been shy about courting Pentagon military contracts either. And we all know Palantir is totally for it.

The genie is out of the bottle, so to speak. ChatGPT is great at textual human mimicry, but even the most cutting-edge models often fail hilariously at basic, child-like logic puzzles.

Are you looking forward to a world where these hallucination-prone, easily-manipulated artificial intelligence models might eventually decide whether or not you're a threat to national security?

As long as Sam Altman and his buddies can stay rich, they don't seem to give much of a fuck about it — or you.

Since writing this, OpenAI and Sam Altman have been on a damage control mission.

In an "AMA"-style Q&A session on X, Sam Altman claimed that the United States "Department of War" would respect OpenAI's stated "red lines" against using AI tech for autonomous weapons or mass surveillance of United States citizens, though he remained largely vague about how these safeguards would be implemented and maintained.

He suggested that existing U.S. law protects against these situations by default, although legal experts have warned that surveillance of non-U.S. citizens can permit the indirect or incidental collection of data on U.S. citizens.

People aren't exactly buying it. It makes little sense for the Trump administration to come out so strongly against Anthropic's stated position while leaping headfirst into supporting OpenAI's. The core contention seems to be that OpenAI is happy to let the U.S. Department of War interpret what constitutes "legal," while Anthropic wants to maintain full control over how its technology is used.

It seems as though Altman is relying purely on hopes and prayers that OpenAI's technology won't be used for nefarious means, which seems naïve at best and dishonest at worst. The current U.S. administration has shown itself willing to at the very least stretch definitions and precedents outlined in the U.S. Constitution and across historic landmark legal rulings. I'm not sure why there's any reason to expect OpenAI's tech wouldn't be co-opted under the guise of "national security," a pretext that governmental institutions of all stripes have abused in the past and present.

Since this article was published, Anthropic's Claude AI app has claimed the top spot over ChatGPT on both Android and iOS. Claude AI is also available for Windows 11.