Robots are unquestionably getting more sophisticated by the year and, as a result, are becoming an integral part of our daily lives. But as our interactions with and dependence on robots increase, an important question needs to be asked: What would happen if a robot actually committed a crime, or even hurt someone, whether deliberately or by mistake?

While our first inclination might be to blame the robot, the matter of apportioning blame is considerably more complicated and nuanced than that. As with any incident involving an alleged criminal act, we need to consider a whole host of factors. Let's take a deeper look and find out who should pay when your robot breaks the law.

To better understand this issue, I spoke to robot ethics expert Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. It was through my conversation with him that I learned just how pertinent this issue is becoming. As Lin told me, "Any number of parties could be held responsible for robot misbehavior today."

Robot and machine ethics

Before we get too far along in the discussion, a distinction needs to be made between two different fields of study: robot ethics and machine ethics.

We are currently in the age of robot ethics, where the concern lies with how and why robots are designed, constructed, and used. This includes such things as domestic robots like Roomba, self-driving cars, and the potential for autonomous killing machines on the battlefield. These robots, while capable of "acting" without human oversight, are essentially mindless automatons. Robot ethics, therefore, is primarily concerned with the appropriateness of their use.

Machine ethics, on the other hand, is a bit more speculative in that it considers the future potential for robots (or more accurately, their embodied artificially intelligent programming) to have self-awareness and the capacity for moral thought. Consequently, machine ethics is concerned with the actual behavior and actions of advanced robots.

So, before any blame can be assigned to a robot for a nefarious action, we would need to decide which of these two categories applies. For now and the immediate future, robot ethics most certainly qualifies, in which case accountability should be attributed to the manufacturer, the owner, or in some cases even the victim.

But looking further into the future, to a time when robots match our own level of moral sophistication, the day is coming when they will very likely have to answer for their crimes.

Manufacturer liability

For now and the foreseeable future, culpability for a robot that has gone wrong will usually fall on the manufacturer. "When it comes to more basic autonomous machines and systems," said Lin, "a manufacturer is accountable for any software or hardware defect that should have been foreseen."

He cited the hypothetical example of a Roomba that experiences a perfect storm of confusion — a set of variables that the manufacturer could not have anticipated. "One could imagine the Roomba falling off an edge and landing right on top of a cat," he said, "in which case it could be said that the manufacturer is responsible."
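
To make the foreseeability standard a little more concrete, here is a minimal sketch, in Python, of the kind of fail-closed cliff-sensor check a manufacturer might be expected to build in. It is purely illustrative: the sensor names, threshold, and failure handling are all hypothetical assumptions, not actual Roomba firmware.

```python
# Purely hypothetical illustration -- not actual Roomba firmware.
# A manufacturer-style failsafe: refuse to advance when the
# downward-facing "cliff" sensors suggest a drop-off ahead.

CLIFF_THRESHOLD_MM = 40  # assumed maximum safe drop (made-up figure)


def safe_to_advance(cliff_readings_mm):
    """Return True only if every sensor sees floor within range.

    A missing or out-of-range reading (e.g. a sensor returning None
    after a hardware fault) is treated as unsafe by default -- the
    sort of defect a manufacturer could reasonably be expected to
    foresee and guard against.
    """
    return all(
        reading is not None and reading <= CLIFF_THRESHOLD_MM
        for reading in cliff_readings_mm
    )


if __name__ == "__main__":
    print(safe_to_advance([12, 15, 11]))    # True: floor detected everywhere
    print(safe_to_advance([12, None, 11]))  # False: failed sensor, stop
    print(safe_to_advance([12, 300, 11]))   # False: edge ahead, stop
```

If a machine shipped without a fail-closed check of this sort went off a ledge and onto the cat, the defect would arguably be a foreseeable one, which is exactly where Lin places manufacturer responsibility.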

Indeed, because the robot is just operating according to the limits of its programming, it cannot be held accountable for its actions. There was absolutely no malice involved. And assuming that the robot was being used according to instructions and not modified in any way, the consumer shouldn't be held liable either.

Outside intended use

Which, as Lin pointed out, raises another issue.
"It's also possible that owners will misuse their robots and hack directly into them," he said. Lin pointed to the example of home defense robots that are being increasingly used in Asia — including robots that go on home patrol and can shoot pepper spray and paint-ball guns. "It's conceivable that someone might want to weaponize the Roomba," he told me, "in which case the owner would be on the hook and not the manufacturer." In such a scenario, the robot would act in a way completely outside of its intended use, thus absolving the manufacturer from liability.

But as Lin clarified for us, it's still not as cut-and-dried as that. "Just because the owner modified the robot to do things that the manufacturer never intended or could never foresee doesn't mean the manufacturer is completely off the hook," he said. "Some might argue that the manufacturer should have foreseen the possibility of hacking, or other such modifications, and in turn built in safeguards to prevent this kind of manipulation."

Blame the victim

And there are yet other scenarios in which even the victim could be held responsible. "Consider self-driving cars," said Lin, "and the possibility that a jaywalker could suddenly run across the street and get hit." In such a case, it's the victim who is really to blame.

And indeed, one can imagine a whole host of scenarios in which people, through their own inattention or recklessness, fall prey to the growing array of powerful, autonomous machines around them.

Machines that are supposed to kill

Complicating all this even further is the potential for autonomous killing machines.

Currently, combat drones are guided remotely by human operators, who are in turn responsible for any violent action committed by the device. If an operator kills a civilian or a fellow soldier by mistake, they will have to answer for the error and, depending on the circumstances, may face a military tribunal.

That said, there are already sentry bots on duty in Israel and South Korea. What would happen if one of these robots were to kill somebody by mistake? Actually, as Lin informed us, it's already happened: back in October 2007, a semi-autonomous robotic cannon deployed by the South African army malfunctioned, killing nine "friendly" soldiers and wounding 14 others.

It would be all too convenient, and even instinctive, to blame the robot for an incident like this. But because these systems lack any kind of moral awareness, they cannot be held responsible.

Who, therefore, should account for such an egregious mistake? The person who deployed the machine? The procurement officer? The developer of the technology? Or as Lin asked, "Just how far up the chain of command should we go — and would we ever go so far as to implicate the President, who technically speaking is the Commander-in-Chief?"

Ultimately, suggested Lin, these incidents will have to be treated on a case-by-case basis. "It will all depend on the actual scenario," he said.

Quasi-persons

Looking ahead to the future, there's the potential for a kind of behavioral grey area to emerge between a fairly advanced AI and a fully robust moral machine. It's conceivable that a precursor moral AI will be developed that has a very limited sense of self-awareness and personal responsibility — but a sense of subjectivity and awareness nonetheless. There's also the potential for robots to have ethics programmed right into them.
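
As a minimal sketch of what "ethics programmed right into them" might look like at its crudest, consider a rule-based veto layer that screens every proposed action before the robot executes it. Everything here, from the Action fields to the thresholds, is a hypothetical assumption invented for illustration.

```python
# Hypothetical sketch of a hard-coded "ethics layer": a veto filter
# that screens proposed actions before execution. All names, fields,
# and thresholds are invented for illustration.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    risk_of_harm: float  # estimated probability of injuring a person
    human_nearby: bool   # from the robot's proximity sensors


def ethics_veto(action: Action) -> bool:
    """Return True if the proposed action should be blocked."""
    # Rule 1: never act when the estimated risk of harm is high.
    if action.risk_of_harm > 0.01:
        return True
    # Rule 2: be far more conservative when a person is close by.
    if action.human_nearby and action.risk_of_harm > 0.001:
        return True
    return False


if __name__ == "__main__":
    mow = Action("spin mower blade", risk_of_harm=0.005, human_nearby=True)
    dock = Action("return to dock", risk_of_harm=0.0, human_nearby=True)
    print(ethics_veto(mow))   # True: blocked while a person is nearby
    print(ethics_veto(dock))  # False: allowed
```

A fixed filter like this encodes the designer's norms rather than any moral judgment of the robot's own, which is precisely why machines that weigh decisions for themselves raise the harder questions of accountability.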

Unlike simpler automatons, these machines would be capable of actual decision-making, albeit at a very rudimentary level. In a sense, they'd be very much like children, who, depending on their age, aren't held entirely accountable for their actions.

"There's a kind of strange disconnect when it comes to robot ethics," noted Lin, "in that we're expecting near perfect behavior from robots when we don't really expect it from ourselves." He agrees that children are a kind of special case, and that they're essentially quasi-persons. Robots, he argues, may have to regarded in a similar way.

Consequently, owners of robots would have to serve as parents or guardians, ensuring that the robots learn and behave appropriately, and in some cases even taking full responsibility for their actions. "It's the same as with children," said Lin. "There will have to be a sliding scale of responsibility for robots depending on how sophisticated they are."

The rise of moral machines

And finally, there's the potential for bona fide moral machines: robots capable of knowing right from wrong. But again, this is going to prove a tricky area. An artificially intelligent robot will be endowed with a very different kind of mind than the one possessed by a human. By its very nature, it will think very differently than we do, and as a consequence it will be very difficult to know its exact inner cogitations.

But as Lin noted, this is an area that we humans are still struggling with ourselves. He pointed to recent neuroscience suggesting that we may not have as much free will as we think. Indeed, courts are beginning to have difficulty assigning blame to defendants who may suffer from biological impairments.

All this said, could we ever prove, for example, that a robot can act out of free will? Or that it truly understands the consequences of its actions? And that it really feels empathy?

If the answers are yes, then a robot could truly be made to pay for its crimes.

But more conceptually, these questions are important because, as a society, we tend to confer rights and freedoms on those persons capable of such thoughts. Thus, if we could ever prove that a robot is capable of moral action and introspection, we would not only have to hold it accountable for its actions, we would also have to endow it with fundamental rights and protections.

It would appear, therefore, that we're not too far from the day when robots will start to demand their one phone call.

This article originally appeared at io9.

Posted by George on February 16, 2013.