We are the people they warned you about
By Chris Anderson
As the “Slaughterbots” video of (fictional) killer drones goes viral, here are some quick thoughts from a guy who helped start the “hey, open source drones, what could go wrong?” movement. (I also run Dronecode, the leading drone software consortium.)
I’m not super concerned about the particular scenario in the video (swarms of microdrones carrying shaped charges hunting down individuals and putting holes in their heads) because it seems technically infeasible — microdrones are less maneuverable than insects, and we can swat bugs away easily. But I have a bigger problem with the message of the video: I’m an enabler of what’s described in that scenario, yet I have no idea what I should do differently.
Indeed, not only do I not know how we could prevent weaponization by sufficiently motivated bad guys, I’m not even quite sure what weaponization means.
In this video, at what point did the “weaponization” occur?
- A) Just by adding the shaped charge?
- B) By adding the individual targeting?
- C) By deploying as a swarm?
If you say A, then are B and C okay? (I hope so, since we’ve already done both.) And if it is A, how do you stop that part, since adding a charge is a trivial step that anyone could take?
The Autonomous Weapons group, which made this video, defines killer robots as “weapons systems that, once activated, would select & fire on targets without meaningful human control.” My question: is it just the “fire on” part that’s bad? Or are we supposed to stop “select”, too? In which case, it’s already too late (see Apple’s Face ID).
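To make that concrete: the “select” building block is already commodity open-source code. Below is a minimal sketch (my own illustration, not code from the video or from any drone project) that finds faces in a single webcam frame using OpenCV’s stock detector; everything in it ships with a default opencv-python install.

```python
# Minimal sketch: the "select" step is off-the-shelf open-source code.
# Assumes the opencv-python package; the Haar cascade file ships with it.
import cv2

# Load the stock frontal-face detector bundled with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)   # default webcam
ret, frame = cap.read()     # grab a single frame
cap.release()

if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"Detected {len(faces)} face(s) in one frame")
```

That’s roughly a dozen lines, and recognizing a specific individual rather than just any face is only a modest step beyond it with today’s freely available models.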
The problem is that the “firing on” part is technically trivial. Anyone can strap a gun or bomb to a bot, legally or not. It’s the targeting/AI that’s hard and possibly(?) controllable. Thus the paradox: putting explosives on things (“firing”) is uncontrollable. Advanced AI, by contrast, is more likely the domain of institutions such as companies, governments, or universities, which might be controllable — but that isn’t the “firing” part.
But even that last part (avoiding killer AI) is so fuzzy as to be unactionable. Since we haven’t really defined AI, it’s hard to know which part, or whom, to stop. For example, at what point along the decade-long road of DIYDrones, as the drones got smarter and added more machine learning, should we have pulled the plug? And how?
So I asked Stuart Russell, the famed Berkeley AI professor who helped make the video and closes it with a call to action to visit AutonomousWeapons.org, what I, as a drone technology leader, should do. He responded as follows:
In short, the answer is:
- Support a treaty banning lethal autonomous weapons
- Cooperate in developing and complying with a treaty mechanism (analogous to the one in the Chemical Weapons Convention) that includes “Know Your Customer” rules, various notifications and checks for large purchases, etc., to prevent large-scale diversion.
Note that this is much like the treaties and policies that most of the world has already agreed to on land mines, which are another form of autonomous weapon.
I also asked him whether I was right in thinking that the kind of weaponization shown in the video would be impossible to stop. He agreed, but felt it would happen only at a limited scale, much like ISIS’s repurposing of consumer drones today.
I don’t think there’s much that can be done to prevent small-scale diversion and repurposing as weapons, although that would still require considerable software and engineering expertise.
At a small scale, autonomy does not seem to offer much advantage compared to remote piloting, except for immunity to jamming.