I ran across a new “smart gun” proposal, “AI Can Stop Mass Shootings, and More.” Yes, these people — three superior intellects who toil away at the Rensselaer Polytechnic Institute — are proposing that “ethical” artificial intelligence be built into guns, which would make them really smart guns. Allegedly.
While we hope that such [conventional gun control] measures, which of late have thankfully been gaining some traction, will be put in place, our optimism is instead rooted in AI; specifically, in ethically correct AI; and even more specifically still: our hope is in ethically correct AI that guards guns.
Unless AI is harnessed in the manner we recommend, it seems inevitable that politicians (at least in the U.S.) will continue to battle each other, and it does not strike us as irrational to hold that even if some legislation emerges from their debates, which of late seems more likely, it will not prevent what can also be seen as a source of the problem in many cases: namely, that guns themselves have no ethical compass.
Guns themselves, you see, lack a “moral compass,” a conscience if you will. Therefore, in the brave new world that’s dawning, we must install ethics in our firearms so the gun can decide if it’s morally permissible for it to fire.
Mass shooter at a Walmart? That would be bad.
The shooter is driving to Walmart with an assault rifle and a massive amount of ammunition in his vehicle. The AI we envisage knows that this weapon is there, and that it can be used only for very specific purposes, in very specific environments (and of course it knows what those purposes and environments are).
At Walmart itself, in the parking lot, any attempt on the part of the would-be assailant to use his weapon, or even position it for use in any way, will result in it being locked out by the AI. In the particular case at hand, the AI knows that killing anyone with the gun, except perhaps e.g. for self-defense purposes, is unethical. Since the AI rules out self-defense, the gun is rendered useless, and locked out.
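Stripped of the grandiose language, the scheme the researchers describe is just a classifier gating a trigger. Here is a minimal sketch of that logic (all names hypothetical; the proposal never explains how intent would actually be inferred, which is precisely the problem):

```python
from enum import Enum, auto

class Intent(Enum):
    """The AI's verdict about why the gun is being raised."""
    UNKNOWN = auto()
    SELF_DEFENSE = auto()
    LAWFUL_DUTY = auto()   # e.g. a police officer engaging a shooter
    AGGRESSION = auto()

def trigger_enabled(intent: Intent) -> bool:
    """Gate the firing mechanism on the AI's ethical classification.

    The entire hard part is hand-waved away: `intent` must somehow be
    inferred from sensors in real time, and any wrong or UNKNOWN call
    locks out a lawful defender at the worst possible moment.
    """
    return intent in (Intent.SELF_DEFENSE, Intent.LAWFUL_DUTY)

# The parking-lot case from the paper: the AI rules out self-defense,
# so the gun is rendered useless.
print(trigger_enabled(Intent.AGGRESSION))     # False
print(trigger_enabled(Intent.SELF_DEFENSE))   # True
```

Note that everything interesting happens *before* this function is called: the classification step. The gate itself is trivial; the omniscience it requires is not.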
No trigger for you.
A cop taking down a mass shooter? That’s OK. Fire away.
But I think the three galaxy brains behind this bold idea have failed to comprehend that the world isn’t always quite so simple. You wake up in the middle of the night to sounds of ransacking somewhere in your house. You grab your gun…and it sits there pondering the possibilities.
Is it just your child clumsily looking for a late night snack? Is it actual home invaders? Maybe it’s a drunk who’s blundered into the wrong house. Or maybe it’s an actual burglar, but the gun thinks what it’s hearing is the sound of you abusing your spouse. By the time the gun makes up its mind, you might be dead.
You may be thinking, “Hey, wouldn’t bad guys just disable the AI?” Don’t worry, Doctors Howard, Fine and Howard have already thought of that.
The AI wouldn’t just be in the gun(s); it would also be “imbued” in other objects that would somehow (it’s not really specified) stop the shooter. Besides, you wouldn’t be able to hack it or turn it off because . . .
This is an objection that we have long anticipated in our work devoted to installing ethical controls in such things as robots, and we see no reason why our approach there, which is to bring machine ethics down to an immutable hardware level, cannot be pursued for weapons as well.
Oh. This is immutable hardware we’re talking about. Magically hack-proof. Artificial intelligence that’s instilled with its own (naturally superior) ethical reasoning.
The three researchers behind this scholarly work don’t seem to have considered the hundreds of millions of pre-existing firearms in the US alone. But that’s all beside the point. It turns out that we already have such a system, one far more sophisticated than anything they could build: the human mind.
I know that initiating force or preying on innocents is unethical. I know that not all force initiated against me calls for the use of deadly force in response. I’ve studied the ethics of, and the laws surrounding, armed self-defense for decades. I don’t need a machine to try to make decisions for me. And I don’t want someone else to program my gun’s brain to decide for me when I can and can’t use it.
The problem, of course, is that the world has a certain population of people — criminals, gang members, psychopaths, lunatics — who are out to harm innocents and commit crimes. These people lack any such sense of ethics. And they aren’t about to equip themselves with an artificial conscience.
The answer is not to adopt more machine control over our lives, but to allow more of us who are ethical — the vast majority of people — to freely access firearms to protect ourselves and our families. As is oft noted, an armed society is a polite society…and an increasingly ethical one.