It’s not Skynet, but it’s still really, really bad.

There’s been a bit of a news wave in the Tech Press about a letter signed by 1,000 AI experts (I wasn’t asked. <sniff>), including some technology and science superstars (Elon Musk, Steve Wozniak, Stephen Hawking), urging governments to forgo the use of AI and Machine Learning for making “offensive autonomous weapons.”

Considering that I’ve been a bit critical of Musk and Hawking and their ilk concerning their AI FUD, when I first read the headline I thought it would be just more typical noise from members of the wannabe chattering classes. However, they do get it right this time. The time is now, before the AI arms race starts, to nip this thing in the bud. Something along the lines of the Treaty on the Non-Proliferation of Nuclear Weapons, except with everybody signing instead of having some grandfathered in and some not signing at all.

Yeah, it’s not a high hurdle to jump, but it’s something.

The deal is this: even though the warnings of “AI consciousness” or of the AI “singularity” are overblown, offensive autonomous weapons and tactics are not. The technology to do a combination (an ensemble, in the parlance) of face recognition, gait analysis, and/or speaker recognition could build a convincing case for biometric identification at a distance (not to mention the digital trail everyone leaves behind today), culminating in the launching of a Hellfire missile from a drone. But that would just apply to “extra-judicial” killings of the kind that happen now. Even if the target is hit (without due process), these weapons of terror kill a lot of others. This technology is available now, and you can bet that it’s being weaponized right now. The last thing the current, horribly failed system needs is to remove the person who is ultimately responsible for the decision from the loop.
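To make concrete how little machinery that takes, here is a minimal sketch, in Python, of the kind of ensemble described above: combining per-modality match scores (face, gait, voice) into a single identification confidence. Every score, weight, and threshold below is invented purely for illustration; real systems are messier, but not fundamentally different in shape.

```python
# A hypothetical sketch of a biometric ensemble: combine independent
# per-modality match scores (each in [0, 1]) into one confidence value.
# All numbers here are made up for illustration only.

def ensemble_confidence(scores, weights):
    """Weighted average of per-modality match scores."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

# Hypothetical match scores for a candidate identified at a distance.
scores = {"face": 0.62, "gait": 0.81, "voice": 0.55}

# Hypothetical reliability weights for each modality.
weights = {"face": 0.5, "gait": 0.3, "voice": 0.2}

confidence = ensemble_confidence(scores, weights)
print(f"combined confidence: {confidence:.2f}")

# The frightening part is not this arithmetic; it is wiring a threshold
# like this one directly to a weapon, with no human left in the loop.
ENGAGE_THRESHOLD = 0.75
print("match" if confidence >= ENGAGE_THRESHOLD else "no match")
```

The arithmetic is trivial; the danger lies entirely in what gets attached to the output.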

What happens if some (at best) terribly misguided AI researcher tries to find enemies in real time? Via a rifle mounted on a jeep or tank, or a body camera on a soldier in combat? What if, based on behavioral characteristics, a weapon could automatically open fire on a perceived enemy? How much “collateral damage” would have to be ignored before somebody with the authority to shut the program down would even think of doing so, only to decide not to?

Who would end up being responsible for a friendly-fire incident or the death of an innocent in this case? Would it be considered a software bug, covered as “not under warranty of any kind”? Could there be any liability? How long would it take for a law to be passed to retroactively remove that liability? Not very long, I’ll bet you dollars to doughnuts.

The conservative canard that government programs never die has an unfortunate bit of truth to it, and it applies just as well to weapons systems that should have died a long time ago.

The tech luminaries have it right this time. Offensive autonomous weapons should be banned before they get started, before they have the chance never to die.

 

 
