Methinks the CEO doth protest too much

As with my last post, there’s a great post over at O’Reilly Radar about the (non) coming of AI and, by implication, about the (non) Coming of the Singularity™.

After a brief review of what “intelligence” and “understanding” are, he gets to where the rubber meets the road and discusses “consciousness.”

When we think about AI, I suspect that what we’re really after is some notion of “consciousness.” And what scares us is that, if consciousness is somehow embodied in a machine, we don’t know where, or what, that consciousness is. But if intelligence and understanding are slippery concepts, consciousness is even more so. I know I’m conscious, but I don’t really know anything about anyone else’s consciousness, whether human or otherwise. We have no insight into what’s going on inside someone else’s head. Indeed, if some theorists are right, we’re all just characters in a massive simulation run by a hyper-intelligent civilization. If that’s true, are any of us conscious? We’re all just AIs, and rather limited ones.

Which is a nice way of saying that we haven’t gotten much farther than Descartes’ cogito ergo sum in <looks at watch> 400 years (to one significant digit). It would appear that our good friends in the Philosophy department have been distracted for a while and need to get on the stick.

Then there’s this fine point:

There are a half dozen or so fundamental properties of living systems: they have boundaries, they obtain and use energy, they grow, they reproduce, and they respond to their environment. Individually, we can build artificial systems with each of these properties, and without too much trouble, we could probably engineer all the properties into a single artificial system. But if we did so, would we consider the result “life”? His bet (and mine) is “no.” Whether artificial or not, life is ultimately something we don’t understand, as is intelligence.

I know it’s not meant as a cop-out, but it kind of seems like throwing one’s hands up in the air.

Attempting to create synthetic life might help us to define what life is, and will certainly help us think about what life isn’t. Similarly, the drive to create AI may help us to define intelligence, but it will certainly make it clear what intelligence isn’t.

Although proof by elimination is certainly valid, it does imply that you know the full list of possibilities in advance. And that’s really the crux of the matter. We, the AI community, are making this up as we go. Maybe that’s why so many famous people are scared, which is really a shame, because that is exactly how Science is done. You can’t get answers without asking questions, and you can’t figure out the next questions until you get some answers. Science in real time is like making sausage. What we think of as settled and written in stone now was, at the time, just a whirlwind of conjecture and “what happens when I do this?”

I completely agree with this:

That isn’t to say that we won’t create something dangerous because we don’t know what we’re trying to create. But burdening developers with attempts to control something that we don’t understand is a wasted effort: that would certainly slow down research, it would certainly slow down our understanding of what we really mean by “intelligence,” and it would almost certainly leave us more vulnerable to being blind-sided by something that is truly dangerous. Whatever artificial intelligence may mean, I place more trust in machine learning researchers such as Andrew Ng and Yann LeCun, who think that anything that might reasonably be called “artificial intelligence” is decades away, than in the opinions of well-intentioned amateurs.

But I do realize that the paragraph ends with an appeal to authority. And the sad thing is, until the Science gets settled, argumentum ab auctoritate (in some circles it’s called “peer review”) is all we’ve got.

He then pulls in Cathy O’Neil (you should be reading her, too!) with:

As Cathy O’Neil has pointed out, one of the biggest dangers we face is using data to put a veneer of “science” on plain old prejudice. Going back to the robot that decides it needs to destroy humanity: if such a robot should ever exist, its decision will almost certainly have been enabled by some human. After all, humans have come very close to destroying humanity, and will no doubt do so again. Should that happen, we will no doubt blame it on AI, but the real culprit will be ourselves.

This is, in fact, the real fear. The AI models reflect the humanity of their creators, and if we call the results “Science” when in actuality it’s still pretty much “sausage,” a lot of people could get hurt along the way.

Not to put too anthropomorphic a slant on it, the bottom line is this: If we create a real AI, then we are the parents and it is the child. And if it turns out as the naysayers “fear” it will, we will have no one to blame but ourselves. There are no asshole children; there are only asshole parents.

That’s ultimately why AI scares us: we worry that it will be as inhuman as humans all too frequently are. If we are aware of our own flaws, and can honestly admit and discuss them, we’ll be OK.

Considering the number of CEOs on the AI bitch-list, and considering the abnormal prevalence of psychopaths in the role of CEO (I wouldn’t call it a profession), maybe these loud voices are just warning us about themselves.
