The problem of building an intelligent AI system capable of causing great harm is actually nothing new, in the sense that parents face this problem all the time. Why do parents not fear their children growing up to abandon them, or worse? The problem with building autonomous systems is that they are autonomous, so before we set them loose, we might want to think about how to ensure these systems will autonomously decide not to harm us.
I think programming AI systems with laws of ethics to keep humans safe will be difficult, and increasingly so as AI systems become more intelligent. For now, let's set aside the difficulty of coming up with such a set of laws in the first place (and if we could, wouldn't we want humans to follow those laws too? Yet after hundreds of years of development, we've ended up with laws that justify the destruction of human life on a daily basis).
Of course, it would be useful to have some set of robot laws of ethics to begin with, but in the end, in the game of playing lawyer against an artificial intelligence that can potentially think continuously, faster, and longer than we can, what hope do we have of boxing in an AI's actions? Those laws would, of course, be useful for AI systems not intelligent enough to out-think us in that way, but again, I believe AI systems with such powerful intelligence will be built one day (and soon, too, if we believe Dr. Schmidhuber's assessment; see Build An Optimal Scientist, Then Retire).
Now think back to the question of why parents do not fear their children growing up to abandon them or worse. The simple reason, for most parents, is that they teach their children to think in and understand a system of moral values compatible with the parents' own. In fact, in today's global climate, it's not difficult to see how, when someone's system of moral beliefs is (or appears to be) different from our own, we back off with suspicion. The reason is that if it appears you think as I do, and I think that I think morally, then I think that you think morally as well. There is an entire area of game theory concerning how social customs are established that we could go into here, but back to the issue of AI systems.
Clearly, we wouldn't have to fear AI systems doing something to harm us if we could be convinced that the AI system thinks as we do, at least in terms of morality, for the reason above. There is the issue of how we could ever be so convinced, but this problem is shared equally by systems that have robot laws of ethics built into them, so let's leave it for the moment. The more pressing matter is how an AI system can learn to think as we do, at least in terms of morality, which in itself raises many technical questions about what it even means for an AI system to think. Sweeping that aside for the moment, the pressing question is how the AI system can learn to think morally, given how much faster it will be at absorbing information, and making inferences from it, than any human designer. Because, remember, it's hopeless to try to out-argue or out-lawyer such an AI system with ethical laws.
A complete system of moral values isn't a simple thing, especially when we expect it to be essentially bug-free and un-hackable by an artificial intelligence. We ought to be skeptical when engineers and scientists tell us they can write down a set of rules (however fuzzy, smart, dynamic, static, or whatever) by which an AI system can be boxed in to ensure it behaves morally. I have more faith that a proper understanding of the collective knowledge of humanity would be effective. That is why we really should hope AI systems will be taught to read and understand the written word.
The caveat here is what constitutes a "proper understanding" and how an AI system can come to have it. Fortunately, I mean proper understanding only in a simple way, i.e., merely understanding the meaning of words and sentences (rather than the circular kind of "it is proper if the agent acts morally"). Of course, this simply pushes the problem into how an AI system can understand the meaning of words and sentences (amongst all the other technical problems I swept aside above).
That question pushes us into the realm of Cognitive Science, so I'll save that for another post.