2010-01-31

Artificial Intelligence Needs to Learn to Read

To complete my previous post on the danger of artificial intelligence, I'll now briefly explain why I believe AI systems need the ability to read and understand the written word.

The problem of building an intelligent AI system capable of causing such harm is actually nothing new, in the sense that parents face this problem all the time. Why do parents not fear their children growing up to abandon them, or worse? The problem with building autonomous systems is precisely that they are autonomous, so before we set them loose, we might want to think about how to ensure these systems will autonomously decide not to harm us.

Isaac Asimov envisioned Three Laws of Robotics that would keep humans safe from autonomous robots endowed with them. Of course, he promptly wrote story after story, including Foundation and Earth for example, about how these laws could fail in various subtle ways to keep humans safe.

I think programming AI systems with laws of ethics to keep humans safe will be difficult, and increasingly so as the AI systems themselves become more intelligent. For now, let's set aside the difficulty of coming up with such a set of laws in the first place (and if we could, wouldn't we want humans to follow those laws too? Yet after hundreds of years of development, we've actually ended up with laws that justify the destruction of human life on a daily basis).

Of course, it would be useful to have some set of robot laws of ethics to begin with. But in the end, in the game of playing lawyer against an artificial intelligence that can potentially think continuously, faster, and longer than we can, what hope do we have of boxing in an AI's actions? Those laws would, of course, be useful for AI systems not intelligent enough to out-think us in that way, but again, I feel AI systems with such powerful intelligence will be built one day (and soon, too, if we believe Dr. Schmidhuber's assessment; see Build An Optimal Scientist, Then Retire).

Now think back to the question of why parents do not fear their children growing up to abandon them or worse. The simple reason, for most parents, is that they teach their children to think in and understand a system of moral values compatible with their own. In fact, in today's global climate, it's not difficult to see how, when someone's system of moral beliefs is (or appears to be) different from our own, we back away with suspicion. The underlying logic is simple: if it appears you think as I do, and I believe that I think morally, then I believe that you think morally as well. There is an entire area of game theory, of how social customs are established, and so on, that we could go into here, but let's get back to the issue of AI systems.

Clearly, we wouldn't have to fear AI systems harming us if we could be convinced that the AI system thinks as we do, at least in terms of morality, for the reason above. There is the issue of how we can ever be so convinced, but that problem is shared equally by systems with robot laws of ethics built into them, so let's leave it for the moment. The more pressing matter is how an AI system can learn to think as we do, at least in terms of morality, which itself raises many technical questions about what it even means for an AI system to think. Sweeping those aside for the moment, the pressing question is how the AI system can learn to think morally, given how much faster the system will be at absorbing information, and making inferences from it, than any human designer. Because, remember, it's hopeless to try to out-argue or out-lawyer the AI system with ethical laws.

Unless, that is, we provide the system with the sum total of the human library (or some approximation of it), holding out hope, and having faith, that the collective human knowledge recorded in the written word would, if understood properly, endow an AI agent with an understanding of what we believe to be a worthwhile system of moral values. We could provide the collected works of Confucius, Lao Tzu, Plato, Aristotle, Kant, and more (and perhaps we should withhold items like Machiavelli's The Prince).

A complete system of moral values isn't a simple thing, especially when we expect it to be essentially bug-free and un-hackable by an artificial intelligence. We ought to be skeptical when engineers and scientists tell us they can write down a set of rules (however fuzzy, smart, dynamic, static, or whatever) that can box an AI system in and ensure it behaves morally. I have more faith that a proper understanding of the collective human knowledge would be more effective. That is why we really should hope AI systems will be taught to read and understand the written word.

The caveat here is what constitutes a "proper understanding" and how an AI system can come to have it. Fortunately, I mean proper understanding only in a simple way, i.e., merely understanding the meaning of words and sentences (rather than the circular kind of reasoning where "it is proper if the agent acts morally"). Of course, this simply pushes the problem into how an AI system can understand the meaning of words and sentences (amongst all the other technical problems I swept aside above).

That question pushes us into the realm of Cognitive Science, so I'll save that for another post.