Donald Norman on the design of intelligent machines

June 11, 2007

[Photo: Don Norman with a robot]

This morning I was lucky enough to attend a talk at U of T by Donald Norman. In case you’ve never heard of him:

Don Norman is the champion of human-centered design. … Norman is Professor of Computer Science and Electrical Engineering, Psychology, and Cognitive Science at Northwestern University. There he teaches design while co-directing the Segal Design Institute. He is cofounder of the Nielsen Norman Group. He has been Vice President of Apple Computer and an executive at Hewlett Packard.

He is well known for his books “The Design of Everyday Things” and “Emotional Design.” … He lives at www.jnd.org.

His talk was based on his upcoming book, The Design of Future Things, which discusses “the role that automation plays in such everyday places as the home, and automobile.” The main thesis of Norman’s new book seems to be “intelligent devices aren’t.” The intelligence is really in the designer. Given a certain set of sensors and controls, a designer creates a simple approximation of intelligence. Take adaptive cruise control: the car can sense how far ahead the next car is, and adjust your speed to maintain a constant distance. Norman told us a story about one of his friends, who was driving a car with adaptive cruise control in heavy traffic. He’d been sitting in traffic so long that he forgot the cruise control was even on. He got a bit of a shock when he pulled onto the off-ramp and the car suddenly accelerated.
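
To make the “simple sensors, simple rules” point concrete, here is a minimal sketch of the kind of rule such a system might follow. It is not any manufacturer’s actual algorithm; the speeds, gap, and gain are made-up numbers.

```python
# A toy adaptive-cruise-control rule: one sensor reading in, one speed out.
# All constants here are hypothetical.

CRUISE_SPEED = 120.0   # driver's set speed, km/h
TARGET_GAP = 40.0      # desired distance to the car ahead, metres
GAIN = 0.5             # how aggressively to close or open the gap

def adjust_speed(current_speed, gap_to_lead_car):
    """Return a new speed based on a single distance reading.

    gap_to_lead_car is None when no car is detected ahead, which is
    exactly the situation on an empty off-ramp: the rule simply climbs
    back toward the set cruise speed.
    """
    if gap_to_lead_car is None:
        return min(current_speed + 5.0, CRUISE_SPEED)
    # Proportional rule: too close, slow down; too far, speed up.
    error = gap_to_lead_car - TARGET_GAP
    return max(0.0, min(current_speed + GAIN * error, CRUISE_SPEED))

# Crawling in heavy traffic with a small gap: the rule eases off.
print(adjust_speed(15.0, 20.0))   # 5.0
# On the off-ramp, no lead car in view: the rule accelerates.
print(adjust_speed(15.0, None))   # 20.0, and more on every cycle
```

The rule does exactly what it was designed to do; the surprise comes from the driver and the system having different pictures of the situation.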

The problem is that this kind of automation is based on simple sensors and simple rules, but this all breaks down in the face of unexpected events. We all know that human reasoning and decision making are extremely complicated. A person can “know” something, even think it’s obvious, but not be able to say why. So what hope do we have of being able to design decision-making machines?

Norman thinks the ideal situation is that we don’t try to make machines that think, but machines that help us think. These systems should be optional, natural, and predictable. I don’t entirely disagree with him, but I think there are definitely situations where a computer really could make a better decision than a human.

Malcolm Gladwell gives an interesting example in his book Blink. He tells a story about a Chicago ER and its process for diagnosing coronary patients. One doctor, after studying two years’ worth of data, came up with a decision tree based on only four factors. By using this simple, “unintelligent” decision-making process, they actually got 70% better at recognizing patients who were having a heart attack.
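
For illustration, here is what a decision process of that shape might look like as code. The factor names and thresholds below are placeholders in the spirit of Gladwell’s account, not the actual clinical criteria from the study.

```python
# A toy four-factor decision tree for chest-pain triage.
# The factors and cut-offs are illustrative placeholders only.

def triage(ecg_abnormal, unstable_angina, fluid_in_lungs, systolic_bp):
    """Classify a chest-pain patient with a handful of fixed rules."""
    risk_factors = sum([
        bool(ecg_abnormal),
        bool(unstable_angina),
        bool(fluid_in_lungs),
        systolic_bp < 100,   # low blood pressure counts as a risk factor
    ])
    if ecg_abnormal and risk_factors >= 2:
        return "high risk: admit to coronary care"
    if risk_factors >= 1:
        return "intermediate risk: admit for observation"
    return "low risk: short-stay unit"

print(triage(ecg_abnormal=True, unstable_angina=False,
             fluid_in_lungs=True, systolic_bp=95))
```

There is no learning and no “intelligence” here, just a few fixed questions, which is exactly the point: the intelligence went into choosing the four factors.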

In a question after the talk, Bill Buxton also pointed out that there are many kinds of automation that just work, like thermostats and ABS brakes. He asked Norman where we should draw the line. I tend to agree with him — I don’t think the problem is with automation per se, but with poorly designed automation. But I do think that Norman’s “optional, natural, and predictable” is a great set of design goals for automated systems.

To me, the most challenging part of the design is in the interaction between a person and a machine. In his talk, Norman pointed out that we talk about “conversations” between people and computers, but that’s not what happens. We don’t have a dialog, we have two monologues. I think we need to focus more on this conversation aspect.

HAL 9000

“Open the pod bay doors, HAL.”
“I’m sorry Dave, I’m afraid I can’t do that.”

Norman had a really interesting point that this conversation between people and machines is similar to the concept in linguistics of common ground. You can have a meaningful conversation with someone only if there’s enough common ground, like shared experiences and beliefs. Because machines have a different reality (based on the sensors they have available) and a different decision-making process, it’s hard to have a real conversation between a human and a machine.

In practice, there is always a gap between what the person wants and what the machine wants. When you’re hitting the brakes hard, you want to avoid a crash. Your ABS brakes don’t want to avoid a crash, they want to avoid the wheels locking up. This is what Norman calls the Gulf of Goals. There is also a Gulf of Actions, which is the difference between what you want to do and what the machine wants to do. In the case of ABS, you want to put the brakes on, while your car wants to let them off (but just for a quick second).
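
Here is a minimal sketch of that gulf, assuming a made-up slip threshold and a single control cycle: the driver’s goal is “stop before the crash,” but the only thing this loop acts on is wheel slip.

```python
# A toy ABS control cycle. The threshold and inputs are hypothetical.

SLIP_THRESHOLD = 0.2   # fraction of slip at which the wheel is "locking up"

def abs_cycle(driver_brake_pressure, wheel_speed, vehicle_speed):
    """Return the brake pressure actually applied for one cycle."""
    if vehicle_speed <= 0:
        return 0.0
    slip = (vehicle_speed - wheel_speed) / vehicle_speed
    if slip > SLIP_THRESHOLD:
        # The wheel is locking up: release the brake for a moment,
        # even though the driver is standing on the pedal.
        return 0.0
    return driver_brake_pressure

# Hard braking (pedal fully pressed) while the wheel has nearly stopped
# turning: the system briefly lets the brake off.
print(abs_cycle(1.0, wheel_speed=2.0, vehicle_speed=20.0))   # 0.0
print(abs_cycle(1.0, wheel_speed=18.0, vehicle_speed=20.0))  # 1.0
```

Both behaviours serve the larger goal of stopping safely, but the loop itself only knows about slip, and that mismatch is the gap Norman is pointing at.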

So, I think that Don Norman made a lot of good points about the possible problems with consumer-level automation, but I think he is overly pessimistic. He seems to believe that we shouldn’t have real automation, but always a human making the most important decisions. There are many good reasons to strive for real, useful automation. Will we ever be perfect? No. But I think we can design intelligent systems that are usable and helpful.


2 Comments:

  1. Fanis Tsandilas - June 13, 2007:

    I agree with your points. Norman's stance is not very surprising. He repeats the mainstream opinion within the HCI community towards automation and intelligent user interfaces. The same criticism has been expressed in the past by several other researchers, such as Ben Shneiderman. This criticism is not totally unfair: it is based on disappointing past experience and, as you already mentioned, on bad designs. Briefly, I am going to explain why, in my opinion, such bad designs came about and why HCI experts have arrived at such pessimistic generalizations.

    At some point, researchers in AI saw that user interfaces were a nice domain for applying their intelligent techniques and algorithms. Modelling users and predicting their needs sounds like a very challenging problem. I felt this challenge when I started switching from a more theoretical background to HCI. People who like algorithms and modelling techniques will try to apply them everywhere, even when it is not appropriate. Nevertheless, starting from this point is totally wrong. Many poor designs of intelligent UIs have been driven less by real user needs and more by the desire of AI people to apply their theories. Yet such theories rest on unrealistic assumptions about users. Unfortunately, most of the work on intelligent and adaptive user interfaces has been conducted on such grounds. I believe that the correct approach to designing a new UI is to first study the user needs and apply automation only if it satisfies these needs much better than a simple but intelligently designed UI. In my experience, automation can be useful in only a few situations and not for everyone. People tend to consider that if a UI fails for the majority of people, then it is not useful. But what if 20% of users get a great benefit from it?

    I think that the use of the conversation metaphor to demonstrate the limitations of intelligent systems is quite misleading (I admit that I have also used it in the past, as it is very elegant). Automation should not be designed with a companion in mind who tries to understand what the user wants. This is the wrong metaphor, and its use has led to very poor designs. Finally, let me remind you of two very simple examples of automation that have been very successful even though their decision-making mechanism is not completely transparent. My mailbox would be unusable without my junk filter, as I receive 100 junk messages per day. It rarely misses real junk messages and almost never filters out useful messages. I also find the suggestion/correction mechanism provided by Eclipse extremely helpful, although it makes several mistakes. When the benefits of automation outweigh the cost of errors, automation is useful. The cost of an error when driving a car is huge, but that is an extreme scenario.

  2. Patrick - June 14, 2007:

    Fanis, I think you're right -- the conversation metaphor is elegant, but it's not a good way to design systems. I wonder if voice-controlled systems might suffer from a similar uncanny valley as robots?

    But, I do think a lot of the problems stem from communication problems. Not only is it difficult to communicate our desires to the computer, it's also frustrating when we don't understand why it has done something. I think that if we understood why, we would probably be willing to put up with problems more.

    And you're right, there are several examples of automation where the benefits outweigh the problems. There are other scenarios -- web search for example -- where things would be completely unmanageable without some amount of "smart" decision making by the computer.