This morning I was lucky enough to attend a talk at U of T by Donald Norman. In case you’ve never heard of him:
Don Norman is the champion of human-centered design. … Norman is Professor of Computer Science and Electrical Engineering, Psychology, and Cognitive Science at Northwestern University. There he teaches design while co-directing the Segal Design Institute. He is cofounder of the Nielsen Norman Group. He has been Vice President of Apple Computer and an executive at Hewlett Packard.
He is well known for his books “The Design of Everyday Things” and “Emotional Design.” … He lives at www.jnd.org.
His talk was based on his upcoming book, The Design of Future Things, which discusses “the role that automation plays in such everyday places as the home, and automobile.” The main thesis of Norman’s new book seems to be “intelligent devices aren’t.” The intelligence is really in the designer. Given a certain set of sensors and controls, a designer creates a simple approximation of intelligence. Take adaptive cruise control: the car can sense how far ahead the next car is, and adjust your speed to maintain a constant distance. Norman told us a story about one of his friends, who was driving a car with adaptive cruise control in heavy traffic. He’d been sitting in traffic so long that he forgot the cruise control was even on. He got a bit of a shock when he pulled onto the off-ramp and the car suddenly accelerated.
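Roughly speaking, the “intelligence” boils down to a rule like this. A toy sketch in Python; the names and constants are mine, not anything from a real car:

```python
TARGET_GAP = 30.0   # desired following distance, metres
SET_SPEED = 30.0    # the driver's chosen cruise speed, m/s
GAIN = 0.5          # how aggressively to close the gap

def adjust_speed(current_speed, gap_to_next_car):
    """Nudge the speed so the gap drifts toward TARGET_GAP,
    never exceeding the driver's set speed."""
    error = gap_to_next_car - TARGET_GAP
    desired = current_speed + GAIN * error
    return max(0.0, min(desired, SET_SPEED))

# The failure mode in the story: crawl along in traffic at 5 m/s,
# then pull onto an empty off-ramp. The sensor sees no car ahead,
# the "gap" looks enormous, and the controller floors it.
print(adjust_speed(5.0, 200.0))   # -> 30.0, i.e. sudden acceleration
```

The rule is perfectly sensible as long as the world matches the designer’s assumptions; the off-ramp is exactly the kind of unexpected event it has no way to understand.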
The problem is that this kind of automation is based on simple sensors and simple rules, but this all breaks down in the face of unexpected events. We all know that human reasoning and decision making are extremely complicated. A person can “know” something, even think it’s obvious, but not be able to say why. So what hope do we have of being able to design decision-making machines?
Norman thinks the ideal situation is that we don’t try to make machines that think, but machines that help us think. These systems should be optional, natural, and predictable. I don’t entirely disagree with him, but I think there are definitely situations where a computer really could make a better decision than a human.
Malcolm Gladwell gives an interesting example in his book Blink. He tells a story about a Chicago ER and its process for diagnosing coronary patients. One doctor, after studying two years’ worth of data, came up with a decision tree based on only four factors. By using this simple, “unintelligent” decision-making process, they actually got 70% better at recognizing patients who were having a heart attack.
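As best I remember Gladwell’s account, the tree combined an ECG reading with three yes/no risk factors. Something in this spirit, though the thresholds and triage outcomes below are my own illustration, not the actual criteria:

```python
def triage(ecg_shows_ischemia, unstable_angina,
           fluid_in_lungs, systolic_bp_below_100):
    """A four-factor decision tree for chest-pain triage."""
    risk_factors = sum([unstable_angina, fluid_in_lungs,
                        systolic_bp_below_100])
    if ecg_shows_ischemia and risk_factors >= 1:
        return "coronary care unit"
    if ecg_shows_ischemia or risk_factors >= 2:
        return "monitored bed"
    return "observation"

print(triage(True, False, True, False))   # -> "coronary care unit"
```

No machine learning, no reasoning, just a handful of if-statements — and it beat the doctors’ intuition.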
In a question after the talk, Bill Buxton also pointed out that there are many kinds of automation that just work, like thermostats and ABS brakes. He asked Norman where we should draw the line. I tend to agree with Buxton — I don’t think the problem is with automation per se, but with poorly designed automation. But I do think that Norman’s “optional, natural, and predictable” is a great set of design goals for automated systems.
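A thermostat really is the poster child for “optional, natural, and predictable”: one sensor, one rule, and you can guess what it will do next. A toy version, with numbers I made up:

```python
SETPOINT = 20.0   # degrees C, whatever the dial says
DEADBAND = 0.5    # hysteresis so the furnace doesn't rapid-cycle

def furnace_should_run(temperature, currently_running):
    """Heat when cold, stop when warm, don't flap in between."""
    if temperature < SETPOINT - DEADBAND:
        return True
    if temperature > SETPOINT + DEADBAND:
        return False
    return currently_running  # inside the deadband, keep the current state
```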
To me, the most challenging part of designing these systems is the interaction between the person and the machine. In his talk, Norman pointed out that we talk about “conversations” between people and computers, but that’s not what happens. We don’t have a dialog, we have two monologues. I think we need to focus more on this conversation aspect.
“Open the pod bay doors, HAL.”
“I’m sorry, Dave. I’m afraid I can’t do that.”
Norman had a really interesting point that this conversation between people and machines is similar to the concept in linguistics of common ground. You can have a meaningful conversation with someone only if there’s enough common ground, like shared experiences and beliefs. Because machines have a different reality (based on the sensors they have available) and a different decision-making process, it’s hard to have a real conversation between a human and a machine.
In practice, there is always a gap between what the person wants and what the machine wants. When you’re hitting the brakes hard, you want to avoid a crash. Your ABS brakes don’t want to avoid a crash, they want to avoid the wheels locking up. This is what Norman calls the Gulf of Goals. There is also a Gulf of Actions, which is the difference between what you want to do and what the machine wants to do. In the case of ABS, you want to put the brakes on, while your car wants to let them off (but just for a quick second).
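You can see both gulfs in a cartoon of one ABS control cycle. This is entirely illustrative — real ABS runs a loop like this many times a second with far more sophistication, and the names and the 20% slip threshold are mine:

```python
def brake_pressure(pedal_pressed, wheel_speed, vehicle_speed):
    """Return 0.0-1.0 brake pressure for this control cycle."""
    if not pedal_pressed:
        return 0.0
    # How much slower the wheel is turning than the car is moving:
    slip = (vehicle_speed - wheel_speed) / max(vehicle_speed, 0.1)
    if slip > 0.2:      # the wheel is starting to lock
        return 0.0      # let the brake off, just for this cycle
    return 1.0          # otherwise, brake as hard as the driver asked

# Gulf of Goals: you want to stop; the system wants the wheels turning.
# Gulf of Actions: you press harder; the system momentarily lets go.
print(brake_pressure(True, wheel_speed=2.0, vehicle_speed=20.0))  # -> 0.0
```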
So, I think that Don Norman made a lot of good points about the possible problems with consumer-level automation, but I think he is overly pessimistic. He seems to believe that we shouldn’t have real automation, but that a human should always make the most important decisions. There are many good reasons to strive for real, useful automation. Will we ever be perfect? No. But I think we can design intelligent systems that are usable and helpful.