CORVALLIS, Ore. — In the movie “2010,” while trying to salvage the mission to Jupiter, the HAL 9000 computer noted, “I enjoy working with human beings, and have stimulating relationships with them.”
Well, 2010 is just around the corner, and as usual Hollywood was a little ahead of its time — but in this case, not by much. Oregon State University researchers are pioneering the concept of “rich interaction” — computers that do, in fact, want to communicate with, learn from and get to know you better as a person.
The idea behind this “meaningful” interaction is one of the latest advances in machine learning and artificial intelligence, in which a computer doesn’t just try to learn from its own experiences: it listens to the user, tries to combine what it “hears” with its internal reasoning, and changes its program as a result. When ordinary users spot the machine’s errors, they should be able to step in and explain directly to the machine the logic it should be using.
“There are limits to what the computer can do just by its own observations and efforts to learn from experiences,” said Margaret Burnett, an associate professor of computer science at OSU. “It needs to understand not just what it did right or wrong, but why. And for that, it has to continue interacting with human beings and make constant changes in its own programming, based on their feedback.”
OSU researchers say that many advanced learning systems begin learning the moment they are delivered to an end user’s desktop in an effort to customize themselves to the end user. Systems like this are the basis of spam filters on personal computers, e-mail sorting, and product recommendations: “If you liked this book, here’s another one you might find interesting.”
A lot of these systems are based on word statistics, set rules, similarities, and other such approaches. But even the most advanced systems only allow a user to tell the computer something is right or wrong. The user is never asked to explain what the real problem is.
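A word-statistics system of the kind described above can be sketched in a few lines. This is a hypothetical, minimal naive Bayes filter for illustration only, not the researchers’ actual software; the training messages and labels are made up:

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Minimal word-statistics filter: naive Bayes with add-one smoothing."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.message_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # Count how often each word appears in spam vs. legitimate ("ham") mail.
        self.word_counts[label].update(text.lower().split())
        self.message_counts[label] += 1

    def classify(self, text):
        total = sum(self.message_counts.values())
        # Smoothing constant: total number of distinct words seen per class.
        vocab = len(self.word_counts["spam"]) + len(self.word_counts["ham"])
        scores = {}
        for label in ("spam", "ham"):
            # log prior + sum of smoothed log likelihoods for each word
            score = math.log(self.message_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + vocab
            for word in text.lower().split():
                score += math.log((self.word_counts[label][word] + 1) / denom)
            scores[label] = score
        return max(scores, key=scores.get)

# Hypothetical training data
f = NaiveBayesSpamFilter()
f.train("claim your million dollar prize now", "spam")
f.train("transfer money from nigeria urgently", "spam")
f.train("meeting notes attached see you tomorrow", "ham")
f.train("lunch tomorrow at noon", "ham")
```

Note that the only feedback such a filter ever receives is the label itself — spam or not spam — which is exactly the limitation the OSU researchers describe.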
Consider the Nigerian money scam, a form of spam so common that almost everyone who has ever used email has gotten a plaintive query from someone who just lost their rich uncle and has several million dollars tucked away, needing only a helpful and discreet friend to get it to a safe bank. Help me out and we’ll split the money, your new “friend” implores.
Now, this scam is so pervasive that most computer spam filters will immediately spot it and send it to junk mail, perhaps because they see the words “Nigeria” and “money” in the same message. But what if the recipient regularly received email from a friend in Nigeria who is a legitimate banker? That’s not spam, but how do you explain that to the computer?
“In a case like this, we want to develop algorithms that will allow the end user to ask the computer why it did something, read its response, and then explain why that was a mistake,” said Weng-Keen Wong, an OSU assistant professor of computer science. “Ideally, the computer will consider the response and change its programming to perform better in the future. It’s like debugging a program.”
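One way to picture such an ask-explain-correct loop — a toy sketch, not the OSU algorithms — is a filter that keeps per-word weights, can report which words drove a decision, and lets the user push back on a specific word instead of merely flagging the whole message. All weights and messages here are invented for illustration:

```python
class ExplainableFilter:
    """Toy keyword-weight filter whose decisions a user can debug word by word."""

    def __init__(self, weights):
        self.weights = dict(weights)  # word -> "spamminess" weight

    def score(self, text):
        # Higher total weight means the message looks more like spam.
        return sum(self.weights.get(w, 0.0) for w in text.lower().split())

    def explain(self, text):
        # "Why did you call this spam?" -> words ranked by their contribution.
        words = text.lower().split()
        contributions = {w: self.weights.get(w, 0.0) for w in words}
        return sorted(contributions.items(), key=lambda kv: -kv[1])

    def correct(self, word, feedback):
        # The end user debugs the filter's logic one word at a time.
        if feedback == "not_spammy":
            self.weights[word] = 0.0
        elif feedback == "spammy":
            self.weights[word] = self.weights.get(word, 0.0) + 1.0

# Hypothetical weights and message
f = ExplainableFilter({"nigeria": 2.0, "money": 1.5})
msg = "money transfer update from your banker in nigeria"
before = f.score(msg)               # flagged mainly for "nigeria" and "money"
f.correct("nigeria", "not_spammy")  # user explains: my banker really is in Nigeria
after = f.score(msg)                # the same message now scores lower
```

The point of the sketch is the interaction pattern: the user corrects the *reason* for the decision, not just the decision itself.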
For a computer to be of optimal help to its user, the researchers said, it has to customize itself to the end user and get more personal.
“We all have fairly specific life experiences, personal preferences, ways of doing things, different types of jobs,” Burnett said. “For machine learning to reach its potential, the computer and the user have to interact with each other in a fairly meaningful way. The computer really needs to get to know your situations and understand why it made a mistake, so that it can try not to make the same mistake again.”
A major part of this challenge, the scientists say, is to create interactive systems that are easy enough to operate that you don’t have to be a computer programmer. That should be possible, they said.
And worth noting, the scientists said, is that it’s not always the human who is the teacher. In one study, users disagreed with what the computer had done 28 percent of the time. In about one fourth of those cases, the computer was actually right and the user was wrong. The learning can be a two-way street, and this also presents one of the challenges. A stubborn human user may insist that the computer “learn” something that is incorrect.
OSU is one of the leaders in this field of “rich guidance” research, and has several recent publications on advances in the field, including one in the International Journal of Human-Computer Studies and three in the ACM Conference on Intelligent User Interfaces. Collaborating on the studies is Simone Stumpf, an OSU researcher for several years and now a faculty member at City University London. OSU students are helping to develop the latest approaches, and researchers also recently received a three-year, $1 million grant from the National Science Foundation for continued research.
The era of humans as passive observers in the field of artificial intelligence, the researchers said, may be coming to a close.
“In the future we believe the computer should be like your partner,” Burnett said. “You help teach it, it gets to know you, you learn from each other, and it becomes more useful.”