“Sophia, The Humanoid Robot, Is Essentially A Sort Of A Puppet,” says Prof. Kambhampati

When Prof. Subbarao Kambhampati, past president of the Association for the Advancement of Artificial Intelligence (AAAI) and one of the leading authorities on AI, visited IIITH to deliver a talk on human-AI collaboration, we caught up with him. Over a free-wheeling conversation, he talked about the past and present AI buzz, portents of an AI apocalypse, and the way forward in education. Here are edited excerpts from the interview. Read on.

What prompted you to move into the Human-Computer Interaction field?

I’m not quite sure there was any single turning point; it was a series of things. The non-romantic explanation is that there was a MURI (Multidisciplinary University Research Initiative) project which involved human-robot interaction. I was brought in because I have expertise in plan generation, and I just assumed that the human actors would do the human thing, I would do the planning thing, and we’d all work together. Clearly that doesn’t happen, because everything in the system architecture has to change when there’s a human in the loop. The AI systems have to change. So it was around 2007 to 2012, when that particular project was underway, that I began to realise a) that AI systems have to change quite significantly when there is a human in the loop, and b) that these changes were actually quite exciting. Until then I didn’t particularly care about humans in the loop myself; I was a lot more into autonomous agents. One of my reasons for not caring about them was that if humans are in the loop, they might actually be cheating, in the sense that they would be helping the system as opposed to the system helping the humans.

Can you elaborate on the cheating aspect?

What that means is that in the beginning, say, given an initial setup and some goals to be reached, you have a planning system which has to come up with a sequence or course of action to achieve those goals. This requires combinatorial search. Basically, you have to consider all possible ways of sequencing actions and figure out which one will actually get to the goal. Typically those search spaces are quite large, and there would be situations where the planners were so slow that the humans would go into the search queue of the planners and help them improve their search. So this is human-AI interaction where humans are helping AI systems, not vice-versa. People at the beginning of computing would essentially do half the work and the computers would do the other half.
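To make the combinatorial search he describes concrete, here is a minimal sketch (our illustration, not from the interview) of a brute-force planner: a breadth-first search over action sequences in an invented toy domain. Real planners prune this space with heuristics rather than enumerating it blindly, which is exactly the part a human “in the search queue” could help with.

```python
from collections import deque

def plan(initial_state, goal, actions):
    """Brute-force breadth-first search over action sequences.

    initial_state: a hashable state (here, a frozenset of facts)
    goal: a function state -> bool
    actions: list of (name, applicable, apply) triples
    Returns the first (shortest) action sequence that reaches the goal.
    """
    frontier = deque([(initial_state, [])])
    seen = {initial_state}
    while frontier:
        state, sequence = frontier.popleft()
        if goal(state):
            return sequence
        for name, applicable, apply_ in actions:
            if applicable(state):
                nxt = apply_(state)
                if nxt not in seen:  # avoid revisiting states
                    seen.add(nxt)
                    frontier.append((nxt, sequence + [name]))
    return None  # goal unreachable

# Hypothetical toy domain: a door that must be opened,
# which first requires unlocking it.
actions = [
    ("unlock", lambda s: "locked" in s,
               lambda s: (s - {"locked"}) | {"unlocked"}),
    ("open",   lambda s: "unlocked" in s and "closed" in s,
               lambda s: (s - {"closed"}) | {"open"}),
]
print(plan(frozenset({"locked", "closed"}),
           lambda s: "open" in s, actions))
# -> ['unlock', 'open']
```

With only two actions this search is trivial, but the frontier grows exponentially with the number of applicable actions per state, which is why real planning domains quickly overwhelm blind search.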

The cheating aspect comes in when what the human is doing is the hard part of the problem and what the computer is doing is the easy part. Then essentially you’re just giving it a stamp – a computer was involved in this enterprise, but really the hard work was all being done by the humans.

Actually, you would be surprised: less than a month and a half ago in Russia, there was a trade show where this very “intelligent” robot was going around. People got a little suspicious, pulled it apart and found there was a human being inside! In fact, a whole bunch of people within the AI community cringe at the popularity of Sophia, the humanoid robot which supposedly shows a lot of intelligent behaviour. It’s essentially a sort of a puppet. It is not really a full-scale AI system. What is actually happening is that there are many layers in which people or programmers are controlling Sophia. With an audience that is willing to suspend its disbelief (that AI can do lots of impressive things), you can easily exploit that.

Why do you think AI has become such a household name?

Well, AI as a discipline started at the Dartmouth conference in 1956. And yet it’s only in the last five or six years that AI has become a household name, and in fact every country is crazy about it. Every ministry in India is running some AI-related initiative. It has reached a level of hype that’s unheard of, and part of it is connected to the fact that while the big early feats of AI, like Deep Blue beating Kasparov in chess, were very impressive, you talked about them for a couple of days and then moved on with your life. They weren’t affecting your life every day. What changed is that AI systems can now do perception well. The way kids demonstrate intelligent behaviour is very different from the route taken by AI. When children come into this world, they can recognize faces. Then they can manipulate objects, demonstrate emotional intelligence, gain social intelligence, and at some point they start showing cognitive intelligence of the kind that we normally mean by intelligence (IQ). AI went the opposite way. In AI, we were doing cognitive intelligence tasks like chess much before we could recognize faces.

More recently in AI, perception – computer vision and speech recognition – started becoming feasible. In 1997, right after Deep Blue beat Kasparov, there were no articles about computers taking over the world. Hollywood made Terminator movies, but there were no articles about doomsday predictions, the ethics of AI, and so on. Now the world has changed, because many people can actually see the advantages of just the perception part, and they are generally keen to believe anything. And of course, there are others willing to exploit this.

What’s the difference between artificial intelligence and autonomy?

Artificial intelligence in general is this field where you try to get non-biological entities to show behaviors that, when shown by biological entities, would be considered intelligent. So it’s a circular definition. Autonomy is actually just a characteristic of a system; an autonomous system essentially means it is not being controlled remotely by anybody. In fact, autonomy and intelligence are orthogonal. A ballistic missile is autonomous. It’s not particularly intelligent. It can autonomously go once you tell it to go. You are not controlling it at all; nobody is controlling a ballistic missile. It just seeks the goal, autonomously goes there and then blows up the place. So autonomy and intelligence are completely different. People tend to be worried about autonomous systems, and then think that somehow AI systems will be autonomous, so they’ll be bad, or that if they are intelligent, they can’t necessarily be bad. Those are orthogonal dimensions. You can be intelligent and choose to be controlled. For example, a system which works in the presence of humans might be autonomous and yet give precedence to the humans’ goals over its own. To me, autonomy and artificial intelligence are essentially orthogonal dimensions, and in fact it’s easier to get autonomy than intelligence.

What do you think is the single biggest challenge in human-computer interaction?

I think in this area, essentially, if you are working with an AI system, you shouldn’t be able to tell the difference between working with the AI system and working with a human team member. It’s like a version of the Turing test. Everything that we thought we were good at – that ship has long sailed. We are not the fastest at anything. Even cheetahs are faster than us. I mean, with bipedal locomotion we are faster than most other animals, but now every little motorcycle is faster than us. So we are not the fastest. We are not the best weightlifters. We are not the tallest… We basically have nothing really left to claim as the best about humanity. But the thing about humanity is general intelligence. We are not the best at any one thing, but we are flexible, and that’s ultimately the goal of AI.

In the end, we will have to start worrying about human privacy overall when you have machines that are generally intelligent. They may not be the best lawyer or the best doctor, but they’ll be just like normal humans. And normal humans are basically very flexible, showing pretty effective behavior in all sorts of scenarios. They can drive without getting into accidents, and can have a conversation without offending people. When an AI system starts doing all these things that humans can do, then all the ethical questions arise. You then have a world where it may be the case that humans find it more fun to interact with AI systems than with other humans.

What’s your take on the doomsday scenario of machines taking over the world?

The taking-over-the-world aspect, as I said, is stupid autonomy operating, and that already exists – thermonuclear bombs exist. The other thing that’s much worse, and what I’m more worried about, is how quasi-intelligent systems can be weaponised by humans. Basically, whether it’s the Russian election scenarios or the Facebook scandals, these are essentially humans weaponizing quasi-intelligent systems. It’s not systems just having goals and saying, hey, let’s get rid of humanity. That makes for great Hollywood movies!

There are some people that I admire and respect, such as Stuart Russell, who are really worried about computers winding up having survival instincts, and that, in their drive to survive, they might unwittingly cause harm to humans. His argument is that any system that optimizes its effectiveness at achieving goals will indirectly learn some survival instincts. There are also some philosophers, like Nick Bostrom, who are worried about this issue of superintelligence and what happens when machines become more intelligent than humans in all areas.

For me, I’m not as worried about it. As somebody said, that’s like worrying about atmospheric pollution on Mars – it is not the most pressing problem right now. To me, the most pressing problem is things like the adversarial use of existing AI technology by malicious human actors.

As a professor you have always used innovative methods of teaching and even have a YouTube channel. Do you foresee traditional classrooms being replaced by machines? 

When Coursera came along, we all assumed that classroom teaching as we know it would go away. In fact, I would make this joke that I was going to become a swimming teacher. Why? Because it’s extremely hard to learn swimming by watching videos alone.

Here is something where I do think AI systems can be very useful: tutoring. If we had the resources, the best way of learning any material is ultimately somebody teaching you and you alone. Not the 200 people in a classroom, because in a 200-person classroom you have to teach to the median, whatever your conception of the median is. That means half the people are bored, the other half are lost, and one person is kind of getting it. To me, one of the biggest applications of AI systems would be personalized tutoring systems. Sitting where we are, I can see either of two things happening – only MOOCs exist, or only personalized tutors exist. Personalized ‘tutoring’ typically happens in research. Essentially that’s what we do with PhD or research students. That’s completely personalized. But it’s not tutoring, it’s more collaboration. You’re working one-on-one on a single problem, and it’s not even clear who is teaching whom – oftentimes my students teach me rather than the other way around! And that’s the way the best research typically winds up being.

Can you tell us about your association with IIITH so far?

Honestly, I think I had known of IIITH almost since Rajeev Sangal was the director, but I became aware of it a lot more once P.J. Narayanan joined here. He was my classmate at the University of Maryland. He used to work at the Centre for AI and Robotics in Bangalore earlier, and then he came here. So that’s how I became a lot more aware of it, and I also had students from IIITH. I think in combining research with teaching, IIITH does better than at least some of the IITs. I mean, IITs are essentially mostly teaching institutions, and of course IITs are more general – they’re not just Computer Science. IIITH is mostly Computer Science, but they seem to have done this in a more interesting way, giving research experience to all the students. This is something that sets apart the few students from IIITH that I’ve worked with as grad students at Arizona State University (ASU).

Sarita Chebbi is a compulsive early riser. Devourer of all news. Kettlebell enthusiast. Nit-picker of the written word especially when it’s not her own.
