Friday, July 31, 2009 | The New York Times last week published a somewhat disturbing report on a private conference of robotics experts that was held in February at the Asilomar Conference Grounds in Monterey Bay.

Scientists attending the conference were “impressed and alarmed” at recent advances in artificial intelligence. They spoke of how Predator drones, which are manufactured by San Diego-based General Atomics, are becoming more and more adept at killing, and how robots are taking over an increasing number of human jobs.

The advances have rekindled a long-standing debate over whether we are on the verge of an “intelligence explosion,” in which smart machines begin reproducing themselves, as in James Cameron’s Terminator movies. And talk has turned to whether the time has come for artificial intelligence to be governed.

Jason Fleischer, one of the top young researchers in this field, is an associate fellow at San Diego’s Neurosciences Institute. Fleischer spends his days at the institute’s W.M. Keck Laboratory running smart robots through mazes, much as scientists have been doing with rats for more than a century.

In machines, he has been able to duplicate some of the activity that happens in a real nervous system, or, in his words, get robots to “do stuff” on their own. In the process, he has devoted most of his work in recent years to understanding memory.

Fleischer did not make it up to the Asilomar conference, but he is steeped in the debate around artificial intelligence. He took some time away from his robots this week to provide his take on the issue, and talk a bit about his research.

There seems to be concern that we are entering a period of “intelligence explosion,” the idea that we have reached a new level of artificial intelligence.

Definitely, you have to think of these things well in advance, before they become real issues. And, to be fair, I think our culture has been thinking about these issues for a long time. I mean, Mary Shelley wrote Frankenstein in the 1800s. That is the same kind of idea: what happens if we can create something in our own image, and what are the issues?

And more recently, people have been talking about what the real ethical issues are in serious philosophical and scientific discussions. This conference at Asilomar is the most recent of those.

It always has to be a dialogue: you need to understand both what is important from an ethical standpoint and what the public reaction will be. You could be doing something that is ethical, but the public doesn’t understand it or fears it. That is a huge issue and can totally disable an entire research field. So science communication is just as important as discussing the ethics in these kinds of situations.

So I gather you do not think that we’ve reached the point where artificial intelligence needs to be somehow restrained?

I think it is the right time to talk about what we should do with it; I don’t think it is an imminent problem. There are people who do believe it is. Raymond Kurzweil, for instance, who I believe was one of the organizers of this conference, has argued for a number of years that this is an imminent issue.

My view is that it is not an imminent issue, but we need to be discussing it. Artificial intelligence, and various other disciplines that deal with the same thing — machine learning, for instance — have been trying to address learning in artificial systems since the 1950s, probably, and it has been a long, hard road. What you find is that it is fairly easy to design a system that can adapt its behavior and learn something limited and well defined. If you have a particular niche, you can design a system to fill it.

What is much harder, and what no one has been able to deal with yet, is flexible general intelligence: the kinds of infinite discriminations that you and I make when we walk down the street, look around, and make sure we aren’t going to be hit by a car when we cross the road. Applying learned knowledge to new realms is the thing that has never been demonstrated by anybody.

Now, people like Kurzweil have an optimistic view. They believe there will be a very quick transition: that we are going slow, slow, slow, and suddenly everything is going to take off. At that point it is already too late. That is why I agree with him that it is time to think about it; I just don’t share his optimism that the quick transition is coming so soon.

How many decades away do you think we are from something like that?

I find that kind of hard to predict, and I’m a little prejudiced because this is what my work is.

I think we are not going to make that progress until we have a better understanding of how the nervous system performs these same kinds of functions — and they are terribly complex, let me tell you.

Real nervous systems don’t work the way most of the algorithms and computer programs that artificial intelligence and machine learning people are building do. … I’m very sure they are generally not working by the same mechanisms, and I think it is those mechanisms that are really going to lead to the ability to produce machines that are truly, flexibly intelligent.

You say we are a long way from this so-called flexible intelligence in machines. What is just around the corner?

What we can do now is make these systems adapt to anything we can define, and what we are getting better and better at is defining those niches.

I think there are absolutely going to be a lot of interesting things happening as we see more intelligent machines aiding us and getting into our everyday lives.

We’re getting better both at the design of intelligent machines and at designing them for the right niches … as we understand more about how we can interact with these things — how we can design a personal assistant that can handle all our voicemail for us, or whatever.

That is part of where I think this amazing upswelling of intelligent machines is going to happen, as we get better at doing that kind of stuff. The machines are getting much better; they are more flexible and operate with much more realistic reaction times than ever before. All your mail, for example, is almost certainly sorted by a machine that reads the address, and has been for a decade or more.

There will be a lot more of the kinds of services you see on the web, where you are going to have intelligent assistants pulling data for you in a variety of ways. Those things are going to happen, but in my opinion it’s going to be a while before we have anything that even approaches the flexible intelligence of a dog, let alone C-3PO.

— Interview by DAVID WASHBURN
