Henry Kissinger, the former Secretary of State, has an entirely new field of interest at the age of 98: artificial intelligence. He became intrigued after Eric Schmidt, then Google’s executive chairman, convinced him to attend a lecture on the subject. The two have now teamed up with Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, to write a new, bracing book, The Age of AI, about the implications of the rapid rise and deployment of artificial intelligence, which they say “augurs a revolution in human affairs.” The book argues that artificial intelligence processes have become so powerful, so seamlessly enmeshed in human affairs, and so unpredictable that without some forethought and management, the kind of “epoch-making transformations” they will deliver may send human history in a dangerous direction.
TIME interviewed Schmidt and Kissinger to discuss their vision of the future.
Dr. Kissinger, you’re an elder statesman. What made you believe that AI was important enough for you to study?
Kissinger: When I was an undergraduate, I wrote a 300-page thesis—after which theses of that length were never permitted again—called “The Meaning of History.” The subject of the meaning of history, and where we are going, has occupied my life. The technological miracle doesn’t fascinate me so much; what fascinates me is that we are moving into a new period of human consciousness which we don’t yet fully understand. This new consciousness will have a different perception of reality, much as the shift from the medieval period to the Age of Enlightenment moved the West from a religious view of the world to one based on reason. Only this time, it will happen faster.
There’s one important difference. The Enlightenment arose in a world that was conceptually based on faith. Galileo and the other pioneers of Enlightenment thought were challenging a dominant philosophy, so you can trace the evolution of their thinking against it. In our world, there is no dominant philosophical view. The technologists are free to run wild. They can develop world-changing things, but there’s nobody there to say, ‘We’ve got to integrate this into something.’
You met Eric when he invited you to speak at Google, and you told them that you thought Google was a danger to civilization. What made you feel that way?
Kissinger: I didn’t think it was right for one company to be the sole supplier of information, and to be able to adjust what it presented according to its analysis of what was most popular or most plausible. Truth became relative. That was about all I knew at the time. Eric then invited me to a meeting on their algorithms, to show me that the results were not random—that the selection of material was based on thought and analysis. That didn’t obviate my fear of one private organization having that power. But that’s how I got into it.
Schmidt: His visit to Google got us thinking. We began talking, and Dr. Kissinger shared his concern about technology’s impact on human existence. He also said that technologists don’t understand their own history and impact. That, I feel, is completely correct.
Given that many people feel the way that you do or did about technology companies—that they are not really to be trusted, that many of the manipulations they have used to improve their business have not necessarily been great for society—what role do you see technology leaders playing in this new system?
Kissinger: Technology companies, I believe, have helped us enter a new stage of human consciousness, much as the Enlightenment thinkers did in the shift from religion to reason. The technologists are showing us how to relate reason to artificial intelligence. It’s a different kind of knowledge in some respects, because with reason—the world in which I grew up—each piece of evidence supports the others. Amazingly, artificial intelligence can draw a conclusion that is right, but you don’t know why. That’s a totally new challenge. So in some ways their inventions are dangerous; in others, they advance culture. Would we be better off if this technology had never been developed? I don’t know. But now that it exists, we must understand it. It cannot be eradicated. Too much of it is already part of our daily lives.
What do you consider the main geopolitical impact of artificial intelligence’s growth to be?
Kissinger: I don’t think we have examined this thoughtfully yet. If you can imagine a war between China, the United States and other countries, you have artificial intelligence weapons. Like all artificial intelligences, they are more efficient at carrying out what they plan. But they may also be effective at achieving what they conceive their objective to be. So if you say, ‘Target A is what I want,’ they might decide that something else meets the criteria even better. So you’re in a world of slight uncertainty. Secondly, since nobody has really tested these things in broad-scale operations, you can’t tell exactly what will happen when AI fighter planes on both sides interact. You are then in a world of potentially total destructiveness and substantial uncertainty as to what you’re doing.
World War I was almost like that in the sense that everybody had planned very complicated scenarios of mobilization, and they were so finely geared that once the thing got going, they couldn’t stop it, because stopping would put them at a disadvantage.
So your concern is that AIs may be too powerful, and that we don’t exactly know why they’re doing what they’re doing?
Kissinger: I have studied what I’m talking about for most of my life; this I’ve only studied for four years. Deep Think taught itself chess in four hours of playing against itself, and it then played a style of chess no one had ever seen before. The best existing computers can only sometimes beat it. The same thing is happening in other fields. Our world isn’t prepared for that.
The book argues that because AI processes are so fast and so satisfying, there’s some concern that humans will lose the capacity for thought, conceptualization and reflection. What is the answer?
Schmidt: So, again, using Dr. Kissinger as our example, think about how much time he had to do his work 50 years ago, in terms of conceptual time—the ability to think, to communicate and so forth. What is the major narrative of the past 50 years? The compression of time. We’ve gone from reading books, to having books described to us, to having neither the time to read them, nor to conceive of them, nor to discuss them, because there’s always another thing coming. In my view, the speed of information is accelerating beyond human capacity. It’s overwhelming, and people complain about it: they’re addicted, they can’t think, they can’t have dinner by themselves. I don’t think humans were built for this. It can elevate cortisol and cause other side effects. There is an overload of information and an inability to deal with it all.
What I have said—and it’s in the book—is that you’re going to need an assistant. So in your case, you’re a reporter, you’ve got a zillion things going on, and you’re going to need an assistant in the form of a computer that says, ‘These are the important things going on; these are the things to think about,’ that searches the records, that makes you even more effective. The same is true for a physicist, a chemist, a writer, a musician. But the problem is that you then become very dependent on this AI system. In our book we ask: who controls that AI system? What about its biases? How is it regulated? This concern is particularly important when it comes to young people.
Your book weighs both the positive and negative sides of AI. Is that what you are getting at?
Kissinger: That was my point at Google. Humanity has always assumed that technological advancement is beneficial and manageable. This one is not automatically so. It may be manageable, but there are aspects of managing it that we haven’t studied at all, or not sufficiently. So I remain worried. But I’m opposed to saying we therefore have to eliminate it. It’s here now. What we believe is that there needs to be a philosophy for the conduct of this research.
How would you suggest developing that philosophy? What’s the next step?
Kissinger: You need a few small groups of people who ask the questions. When I was a graduate student, nuclear weapons were new. At that time, a number of concerned professors from Harvard, MIT and Caltech met every Saturday to discuss the matter: How do we deal with it? What can we do? They came up with the idea of arms control.
Schmidt: We need a comparable process. It won’t happen in one place; it will be a set of such initiatives. I hope to help organize them after the book comes out, provided the book gets a positive reception.
The first point is that these things are too complex to be handled by the technologists alone. It’s also unlikely that they will simply get regulated correctly. You need to create a philosophy. I can’t say it as well as Dr. Kissinger, but you need a philosophical framework, a set of understandings of where the limits of this technology should lie. My experience in science is that you only achieve this by getting the scientists and the policy-makers together. The same is true of biology.
Presumably this has to have an international dimension. The U.N., or who?
Schmidt: The way things usually work is that small, elite groups think about these issues, and then they get joined together. For example, the Oxford AI and ethics strategy groups are quite impressive. You’ll find little pockets of them all over the globe. There are also a number that I’m aware of in China. But they’re not stitched together yet; this is the beginning. So if you believe what we believe—which is that in a decade this stuff will be enormously powerful—we’d better start now to think about the implications.
I’ll give you my favorite example, which is in military doctrine. Everything’s getting faster. The thing we don’t want is weapons that are automatically launched, based on their own analysis of the situation.
Kissinger: Because an attack may come faster than the human brain can analyze it, so it’s a vicious circle. You have an incentive to make it automatic, but you don’t want to make it so automatic that it can act on a judgment you might not make.
Schmidt: And yet there’s no discussion among the major nations on this matter, even though it’s the obvious problem. Many discussions are held about matters that move at a human pace. But what happens when all of this gets too fast? Unless we agree on limits to the speed of these systems, things could become very unstable.
Some people might find that message difficult to digest coming from you. Google’s success was built on the speed at which information could get to users. Many would say you helped create this problem.
Schmidt: I did; I am guilty. Together with others, we’ve built platforms that are very, very fast. And sometimes they’re faster than what humans can understand. That’s a problem.
Is it even possible to get ahead of a technology? Haven’t we always responded after it arrives? It’s true that we don’t understand what’s going on inside AI, but people initially didn’t understand why the light came on when they flipped the switch—and most didn’t care.
Schmidt: What concerns me is the misuse of these technologies. I did not expect the internet to be used to influence elections. It never occurred to me. I was wrong. I did not expect the internet to power anti-vax movements in such terrible ways. I missed that too. We’re not going to miss the next one. We’re going to call it ahead of time.
Kissinger: What would you have done if you had known?
Schmidt: I don’t know. We could have done something different. Had I known this 10 years earlier, we might have built different products. We would have lobbied differently, given different speeches. We could have raised the warning before it occurred.
The reason I don’t agree with your line of argument is that it’s fatalistic. Technology is usually predictable within 10 years, and certainly within five. In our book we tried to predict what will happen. How people deal with it is up to them. These are the problems I would like people to solve. The book has a brief discussion of misinformation, and that’s only the beginning: you solve it by first identifying the source of the information cryptographically, and then ranking the results so the best information comes first.
Kissinger: I don’t know whether anyone could have foreseen how politics would change as a result of it. That may just be human destiny, or human tragedy: the punishment is that people must find the solution themselves. I had no prior motivation to participate in discussions of technology. Eric drew me into it in my 90s. Every three to four weeks, Eric would hold a small seminar of four or five people. We discussed these issues—many of the questions you have raised—to figure out what could be done. At first it was just discussion; then, at the end of that period, we invited Dan Huttenlocher, because he’s technically so competent, to see how we could write it down. We met every Sunday afternoon for a year. This is not a fad for us. It’s a serious set of concerns.
Schmidt: So what we hope we have done is lay out the problems for the groups that will figure out how to solve them. And there are a number of them: the impact on children, the impact on war, the impact on science, the impact on politics, the impact on humanity. We want to emphasize that these initiatives must begin now.
Let me ask each of you a question about the other. Dr. Kissinger, if someone searches your name 50 years from now, what would you like the first result to say?
Kissinger: That I made a contribution to the conception of peace. I’d like to be remembered for some things I actually did as well. But if you ask me to sum it up in one sentence: if you look at what I’ve written, it all works back toward that same theme.
And Mr. Schmidt, what would you like yours to say?
Schmidt: Given American corporate history, Google is unlikely to still be in existence in 50 years. The tech industry is a simplified version of human nature; I was raised in it. We’ve gotten rid of all the pesky hard problems, right? I hope it will say that I bridged technology and humanity more profoundly than anyone else in my generation.