Henry Kissinger spent much of his career thinking about the dangers of nuclear weapons. But at 99, the former secretary of state says he has become “obsessed” with a very modern concern — how to limit the potential destructive capabilities of artificial intelligence, whose powers could be far more devastating than even the biggest bomb.
Kissinger described AI as the new frontier of arms control during a forum at Washington National Cathedral on Nov. 16. If leading powers don’t find ways to limit AI’s reach, he said, “it is simply a mad race for some catastrophe.”
The warning from Kissinger, one of the world’s most prominent statesmen and strategists, is a sign of the growing global concern about the power of “thinking machines” as they interact with global business, finance and warfare. He spoke by video connection at a cathedral forum titled “Man, Machine, and God,” which was this year’s topic in the annual Nancy and Paul Ignatius Program, named in honor of my parents.
Kissinger’s concerns about AI were echoed by two other panelists: Eric Schmidt, former chief executive of Google and chairman of the congressionally appointed National Security Commission on Artificial Intelligence, which issued its report last year; and Anne Neuberger, the Biden administration’s deputy national security adviser for cyber and emerging technology.
The former secretary of state cautioned that AI systems could transform warfare just as they have chess or other games of strategy — because they are capable of making moves that no human would consider but that have devastatingly effective consequences. “What I’m talking about is that in exploring legitimate questions that we ask them, they come to conclusions that would not necessarily be the same as we — and we will have to live in their world,” Kissinger said.
“We are surrounded by many machines whose real thinking we may not know,” he continued. “How do you build restraints into machines? Even today we have fighter planes that can fight … air battles without any human intervention. But these are just the beginnings of this process. It is the elaboration 50 years down the road that will be mind-boggling.”
Kissinger called on the leaders of the United States and China, the world’s tech giants, to begin an urgent dialogue about how to apply ethical limits and standards for AI.
Such a conversation might begin, he said, with President Biden telling Chinese President Xi Jinping: “We both have a lot of problems to discuss, but there’s one overriding problem — namely that you and I uniquely in history can destroy the world by our decisions on this [AI-driven warfare], and it is impossible to achieve a unilateral advantage in this. So, we therefore should start with principle number one that we will not fight a high-tech war against each other.”
U.S. and Chinese leaders might start a high-tech security dialogue, Kissinger suggested, with an agreement to “create at first relatively small institutions whose job it will be to inform [national leaders] about the dangers, and which might be in touch with each other on how to ameliorate” risks. China has long resisted nuclear arms control negotiations of the sort that Kissinger conducted with the Soviet Union during his years as national security adviser and secretary of state.
U.S. officials say the Chinese won’t discuss limiting nuclear weapons until they have achieved parity with the United States and Russia, whose weapons have been capped by a series of agreements starting with the 1972 SALT treaty, negotiated by Kissinger.
The world-changing power of AI has become a primary concern for Kissinger in his late 90s, with Schmidt as his guide. The two co-wrote a book last year with MIT professor Daniel Huttenlocher titled “The Age of AI: And Our Human Future,” which described the opportunities and dangers of the new technology.
Kissinger’s first major public comment on AI was a 2018 essay in the Atlantic magazine headlined “How the Enlightenment Ends.” The article’s subtitle summarized its chilling message: “Philosophically, intellectually — in every way — human society is unprepared for the rise of artificial intelligence.”
Kissinger told the cathedral audience that for all the destructiveness of nuclear weapons, “they don’t have this [AI] capacity of starting themselves on the basis of their perception, their own perception, of danger or of picking targets.”
Asked whether he was optimistic about the ability of humanity to limit the destructive capabilities of AI when it’s applied to warfare, Kissinger answered: “I retain my optimism in the sense that if we don’t solve it, it’ll literally destroy us. … We have no choice.”