COMMENTARY: AI could make nuclear conflict much more likely

By Berl Falbaum
Much has been written about artificial intelligence (AI) in the last few weeks, which led me to wonder about the risks and dangers of its role in a potential war, particularly given the threat of a nuclear holocaust.
I am a layman, versed neither in computers nor in military matters, so I posed the following question to an expert:
“What are the inherent risks and dangers of AI to the world?” Here is the part of the 400-word answer I received that deals with AI in a potential armed conflict:
“Autonomous weapons: the development of autonomous weapons systems powered by AI raises ethical concerns. These weapons can potentially operate without human control, leading to unpredictable or disproportionate use of force and undermining human moral judgment.”
I received the answer in less than a minute from the expert: an AI program (chat.openai.com). I still have goosebumps.
As if we needed any more evidence that we are a self-destructive species—the only one on the planet. We already face doomsday scenarios from the destruction of the environment and we have some 15,000 nuclear warheads—the number grows continually—which are perilously sitting in silos. It would only take a hundred or so to wipe humans off the face of the planet.
My expert’s analysis was confirmed by two humans who wrote on the subject the same week.
Ross Andersen, writing under the headline “Never Give Artificial Intelligence the Nuclear Codes” in The Atlantic, observed: “AI offers an illusion of cool exactitude, especially in comparison to error-prone, potentially unstable humans. But today’s most advanced AIs are black boxes; we don’t entirely understand how they work. In complex, high-stakes adversarial situations, AI’s notions about what constitutes winning may be impenetrable.”
He points out that in some computer “games” AIs were endowed with the power to decide whether to enter into a nuclear exchange.
He concludes: “We cannot encrust the Earth’s surface with automated nuclear arsenals that put us one glitch away from apocalypse.”
In a New York Times essay titled “To See AI’s Greatest Dangers, Look to the Military,” Peter Coy adds that the threat is growing rapidly because there is an international arms race in militarized AI.
He writes: “The intersection of artificial intelligence that can calculate a million times faster than people can and nuclear weapons that are a million times more powerful than any conventional weapon is about as scary as intersections come.”
He points out that Patriot missiles can already fire without human intervention “when overwhelmed with incoming targets faster than a human could react.”
He adds: “What makes an arms race in artificial intelligence so frightening is that it shrinks the role of human involvement.”
Both call for regulations to control AI and slow its development. Indeed, several executives of the companies that created AI all but begged Congress in recent testimony to enact needed safeguards.
Coy says, “Life and death decisions should not be delegated to a machine. It’s time for new international law to regulate these technologies.”
The major obstacle: We cannot stop unless others, such as Russia and China, stop as well. And no one can foresee that happening.
The argument, constantly repeated, is that if we had halted development of the A-bomb, Hitler, who was working on atomic bomb technology, might very well have ruled the world.
And world history is replete with figures like Hitler, Putin, Stalin, and numerous others who were not and are not guided by moral principles.
(Besides addressing its role in military operations, the message I received from AI also discussed job displacement, privacy and surveillance, security threats, lack of accountability, unemployment, and economic disruption.)
On the military, my expert concluded: “Addressing ... risks and dangers requires a multidisciplinary approach involving policymakers, researchers, industry leaders and society at large. Striking the right balance between AI advancement and responsible deployment is crucial to harness its potential benefits while mitigating potential harm.”
So, I asked my expert if “it” could foresee the world uniting on this issue. The reply:
“As an AI language model, I don’t possess the ability to predict the future with certainty ... The global consensus on this matter is still evolving, and it is uncertain how the world will collectively address the control of AI for military use in the future.”
But it offered, “It is crucial for policymakers, researchers, and society as a whole to engage in ongoing discussions considering both the potential benefits and risks of AI in military applications to ensure responsible and ethical use of this technology.”
It is fascinating that the AI message is quite similar to the warning issued by the Natural Resources Defense Council (NRDC), which states that to give the Earth a chance, just a chance, at surviving the looming environmental catastrophe, “... Every government at every level—national, state, city, town—every business sector, every private enterprise, every individual must be in alignment.”
I wonder if the NRDC had consulted with my new-found AI advisor.
————————
Berl Falbaum is a veteran political columnist and author of 12 books.
Posted June 09, 2023