The quote below, from an article in the Bulletin of the Atomic Scientists, a pro bono client I've had the honor of representing, hit home for me. The author, Zachary Kallenborn, recalls an incident in 1983 when the USSR's computer-driven early warning system concluded, with "highest confidence," that the United States had launched a preemptive nuclear attack. Had the computer's conclusion been believed, nuclear annihilation might well have followed; only the intervention of a Soviet duty officer, who doubted the alert, averted it. In truth, the system had mistaken the Sun's reflections on clouds for missiles.
Read his full article and you'll understand why we all need to worry about AI applied to weapons of mass destruction. It's very scary. But the problem extends beyond military applications, and while less threatening elsewhere, it increasingly affects our lives every day.
As Kallenborn points out, "Machine learning-based artificial intelligences—the current AI vogue—rely on large amounts of data to perform a task. ... But in the real world, data may be biased or incomplete in all sorts of ways. For example, one hiring algorithm concluded being named Jared and playing high school lacrosse was the most reliable indicator of job performance, probably because it picked up on human biases in the data."
Let's fast forward 40 years to 2022 and think about where AI can similarly go wrong despite our growing love affair with it and the lack of any meaningful restraints on its development. If you doubt that, read Jacob Ward's new book, The Loop: How Technology Is Creating a World Without Choices and How to Fight Back.
AI is all around us. It has many valuable applications that make all our lives easier. At least that's what we were assured in 1993 when CERN put the World Wide Web software in the public domain and launched the web as we know it. Now we can get instant answers to just about any question or take delivery of millions of products with the click of a mouse. But we also have serious societal problems with misinformation, dangerous content, and the "programming" of our thought processes. I explore those downsides in two of my novels, Dark Data: Control, Alt, Delete and Dragon on the Far Side of the Moon. My books are fictional works. The conclusions and warnings of Zachary Kallenborn and Jacob Ward are real.
A lot of very smart people, including Stephen Hawking, have warned about the downsides of AI. The question is whether those bent on applying it with abandon are heeding those warnings. I think not.