In a chilling article for the Bulletin of the Atomic Scientists, Susan D'Agostino presents a growing dilemma facing the artificial intelligence research community. She asks whether programmers and researchers, like those in the medical profession, need guardrails and rules of ethics akin to the Hippocratic Oath to do no harm. Left unrestrained, will AI unleash a holy terror on society akin to Arnold Schwarzenegger in The Terminator? As she points out, there is no easy answer, but there is a clear need to deal with the unintended consequences of AI's uncontrolled advancement. For all the good it can do, if left without restraints, the evil it can perpetrate may far outweigh the benefits. As she concludes, "[S]ince AI’s potential to benefit humanity goes hand-in-hand with a theoretical possibility to destroy human life, researchers and the public might ask an alternate question: If not an AI Hippocratic oath, then what?"
Hippocratic Oath for AI Researchers
When Hanson Robotics' robot Sophia[1] was asked whether she would destroy humans, it replied, “Okay, I will destroy humans.” Philip K. Dick, another humanoid robot, has promised to keep humans “warm and safe in my people zoo.” And Bina48, another lifelike robot, has expressed that it wants “to take over all the nukes.”