And what can we learn from the wisdom of the ages?
A robot develops a will of its own and sets out to exterminate the human race. That was the apocalyptic premise of Terminator. Will it still be science fiction a few years from now?
Happily, the answer is yes! Recent developments in artificial intelligence haven’t broken through the sentience barrier yet. However, military technology is progressing at a frightening pace, and automated weapons raise many ethical issues. What happens if a drone equipped with a smart bomb hits the wrong target? Or is hacked? Who could be held accountable? The manufacturer? The person controlling the drone? The hacker?
Patricia Gautrin, a Ph.D. student in philosophy, researcher at Algora Lab and author of the book PAUSE : pas d’AI sans éthique, has been considering how actions taken before such systems are deployed can guide the decisions automated weapons make and keep them from violating human rights. She is writing her thesis on the topic under the supervision of Marc-Antoine Dilhac, a professor of philosophy in the Faculty of Arts and Science.
Proliferation of kamikaze drones
Kamikaze drones that explode when they reach their target are multiplying. Examples include Iran’s Shahed-136, the United States’ Switchblade and Russia’s Lancet. “They’re all equipped with smart bombs, smart cameras and laser guidance systems,” Gautrin explained, “and they contain 3 to 40 kilograms of explosives.”
The drones are programmed to recognize a mass—a group of soldiers, a row of tanks, military infrastructure—and hit it. They are robots and carry out the commands they have been given. “But mistakes do happen,” said Gautrin. “In Ukraine, a school was mistaken for dangerous infrastructure.”
Using deep learning to teach robots morality
What can be done to avoid tragic mistakes? Program the drones differently? Gautrin argues that whenever there is a sliver of doubt, a drone should follow these steps before acting:
1. Halt the process.
2. Call in human intelligence.
3. If this is not possible, rely on the virtues the robot has learned.
Step 3 is quite a challenge: How can a robot be taught to self-regulate and behave morally? Gautrin proposes using set theory and deep learning. The goal is to factor in the multiple variables on which the decision must be based: the specific situation, past experience, frames of reference, etc.
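The three-step safeguard can be sketched in code. This is a purely illustrative toy, not anything Gautrin or any real system has published: the function names, confidence scores and thresholds are all invented assumptions.

```python
# Illustrative sketch of the three-step safeguard described above.
# All names, scores and thresholds are hypothetical; nothing here
# reflects a real weapons system or Gautrin's actual formalism.

def decide(target_confidence: float, human_available: bool,
           virtue_score: float) -> str:
    """Return the action a doubting drone controller might take."""
    CONFIDENCE_THRESHOLD = 0.99  # any sliver of doubt triggers the protocol
    VIRTUE_THRESHOLD = 0.9       # the learned-virtue check must clearly pass

    if target_confidence >= CONFIDENCE_THRESHOLD:
        return "proceed"
    # Step 1: halt the process (we fall through to the checks below).
    # Step 2: call in human intelligence, if possible.
    if human_available:
        return "defer to human"
    # Step 3: otherwise, rely on the virtues the robot has learned.
    if virtue_score >= VIRTUE_THRESHOLD:
        return "proceed"
    return "abort"
```

The point of the sketch is the ordering: human judgment always takes priority, and the machine's own "virtue" check is only a last resort when no human is reachable.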
“When people make a decision, they weigh the options, find the right balance, and then act,” she said. “They refer to their empirical experience as well as their frame of values.”
To train an algorithm to act morally, several frames of reference—political, religious and other—will have to be taken into account.
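One simple way to picture "taking several frames of reference into account" is as a weighted average of how acceptable an action looks within each frame. The frames, scores and weights below are invented for illustration only and are not drawn from Gautrin's work.

```python
# Hypothetical illustration of weighing an action against several
# value frames. The frame names, scores and weights are invented.

def weigh_action(frame_scores: dict[str, float],
                 weights: dict[str, float]) -> float:
    """Weighted average of an action's acceptability across value frames.

    Scores and weights are in [0, 1]; higher means more acceptable.
    """
    total_weight = sum(weights[frame] for frame in frame_scores)
    return sum(frame_scores[frame] * weights[frame]
               for frame in frame_scores) / total_weight

# An action that looks acceptable in only one frame scores poorly overall.
frame_scores = {"humanitarian": 0.2, "legal": 0.4, "military": 0.8}
weights = {"humanitarian": 0.5, "legal": 0.3, "military": 0.2}
overall = weigh_action(frame_scores, weights)  # 0.38 on this toy data
```

A real system would face the much harder problems the article points to: where the frames come from, who sets the weights, and whether acceptability can be scored at all.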
Addressing moral dilemmas
“The decision to take a human life should never be delegated to a machine.” So said a pledge published by the Future of Life Institute in 2018. It was signed by more than 200 global organizations, including Google DeepMind and Mila, and by individuals ranging from Elon Musk to Yoshua Bengio. The question now is how to translate it into practice. What moral code should robots be taught?
“So far, we’ve been approaching the moral dimension from a consequentialist and individualist perspective,” said Gautrin. “But we seem to be on the wrong track: what is good for one person may not be good for another. What if we went back to the moral wisdom of antiquity? The ancients looked to the virtues and the common good as the guiding principles to living ethically in a community. Their empirical philosophy, rooted in experience, could be the gateway to moral learning.”
Gautrin supports the “ethics by design” approach, which incorporates ethical principles into the programming of robots from the outset. The idea is to act now, while there is still time, before letting machines make our decisions for us.