Can robots ever have a conscience?

I vividly remember the first time I read I, Robot by Isaac Asimov. The book was published in 1950, but it collects a series of short stories written between 1940 and 1950, decades before computers and other digital devices became mainstream. And for me, one of the most interesting things about I, Robot was Asimov’s Three Laws of Robotics, which first appeared in a short story in 1942:

1. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov later added:

0. Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

These laws have inspired many other science fiction writers and have also informed debates about the ethics of artificial intelligence. But compliance with simple, binary laws does not replicate the human conscience, nor does it work in practice.

The most obvious problem with these laws is illustrated by the Trolley Problem, a thought experiment in ethics. A runaway trolley is hurtling towards a group of five people tied to the track, but you have the opportunity to pull a lever to send the trolley down another track where only one person is tied. Do you do nothing and allow five people to die? Or do you pull the lever, divert the trolley and, in effect, kill the one person? If you’re a robot or a piece of software, whatever you do or don’t do, you’ll violate the First Law.
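To make that failure concrete, here is a minimal Python sketch that encodes the Three Laws as ordered, binary checks and applies them to the trolley dilemma. The `Action` class and helper names are my own illustrative inventions, not part of any real robotics framework; the point is simply that every available choice, including inaction, trips the First Law, so a literal rule-follower has no compliant option.

```python
# Purely illustrative: the Three Laws as ordered, binary rules.
# All names here (Action, violated_law) are hypothetical.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Action:
    name: str
    humans_harmed: int            # people harmed if this action is taken
    disobeys_order: bool = False
    endangers_robot: bool = False


def violated_law(action: Action) -> Optional[int]:
    """Return the highest-priority law the action violates, or None if compliant."""
    # First Law: may not injure a human, or through inaction allow harm.
    if action.humans_harmed > 0:
        return 1
    # Second Law: obey orders, unless that conflicts with the First Law.
    if action.disobeys_order:
        return 2
    # Third Law: protect own existence, unless that conflicts with Laws 1 or 2.
    if action.endangers_robot:
        return 3
    return None


# The trolley dilemma: both choices harm at least one person.
options = [
    Action("do nothing (trolley hits five)", humans_harmed=5),
    Action("pull the lever (trolley hits one)", humans_harmed=1),
]

for option in options:
    law = violated_law(option)
    verdict = f"violates Law {law}" if law else "compliant"
    print(f"{option.name}: {verdict}")

# Output:
# do nothing (trolley hits five): violates Law 1
# pull the lever (trolley hits one): violates Law 1
```

Whatever weighting or precedence you bolt on, a rule set that only answers "violation or not" cannot express the trade-off the dilemma demands.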

In experiments with humans, there are many nuances in our responses to this dilemma, but overall most people choose to sacrifice one life for the many. However, if they are told that the group of five are strangers and the single person is their best friend, what do you think they will do then?

Over the last few weeks, months and years we’ve read about people being killed or harmed as a result of software written by humans. A study at MIT has estimated that, as a result of software written by Volkswagen to cheat emissions tests, 1,200 people in Europe will die prematurely. Over the last few months, two Boeing 737 MAX 8 airliners have crashed, killing a total of 346 people on board. Software problems were reportedly suspected after the first Lion Air crash in 2018, but was enough done to fix them? And just last week, Microsoft was accused of working with the Chinese military to develop “disturbing” face recognition AI for China’s surveillance network, with the potential for human rights abuses.

Today, robots do not have what we call “moral agency”, meaning they do not have a sense of right and wrong and cannot be held accountable for their actions. Unless we can create robots who have consciousness and the ability to feel and understand emotions, robots will never have a moral conscience. But if we do succeed, do we then have the right to tell them what is right or wrong, and if they disagree, do we have the right to throw a switch and kill them?

And for those business leaders who have decided to invest billions in artificial intelligence, have you considered how much and where you should still be investing in human cognition and conscience so that we do not lose our own sense of moral agency?

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”

ISAAC ASIMOV